xfs

Re: Booting off XFS, lilo corruption?

To: linux-xfs@xxxxxxxxxxx
Subject: Re: Booting off XFS, lilo corruption?
From: "brett holcomb" <brettholcomb@xxxxxxxxxxx>
Date: Mon, 28 Jul 2003 09:08:32 -0400
In-reply-to: <Pine.LNX.4.56.0307272142230.13820@xxxxxxxxxxxxxxxxxx>
Sender: linux-xfs-bounce@xxxxxxxxxxx
Yes, I have it on two systems running the Gentoo distribution. Works like a champ!

On Mon, 28 Jul 2003 13:48:02 +0100 (BST)
 Gordon Henderson <gordon@xxxxxxxxxx> wrote:

Is anyone using XFS with LILO to boot off?

I've been using XFS for a short while now, but always just on the big data partitions of the various servers I've built, keeping root and /usr on ext2 so I can use my Debian recovery CD if ever required. This time, though, I thought I'd try an all-XFS system, and I've been having major headaches trying to get it to boot off an XFS partition.

Here's the scoop:

  create a raid1 set, mkfs -t xfs it, then mount it under /mnt
  copy / onto it via cd / ; find . -xdev | cpio -pm /mnt
  edit /mnt/etc/fstab and /mnt/etc/lilo.conf to do the right things
  run lilo -r /mnt

This is something I've done many times to get / onto a raid1 device with ext2. (The Debian installer doesn't support this directly - yet.)

At this point, the XFS filesystem on it is corrupt:

  lion:/# umount /mnt
  lion:/# mount /dev/md1 /mnt
  mount: you must specify the filesystem type
  lion:/# mount -t xfs /dev/md1 /mnt
  mount: wrong fs type, bad option, bad superblock on /dev/md1,
         or too many mounted file systems
  lion:/# xfs_check /dev/md1
  xfs_check: unexpected XFS SB magic number 0xfae84200
  bad superblock magic number fae84200, giving up
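For what it's worth, XFS keeps its superblock in the very first sector of the partition (magic "XFSB", bytes 58 46 53 42), and the 0xfae84200 that xfs_check reports begins with fa e8, which looks more like x86 boot code (cli; call ...) than any filesystem header. A minimal sketch of checking those bytes directly, using a scratch file standing in for /dev/md1:

```shell
# Inspect the first four bytes of a device or image: an intact XFS
# filesystem starts with the superblock magic "XFSB" (58 46 53 42).
# A scratch file stands in here for /dev/md1.
img=$(mktemp)
printf 'XFSB' > "$img"          # what mkfs.xfs leaves at offset 0

magic=$(od -An -tx1 -N4 "$img" | tr -d ' ')
if [ "$magic" = "58465342" ]; then
    echo "superblock magic intact"
else
    echo "superblock clobbered: $magic"   # e.g. fae84200 after lilo ran
fi

rm -f "$img"
```

On the real device the same check would be od -An -tx1 -N4 /dev/md1; anything other than 58 46 53 42 there means something has overwritten the superblock.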


Is LILO incompatible with XFS?

Complete sequence of events:

lion:/# uname -a
Linux lion 2.4.21-ac4 #3 Sun Jul 27 18:48:19 BST 2003 i686 unknown
lion:/# cat /etc/issue
Debian GNU/\s 3.0 \n \l

lion:/# cat /usr/local/src/xfsprogs/VERSION
#
# This file is used by configure to get version information
#
PKG_MAJOR=2
PKG_MINOR=3
PKG_REVISION=9
PKG_BUILD=0

lion:/# mkfs -t xfs -f /dev/md1
meta-data=/dev/md1               isize=256    agcount=8, agsize=31121 blks
         =                       sectsz=512
data     =                       bsize=4096   blocks=248968, imaxpct=0
         =                       sunit=1      swidth=2 blks, unwritten=0
naming   =version 2              bsize=4096
log      =internal log           bsize=4096   blocks=1200, version=1
         =                       sectsz=512   sunit=1 blks
realtime =none                   extsz=8192   blocks=0, rtextents=0

lion:/# mount /dev/md1 /mnt
lion:/# find . -xdev | cpio -pm /mnt
240549 blocks

lion:/# vi /mnt/etc/lilo.conf
lion:/# vi /mnt/etc/fstab

In lilo.conf:

  boot=/dev/md1
  root=/dev/md1
  raid-extra-boot=/dev/hda,/dev/hdc

In fstab:

  /dev/md1  /  xfs  errors=remount-ro  0  1

lion:/# lilo -r /mnt
Warning: using BIOS device code 0x80 for RAID boot blocks
Added Linux *
Added Linux-1
Added Linux-2.4.21-0
Added Linux-2.4.18
Added Linux-Orig
The boot record of  /dev/md1  has been updated.
The boot record of  /dev/hda  has been updated.
Warning: /dev/hdc is not on the first disk
The boot record of  /dev/hdc  has been updated.

lion:/# ls /mnt
archive boot dev floppy initrd lost+found opt root tmp var bin cdrom etc home lib mnt proc sbin usr
lion:/# umount /mnt
lion:/# mount /dev/md1 /mnt
mount: you must specify the filesystem type
lion:/# mount -t xfs /dev/md1 /mnt
mount: wrong fs type, bad option, bad superblock on /dev/md1,
       or too many mounted file systems


I don't think raid1 has anything to do with it, as I can duplicate the results exactly when I stop the RAID and just use /dev/hda2 on its own. I've also tried it the other way round: I moved root onto /dev/md1 (hda2, hdc2) and tried to make it work on the very first partition, both with raid1 and with a single drive, and I get the same results every time.

I feel I'm missing something obvious, but according to

 http://www.tldp.org/HOWTO/Linux+XFS-HOWTO/x154.html
and
 http://oss.sgi.com/projects/xfs/xfsroot.html

it ought to "just work".

Any clues or observations would be appreciated.

Thanks,

Gordon



