On Mon, 28 Jul 2003, Keith Owens wrote:
> Lilo and XFS coexist, but only if lilo writes to the start of the disk,
> _NOT_ the start of the XFS partition. XFS keeps its superblock at the
> start of each partition; if anything overwrites that superblock it
> breaks the XFS filesystem. All my systems have XFS / and lilo works
> fine with boot=/dev/hda, not boot=/dev/hda1.
Ah right. Thanks also to Juri Haberland.
So, I moved things about a little, then in lilo.conf tried boot=/dev/hda
(md0 is a s/w RAID1 of /dev/hda1 and /dev/hdc1)
to see if that worked, and it did. I then re-edited lilo.conf to update
/dev/hdc's MBR, and that seemed OK too.
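For reference, the lilo.conf fragment in question would look something like this (a sketch only; the kernel path and label are hypothetical, and / is on /dev/md0, the RAID1 of hda1 and hdc1):

```
# Write the boot loader to the disk's MBR, NOT to /dev/hda1,
# or it stamps on the XFS superblock at the start of the partition.
boot=/dev/hda
root=/dev/md0
read-only
image=/boot/vmlinuz     # hypothetical kernel path
    label=linux
```

Run lilo once with boot=/dev/hda, then change it to boot=/dev/hdc and run lilo again so the second disk is bootable too.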
To test it, I halted the box, unplugged the hda drive's power and rebooted.
The box booted off hdc, the s/w RAID1 did what it was supposed to do, and
life was fine. It's now re-building all the arrays, and when it's done
I'll unplug the hdc drive and reboot, just to doubly check. (And as I type
this, it finished; I've just done the 2nd test and it's fine, so I'm happy.)
So I guess when I install a new kernel, I just have to run lilo twice, once
with each of /dev/hda and /dev/hdc selected as the boot device. (Which is,
I guess, what raid-extra-boot does when you have a boot=/dev/md0 device.)
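Presumably the raid-extra-boot variant would look something like this (a sketch, using the device names from this setup):

```
boot=/dev/md0
raid-extra-boot=/dev/hda,/dev/hdc   # write the MBR of both underlying disks
```

That way one lilo run updates both disks, instead of editing boot= and running it twice.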
> BTW, you should be able to run xfs_repair on the broken filesystem and
> recover your data; xfs_repair will search for a secondary superblock and
> reconstruct the one that lilo stamped on. Since this is /, you will
> need to boot an emergency system, such as the install CD booted with
> 'linux rescue', then run xfs_repair on md1.
That did actually work! I've never had to xfs_repair anything before (yet),
so it was a good test. (Although it was a clone of the working / on md0, so
I didn't actually lose anything; a good test anyway.)
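For anyone following along, the rescue procedure amounts to something like this (a sketch; the device name comes from the quoted advice, and it has to be run from a rescue environment, not the installed system):

```shell
# From the rescue environment (install CD booted with 'linux rescue'):
xfs_repair -n /dev/md1   # no-modify mode: just report what would be fixed
xfs_repair /dev/md1      # repair, finding a secondary superblock if needed
```

The -n pass first is a cheap sanity check before letting xfs_repair write anything.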
Now to build another kernel with no ext2 in it and see if I can fit it
onto a floppy to use as a rescue disk...