Robert Sander [ml-linux-xfs@xxxxxxxxxxxxxxx] wrote:
> We will have a new RAID system shipped in the next weeks with a total
> capacity of 1.6 TB. I want to run XFS (what else?) on it and have some
> config questions.
> The RAID consists of 12 160 GB IDE disks connected via SCSI to the host
> computer. The host only sees one large SCSI disk.
> I think that making just one partition on that disk and running a
> mkfs.xfs without any special options will not produce optimal results.
> Should I create a separate log partition on the RAID? How large
> would it be? What are other options for mkfs.xfs that I should look at?
> Thanks for the answers. Any pointers to the FAQ are also appreciated.
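For reference, the usual mkfs.xfs knobs for a striped array are the stripe
geometry and the log. A rough sketch follows; the RAID level (RAID5, so 11
data disks out of 12), the 64 KB chunk size and the device name are
assumptions, not something I know about your box -- substitute the real
values from your controller:

```shell
# Hedged sketch, not a tested recipe: align XFS to the array's stripe
# geometry. Assumes RAID5 over 12 disks (11 data-bearing disks) with a
# 64 KB per-disk chunk size, exposed to the host as /dev/sda.
mkfs.xfs -d su=64k,sw=11 \
         -l size=32768b \
         /dev/sda1
# su = stripe unit (per-disk chunk size), sw = number of data disks.
# -l size= sets the internal log size in filesystem blocks; a separate
# log device would be -l logdev=/dev/sdXN,size=... instead.
```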
A warning: all SCSI-to-IDE RAID systems I know of have a non-battery-backed
cache. In the case of a power outage, all unwritten data (and filesystem
metadata) in the RAID cache is lost. This causes filesystem corruption.
Running a journaled filesystem on top of a volatile cache is not recommended.
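On plain, directly attached IDE disks you can at least turn the volatile
write cache off from the host. That does not help here -- the host cannot
reach the member disks behind the SCSI-to-IDE box, only a controller
setting could -- but for comparison, the device name below is an assumption:

```shell
# Hedged sketch: disable the drive's volatile write cache on a directly
# attached IDE disk. Disks behind a hardware RAID controller are not
# reachable this way.
hdparm -W0 /dev/hda    # -W0 disables the write cache, -W1 re-enables it
```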
I made some tests with a 460 GB SCSI-to-IDE RAID with 128 MB of cache.
I ran 30 simultaneous cp -R jobs from the system disk to the RAID and
switched off the power to the RAID.
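The test procedure can be sketched roughly like this; paths, data sizes
and the job count variable are placeholders, not the ones from my test
(which copied from the system disk onto the RAID mount point):

```shell
#!/bin/sh
# Rough sketch of the stress test described above: launch N recursive
# copies in parallel onto the RAID mount point, then cut the power to
# the array by hand while they run. SRC, DEST and N are placeholders.
SRC=${SRC:-/tmp/stress-src}
DEST=${DEST:-/tmp/stress-raid}   # would be the XFS mount point
N=${N:-30}
mkdir -p "$SRC" "$DEST"
# Some source data to copy (tiny here; the real test used a system disk).
for i in 0 1 2 3; do
    dd if=/dev/zero of="$SRC/file$i" bs=1024 count=64 2>/dev/null
done
i=1
while [ "$i" -le "$N" ]; do
    cp -R "$SRC" "$DEST/copy$i" &   # one background copy job per loop
    i=$((i + 1))
done
wait
# ...power to the RAID is switched off by hand while the copies run...
```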
Older 2.4-xfs kernels: massive fs corruption; even the log recovery on
mounting failed. xfs_repair showed tons of errors, and I aborted it. The
filesystem is unusable.
Newer 2.4.18-xfs (the ones with more ordered writes):
Mounting after the simulated power outage works. The filesystem was usable,
but I did not test it thoroughly. xfs_check shows a lot of errors, but none
serious (xfs_check -s). xfs_repair runs for hours (and consumes over 1 GB
of memory).
ext3 with the default ordered mode:
The filesystem is working after the power outage. fsck reports no serious
errors, but a lot of bitmap differences, and it also runs for hours.
So all tests produced filesystem corruption. If you don't write too much
to the RAID, you may be safe.
Note: I only repeated the tests 2-3 times, so don't rely on the results
(works, with minor corruptions).