
Re: XFS and RAID5

To: linux-xfs@xxxxxxxxxxx
Subject: Re: XFS and RAID5
From: Robin Humble <rjh@xxxxxxxxxxxxxxxxxxxxxxxxxxx>
Date: Mon, 18 Jun 2001 15:55:36 +1000 (EST)
In-reply-to: <> from "Seth Mos" at Jun 18, 2001 07:14:56 AM
Sender: owner-linux-xfs@xxxxxxxxxxx
Seth writes:
>At 23:31 17-6-2001 +0200, Knuth Posern wrote:
>>Thanks for your fast and long answer!
>> > not suffered data loss as of yet. (knock on wood). I have no experience with
>> > XFS over RAID5. I have the number of disks for it but unfortunately not
>>Did you hear something about XFS over Software-RAID5?
>I meant XFS over software RAID5. Hardware RAID practically always works
>unless the hardware RAID driver is broken.

I've avoided entering this thread since we're currently not running
software RAID5 but thought I'd stick up my paw anyway 'cos we ran it for
a fair while and will happily run it again.

Around the time of the 2.4.3 kernel we used XFS over software RAID5 for
a month or so before a disk died and we didn't bother replacing it -
we've been using 420G (7 disks) of RAID0 since with zero problems.
RAID5 seemed ok and we sorted out any initial performance problems
as we found them with the super-responsive XFS people on this list.

Hopefully we'll be firing up an NFS + software RAID5 + gigabit ethernet
box within the next couple of weeks, and expect that XFS will be the
only valid choice for such a filesystem. Especially as we're at the
large end of file sizes - I expect all writes to be >= 4G.

>> > The only problem with md is running in degraded mode. But it does work.
>>Is md one of the XFS utils? I didn't look at them yet...
>It's not; md is the kernel driver. When your RAID is missing one disk it will
>run in degraded mode. This is when XFS will perform worse. When you replace
>the disk the resyncing was not always up to speed.
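For reference, the degraded state Seth describes shows up in /proc/mdstat as a "_" in the array's status field (e.g. [UU_] for a three-disk array with one member gone). A minimal sketch of spotting that from a script - the demo file and the check_degraded helper are illustrative, not part of md itself:

```shell
# Sketch: detect a degraded md array from its /proc/mdstat status field.
# A missing member shows as "_", e.g. "[UU_]"; all-healthy is "[UUU]".
check_degraded() {
    grep -o '\[[U_]*\]' "$1" | grep -q '_'
}

# Simulated /proc/mdstat snippet for demonstration only.
printf 'md0 : active raid5 sdc1[2] sdb1[1] sda1[0]\n      [UU_]\n' > /tmp/mdstat.demo

if check_degraded /tmp/mdstat.demo; then
    echo "array degraded"
fi
```

On a real box you'd point check_degraded at /proc/mdstat itself.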

yeah, RAID5 rebuilding on 480G/8 disks took maybe 8 hours (this was
back in March - might be better now?), but only about 1 hour if you could
leave the filesystem unmounted, so that XFS could avoid changing the
block sizes in the kernel and md layers all the time...
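For scale, a back-of-envelope estimate of what a resync costs on an array that size. The rebuild rate here is an assumed figure for illustration only; the md driver's actual rate limits are exposed (in KB/s per device) via /proc/sys/dev/raid/speed_limit_min and speed_limit_max:

```shell
# Rough resync-time estimate for the 480G/8-disk array mentioned above.
# RATE_MBS is an assumed sustained rebuild rate, not a measured figure.
ARRAY_GB=480
RATE_MBS=20
SECS=$(( ARRAY_GB * 1024 / RATE_MBS ))
echo "estimated resync: $(( SECS / 3600 ))h $(( SECS % 3600 / 60 ))m"
# -> estimated resync: 6h 49m
```

Which is in the right ballpark for the ~8 hours seen with the filesystem mounted, and shows why raising speed_limit_min (at the cost of foreground I/O) shortens the window.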

