
Re: XFS over LVM over md RAID

To: xfs@xxxxxxxxxxx
Subject: Re: XFS over LVM over md RAID
From: Stan Hoeppner <stan@xxxxxxxxxxxxxxxxx>
Date: Fri, 10 Sep 2010 17:19:54 -0500
In-reply-to: <4C8AA62A.9020704@xxxxxxxxxxx>
References: <4C89668E.6010800@xxxxxxxxxxx> <20100910013026.GA24409@dastard> <4C899816.6030506@xxxxxxxxxxx> <4C8A3F8F.4000704@xxxxxxxxxxx> <4C8AA62A.9020704@xxxxxxxxxxx>
User-agent: Mozilla/5.0 (Windows; U; Windows NT 5.0; en-US; rv: Gecko/20100825 Thunderbird/3.1.3
Richard Scobie put forth on 9/10/2010 4:42 PM:

> In the future this lv will be grown in multiples of 256K chunk, 16
> drive RAID6 arrays, so am I correct in thinking that the sunit/swidth
> parameters can stay the same as it is expanded?

What is the reasoning behind adding so many terabytes under a single
filesystem?

Do you _need_ all of it under a single mount point?  Whether or not you
do, there are several reasons it may well be better to put a single
filesystem directly on each RAID6 array, with no LVM in the middle, and
simply mount each filesystem at a different point.
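For instance, a per-array layout along these lines (the device names and
mount points here are made-up examples, not taken from any real setup):

```
# /etc/fstab -- one XFS filesystem directly on each md RAID6 array
/dev/md0   /storage0   xfs   defaults,inode64   0  2
/dev/md1   /storage1   xfs   defaults,inode64   0  2
/dev/md2   /storage2   xfs   defaults,inode64   0  2
```

Each array then stands alone: losing one array takes down only the data
under its own mount point.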


This method can minimize damage and downtime when an entire array is
knocked offline.  We just had a post yesterday where a SATA cable was
kicked loose and took down 5 drives of a 15-drive md RAID6 set, killing
the entire filesystem.  If that OP had set up three 5-drive arrays with
3 filesystems, the system could have continued to run in a degraded
fashion, depending on how his application data was laid out across the
filesystems.  Done properly, you lose an app or two, not all of them.

This method also eliminates xfs_growfs performance issues such as the
one you're describing, because the filesystem layout never changes when
you add new arrays to the system.
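As a sketch of the geometry involved, assuming the 16-drive, 256K-chunk
RAID6 arrays you describe (the /dev/mdX device name is a placeholder):
RAID6 reserves two drives' worth of parity, so the stripe width is 14
data disks, and each new array can be formatted with the same values.

```shell
# Derive XFS stripe geometry for a 16-drive RAID6 with a 256KB chunk.
# This is a sketch; /dev/mdX is a placeholder, not a real device name.
NDRIVES=16       # total drives in the RAID6 array
PARITY=2         # RAID6 uses two parity drives per stripe
CHUNK_KB=256     # md chunk size in KB

SW=$((NDRIVES - PARITY))   # stripe width = data disks = 14

echo "mkfs.xfs -d su=${CHUNK_KB}k,sw=${SW} /dev/mdX"
```

With identical arrays, the su/sw values computed above stay the same for
every filesystem you create, so there is nothing to retune as capacity
is added.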

In summary, every layer of complexity added to the storage stack
increases the probability of failure.  As my grandmother was fond of
saying, "Don't put all of your eggs in one basket."  It was salient
advice on the farm 80 years ago, and it's salient advice today with
high-capacity storage.
