

To: Dave Chinner <david@xxxxxxxxxxxxx>
Subject: Re: in 3.7 kernel, how does 1GB page tables for kernel pagetables affect XFS?
From: Stan Hoeppner <stan@xxxxxxxxxxxxxxxxx>
Date: Sun, 20 Jan 2013 03:33:08 -0600
Cc: Linda Walsh <xfs@xxxxxxxxx>, xfs-oss <xfs@xxxxxxxxxxx>
Delivered-to: xfs@xxxxxxxxxxx
In-reply-to: <20130120004638.GZ2498@dastard>
References: <50FAF860.3000702@xxxxxxxxx> <20130119231644.GX2498@dastard> <50FB3265.8060506@xxxxxxxxx> <20130120004638.GZ2498@dastard>
Reply-to: stan@xxxxxxxxxxxxxxxxx
User-agent: Mozilla/5.0 (Windows NT 5.1; rv:17.0) Gecko/20130107 Thunderbird/17.0.2
On 1/19/2013 6:46 PM, Dave Chinner wrote:
> On Sat, Jan 19, 2013 at 03:55:17PM -0800, Linda Walsh wrote:

>>      All that talk about RAIDs recently, got me depressed a bit
>> when I realize that while I can get fast speeds, type speeds in seeking
>> around are about 1/10-1/20th the speed...sigh.
>>      Might that indicate that I should go with smaller RAIDS with more
>> spindles?  I.e. instead of 3 groups of RAID5 striped as 0, go for 4-5 groups
>> of RAID5 striped as a 0?  Just aligning the darn things nearly takes a rocket
>> scientist!  But then start talking about multiple spindles and optimizing
>> IOP's...ARG!...;-)  (If it wasn't challenging, I'd find it boring...)...
> Somebody on the list might be able to help you with this - I don't
> have the time right now as I'm deep in metadata CRC changes...

I have time, Dave.  Hey Linda, if you're going to re-architect your
storage, the first thing I'd do is ditch that RAID50 setup.  RAID50
exists strictly to reduce some of the penalties of RAID5.  But then you
find new downsides specific to RAID50, including the alignment issues
you mention.

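To see why RAID50 alignment gets fiddly, here is a minimal sketch of how
the XFS su/sw values fall out of a nested layout.  The disk counts and
the 64 KiB chunk size are hypothetical, just for illustration:

```python
# Hypothetical RAID50: 3 RAID5 groups of 5 disks each, 64 KiB hardware
# chunk.  XFS wants su = the RAID chunk size and sw = the number of
# *data* spindles, which is easy to get wrong on nested parity layouts.
chunk_kib = 64
groups = 3
disks_per_group = 5

data_disks = groups * (disks_per_group - 1)  # one disk per group is parity
full_stripe_kib = chunk_kib * data_disks     # KiB written per full stripe

print(f"data spindles: {data_disks}")
print(f"full stripe:   {full_stripe_kib} KiB")
print(f"mkfs.xfs -d su={chunk_kib}k,sw={data_disks} ...")
```

Note sw counts data spindles only, not total disks; counting the parity
disks is the classic mistake that misaligns every full-stripe write.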
Briefly describe your workload(s), the total capacity you have now and
(truly) need now, and what you project to need 3 years from now.
Provide the model# of your current LSI RAID HBA if you intend to
keep/redeploy it, the make/model# of the server and/or external JBOD
chassis if present, and your current drives.  I'll then provide
recommendations of better potential solutions that may or may not
require additional hardware.  If new hardware is needed, I'll recommend
vendor-specific gear that will plug into your existing setup if you
like, or I can provide information on storage gear from a different
brand.  And of course I'll provide the necessary Linux and XFS
configuration information, optimized to the workload and hardware.  I'm
not trying to consult here, just providing help.

In general, yes, more spindles will always be faster if utilized
properly.  But depending on your workload(s) you might be able to fix
your performance problems by simply moving your current array to
non-parity RAID10 (a layered stripe over RAID1 pairs, a concat, etc.),
thus eliminating the read-modify-write (RMW) penalty entirely.  You'll
need more drives to maintain the same usable capacity, but as a
consequence you wind up with even more spindles, thus more performance.
And of course making any such change will require a dump before, and a
restore after, blowing away the LSI config and creating/initializing
the new non-parity array.
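To put numbers on that capacity trade-off, here is a quick sketch; the
2 TB drive size and the 3x5-disk RAID50 starting point are hypothetical:

```python
# Hypothetical: today's 3x5-disk RAID50 with 2 TB drives vs. a RAID10
# sized to hold the same usable capacity.
disk_tb = 2
raid50_disks = 15
raid50_usable_tb = 3 * (5 - 1) * disk_tb       # parity costs 1 disk/group

# RAID10 mirrors everything, so usable capacity is half the raw capacity.
raid10_disks = 2 * raid50_usable_tb // disk_tb

print(f"RAID50: {raid50_disks} disks, {raid50_usable_tb} TB usable")
print(f"RAID10: {raid10_disks} disks for the same {raid50_usable_tb} TB")
```

Nine extra drives in this example, but every one of them is a spindle
doing useful seeks instead of parity RMW.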

