xfs

Re: Verify filesystem is aligned to stripes

To: Stan Hoeppner <stan@xxxxxxxxxxxxxxxxx>
Subject: Re: Verify filesystem is aligned to stripes
From: Emmanuel Florac <eflorac@xxxxxxxxxxxxxx>
Date: Fri, 26 Nov 2010 09:16:22 +0100
Cc: xfs@xxxxxxxxxxx
In-reply-to: <4CEEE9BC.2030401@xxxxxxxxxxxxxxxxx>
Organization: Intellique
References: <4CED5BFC.8000906@xxxxxxxxxxxxx> <20101125054607.GM13830@dastard> <4CEE0995.9030900@xxxxxxxxxxxxxxxxx> <20101125101537.GD12187@dastard> <4CEEE9BC.2030401@xxxxxxxxxxxxxxxxx>
On Thu, 25 Nov 2010 16:57:00 -0600, you wrote:

> Looking at the stripe size, which is equal to 64 sectors per array
> member drive (448 sectors total), how exactly is a sub 4KB mail file
> (8 sectors) going to be split up into equal chunks across a 224KB RAID
> stripe?

It won't; it will simply end up on one drive (actually one mirror pair).
However, because the mirrors are striped together, all drives in the
array will be exercised in my experience; that's why you need at least
as many writing threads as there are stripe units to reach the top
IOPS. In your case, writing 56 4K files simultaneously will effectively
write to all drives at once, hopefully (it depends on the filesystem
allocation policy, though).
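To make the point concrete, here is a minimal sketch (not from the original mail; file names, the 56-thread count, and the temp directory are all illustrative) of issuing 56 small file writes from concurrent threads, so the in-flight IOs can land on different stripe members:

```python
# Hypothetical illustration: one writer thread per small file, so the
# kernel has many independent 4K writes in flight at once. On a real
# XFS-on-RAID10 mount these could be spread across allocation groups
# and therefore across drives; here we just use a temp directory.
import os
import tempfile
from concurrent.futures import ThreadPoolExecutor

WRITERS = 56        # matches the 56-file example above (assumed value)
FILE_SIZE = 4096    # one sub-stripe-unit "mail file"

def write_small_file(directory: str, index: int) -> str:
    """Write one 4K file and force it to stable storage."""
    path = os.path.join(directory, f"mail-{index:04d}")
    with open(path, "wb") as f:
        f.write(os.urandom(FILE_SIZE))
        f.flush()
        os.fsync(f.fileno())  # make the IO actually hit the array
    return path

with tempfile.TemporaryDirectory() as tmpdir:
    with ThreadPoolExecutor(max_workers=WRITERS) as pool:
        paths = list(pool.map(lambda i: write_small_file(tmpdir, i),
                              range(WRITERS)))
    sizes = [os.path.getsize(p) for p in paths]

print(len(sizes), all(s == FILE_SIZE for s in sizes))
```

Whether the 56 files really end up on 56 different stripe units is up to the allocator, as noted above; the sketch only shows the concurrency side.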

>  Does 220KB of the stripe merely get wasted? 

It's not wasted; it just remains unallocated. What is wasted is
potential IO performance.

What the benchmarks I've run over the years show is that any way you
turn it, whatever caching, command tag queuing, and reordering you're
using, a single thread can't reach the maximal IOPS throughput of an
array, i.e. it can't keep all drives writing simultaneously. A single
thread writing to even the fastest RAID 10 with 4K or 8K IOs can't do
much better than with a single drive: 200 to 300 IOPS for a 15k drive.
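The single-thread ceiling is easy to see with a rough timing sketch (my own illustration, not from the mail; the file path, IO count, and use of a temp file rather than a real array mount are all assumptions): each synchronous 4K write waits for the previous one, so there is never more than one IO in flight.

```python
# Rough, hypothetical single-thread IOPS estimate: N synchronous 4K
# writes from one thread, each fsync'd before the next is issued, so
# queue depth stays at 1 -- the situation the mail describes. Point
# `path` at a file on the array under test for a meaningful number.
import os
import time
import tempfile

IO_SIZE = 4096
IO_COUNT = 200

def single_thread_iops(path: str) -> float:
    buf = os.urandom(IO_SIZE)
    fd = os.open(path, os.O_WRONLY | os.O_CREAT, 0o600)
    try:
        start = time.perf_counter()
        for _ in range(IO_COUNT):
            os.write(fd, buf)
            os.fsync(fd)  # wait for completion: no overlap between IOs
        elapsed = time.perf_counter() - start
    finally:
        os.close(fd)
    return IO_COUNT / elapsed

with tempfile.NamedTemporaryFile(delete=False) as tmp:
    path = tmp.name
iops = single_thread_iops(path)
os.unlink(path)
print(f"single-thread 4K IOPS: {iops:.0f}")
```

On a single 15k drive or a RAID 10 alike, this kind of loop lands in the 200-300 IOPS range the mail mentions, since latency per IO, not drive count, is the limit.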

-- 
------------------------------------------------------------------------
Emmanuel Florac     |   Direction technique
                    |   Intellique
                    |   <eflorac@xxxxxxxxxxxxxx>
                    |   +33 1 78 94 84 02
------------------------------------------------------------------------
