On 10/18/2014 01:03 AM, Stan Hoeppner wrote:
> On 10/09/2014 04:13 PM, Dave Chinner wrote:
> ...
>>> I'm told we have 800 threads writing to nearly as many files
>>> concurrently on a single XFS on a 12+2 spindle RAID6 LUN.
>>> Achieved data rate is currently ~300 MiB/s. Some of these files
>>> are supposedly being written at a rate of only 32KiB every
>>> 2-3 seconds, while some (two) are ~50 MiB/s. I need to determine
>>> how many bytes we're writing to each of the low rate files, and
>>> how many files, to figure out RMW mitigation strategies. Out of
>>> the apparent 800 streams 700 are these low data rate suckers, one
>>> stream writing per file.
>>>
>>> Nary a stock RAID controller is going to be able to assemble full
>>> stripes out of these small slow writes. With a 768 KiB stripe
>>> that's what, 48 seconds to fill it at 2 seconds per 32 KiB IO?
>>
>> Raid controllers don't typically have the resources to track
>> hundreds of separate write streams at a time. Most don't have the
>> memory available to track that many active write streams, and those
>> that do probably can't prioritise writeback sanely given how slowly
>> most cachelines would be touched. The fast writers would simply turn
>> over the slower writers' caches way too quickly.
>>
>> Perhaps you need to change the application to make the slow writers
>> buffer stripe sized writes in memory and flush them 768k at a
>> time...
>
> All buffers are now 768K multiples--6144, 768, 768--and I'm told the app
> should be writing out full buffers. However, I'm not seeing the throughput
> increase I should see given how much the RMWs should have decreased,
> which, if my math is correct, should now be about half (80) the raw actuator
> seek rate of these drives (7.2k SAS). Something isn't right. I'm guessing it's
> the controller firmware, maybe the test app, or both. The test app backs off
> and then ramps back up when response times at the controller rise and fall,
> and it's not very accurate or timely about it. The lowest interval setting
> possible is 10 seconds, which is way too long when a controller goes into
> congestion.
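
(For reference, the accumulate-then-flush pattern I'm assuming the app now
uses for each slow stream is roughly the sketch below. Untested, error
handling trimmed; the names and the 4 KiB memory alignment are mine, not
the app's.)

/* Buffer 32 KiB chunks for one slow stream and only hit the disk once a
 * full 768 KiB stripe is queued, so the controller sees one aligned
 * full-stripe write instead of 24 RMW-inducing partial ones.           */
#define _GNU_SOURCE                     /* for O_DIRECT */
#include <fcntl.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

#define CHUNK  (32  * 1024)             /* data arrives 32 KiB at a time */
#define STRIPE (768 * 1024)             /* 12 data spindles x 64 KiB     */

struct stream {
        int    fd;                      /* opened O_WRONLY | O_DIRECT    */
        char  *buf;                     /* posix_memalign()ed buffer     */
        size_t fill;                    /* bytes accumulated so far      */
        off_t  off;                     /* next stripe-aligned offset    */
};

static int stream_open(struct stream *s, const char *path)
{
        s->fd = open(path, O_WRONLY | O_DIRECT);
        if (s->fd < 0)
                return -1;
        /* 4 KiB alignment is an assumption; covers any sector size */
        if (posix_memalign((void **)&s->buf, 4096, STRIPE))
                return -1;
        s->fill = 0;
        s->off = 0;
        return 0;
}

static int stream_put(struct stream *s, const void *chunk)
{
        memcpy(s->buf + s->fill, chunk, CHUNK);
        s->fill += CHUNK;
        if (s->fill < STRIPE)
                return 0;               /* keep buffering */
        if (pwrite(s->fd, s->buf, STRIPE, s->off) != (ssize_t)STRIPE)
                return -1;
        s->off += STRIPE;
        s->fill = 0;
        return 0;
}

If that's really what the app does, each slow stream should present the
controller with one aligned 768K write per buffer instead of 24 small ones.
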
>
> Does XFS give alignment hints with O_DIRECT writes into preallocated files?
> The filesystems were aligned at make time w/768K stripe width, so each
> prealloc file should be aligned on a stripe boundary. I've played with the
> various queue settings, and even tried deadline instead of noop hoping more
> LBAs could be sorted before hitting the controller, but I can't seem to get a
> repeatable increase. I've got nr_requests at 524288, rq_affinity 2,
> read_ahead_kb 0 since reads are <20% of the IO, add_random 0, etc. Nothing
> really seems to help.
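
On the alignment-hint question above: one thing I can check from this end is
what XFS reports for direct I/O on these files via the XFS_IOC_DIOINFO ioctl
(untested sketch below; assumes the xfsprogs headers are installed). Note it
reports the O_DIRECT buffer/transfer constraints, not the sunit/swidth stripe
geometry--xfs_info shows those.

#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <xfs/xfs.h>                    /* XFS_IOC_DIOINFO, struct dioattr */

static void show_dio_limits(const char *path)
{
        struct dioattr da;
        int fd = open(path, O_RDONLY);

        if (fd < 0) {
                perror(path);
                return;
        }
        if (ioctl(fd, XFS_IOC_DIOINFO, &da) < 0) {
                perror("XFS_IOC_DIOINFO");
        } else {
                /* d_mem: required memory alignment of the user buffer;
                 * d_miniosz/d_maxiosz: min/max O_DIRECT transfer sizes. */
                printf("%s: mem align %u, min io %u, max io %u\n",
                       path, da.d_mem, da.d_miniosz, da.d_maxiosz);
        }
        close(fd);
}
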
Some additional background:
Num. Streams = 350
WRITING:
Num. Write Threads = 100
Avg. Write Rate = 72 KiB/s
Avg. Write Intvl = 10666.666 ms
Num. Write Buffers = 426
Write Buffer Size = 768 KiB
Write Buffer Mem. = 327168 KiB
Group Write Rate = 25200 KiB/s
Avg. Buffer Rate = 32.812 bufs/s
Avg. Buffer Intvl. = 30.476 ms
Avg. Thread Intvl. = 3047.600 ms
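(Those derived figures are consistent: 350 streams x 72 KiB/s = 25200 KiB/s
group rate; 768 KiB / 72 KiB/s = ~10667 ms per buffer per stream; 25200 / 768
= ~32.8 buffers/s group-wide, i.e. one every ~30.5 ms; and with 350 streams
spread over 100 threads, 3.5 per thread, each thread fills a buffer roughly
every 3048 ms.)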
The 350 streams are written to 350 preallocated files in parallel. Yes, a seek
monster. We're writing without AIO currently. I'm bumping the rate to 2x during
the run, but that isn't reflected above; the above is the default setup, and
the app can't dump the running setup. The previous config, before the buffers
were stripe aligned, used 160KB write buffers.
Stan