
Re: Xfs delaylog hanged up

To: Dave Chinner <david@xxxxxxxxxxxxx>
Subject: Re: Xfs delaylog hanged up
From: Spelic <spelic@xxxxxxxxxxxxx>
Date: Wed, 24 Nov 2010 14:12:32 +0100
Cc: xfs@xxxxxxxxxxx
In-reply-to: <20101124002023.GA22876@dastard>
References: <4CEAC412.9000406@xxxxxxxxxxxxx> <20101122232929.GJ13830@dastard> <4CEBA2D5.2020708@xxxxxxxxxxxxx> <20101123204609.GW22876@dastard> <4CEC3CB8.8000509@xxxxxxxxxxxxxxxxx> <20101124002023.GA22876@dastard>
User-agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv: Gecko/20100713 Thunderbird/3.0.6
On 11/24/2010 01:20 AM, Dave Chinner wrote:

> 512MB of BBWC backing the disks. The BBWC does a much better job of
> reordering out-of-order writes than the Linux elevators because
> 512MB is a much bigger window than a couple of thousand 4k IOs.

Hmmm, very interesting...
So you are using an MD or DM RAID-0 on top of a SATA controller with a BBWC?
That is probably a RAID controller used in SATA mode, because I have never seen a plain SATA controller with a BBWC. I'd be interested in the brand, if you don't mind.

Also I wanted to know... are the requests Linux sends to the drives really only 4K in size? If so, what purpose do the elevators' merges serve? When the elevator merges two 4k requests, doesn't it create an 8k request for the drive?
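To illustrate what I mean about merges: the kernel's per-device counters in /proc/diskstats track merged requests separately from completed ones, so the average completed request size comes out larger than 4k whenever merging happens. A toy sketch with made-up numbers (field layout per the kernel's Documentation/iostats.txt; the sample line is invented, not from a real machine):

```python
# Hypothetical /proc/diskstats line for "sda" (fields follow iostats.txt):
#   ... f[7]=writes completed, f[8]=writes merged, f[9]=sectors written
sample = "8 0 sda 120 40 1280 300 1000 1000 16000 900 0 800 1200"
f = sample.split()

writes_completed = int(f[7])   # requests that actually reached the drive
writes_merged = int(f[8])      # submitted requests folded into a neighbour
sectors_written = int(f[9])    # 512-byte sectors

# 2000 4k writes were submitted; 1000 were merged away, so the drive
# saw 1000 requests carrying 16000 sectors: an average of 8 KB each.
avg_write_kb = sectors_written * 512 / writes_completed / 1024
print(f"{f[2]}: {writes_completed} writes, avg {avg_write_kb} KB each")
```

So, at least by this accounting, two merged 4k writes do become one 8k request as far as the drive is concerned.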

Also look at this competitor's link:
post #9
these scalability patches submit larger I/O than 4k. I can confirm that from iostat -x 1. (I can't understand what he means by "bypasses the buffer cache layer", though; does it mean it's only for DIRECTIO? It does not seem so to me.) When such large requests go into the elevator, are they broken up into 4K requests again?
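As I understand it, the block layer only splits an oversized request down to the queue limits (e.g. /sys/block/&lt;dev&gt;/queue/max_sectors_kb), not back into 4k pieces. A toy sketch of that arithmetic (the 128 KB limit and 512 KB write are made-up numbers, not measured values):

```python
import math

max_sectors_kb = 128   # hypothetical queue limit (/sys/block/<dev>/queue/max_sectors_kb)
submitted_kb = 512     # hypothetical large write submitted from above

# The block layer splits at the queue limit, so we get a handful of
# large requests rather than a pile of 4k ones.
n_requests = math.ceil(submitted_kb / max_sectors_kb)
per_request_kb = submitted_kb / n_requests
print(f"{n_requests} requests of {per_request_kb} KB each")  # 4 requests of 128 KB
```

If that is right, a 512 KB write survives the elevator as four 128 KB requests, which would match the larger sizes I see in iostat -x.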

Thank you
