
Re: XFS: Abysmal write performance because of excessive seeking (allocation groups to blame?)

To: stan@xxxxxxxxxxxxxxxxx
Subject: Re: XFS: Abysmal write performance because of excessive seeking (allocation groups to blame?)
From: Stefan Ring <stefanrin@xxxxxxxxx>
Date: Tue, 10 Apr 2012 22:43:51 +0200
Cc: Linux fs XFS <xfs@xxxxxxxxxxx>
In-reply-to: <4F849817.4060102@xxxxxxxxxxxxxxxxx>
References: <CAAxjCEwBMbd0x7WQmFELM8JyFu6Kv_b+KDe3XFqJE6shfSAfyQ@xxxxxxxxxxxxxx> <20350.9643.379841.771496@xxxxxxxxxxxxxxxxxx> <20350.13616.901974.523140@xxxxxxxxxxxxxxxxxx> <CAAxjCEzkemiYin4KYZX62Ei6QLUFbgZESdwS8krBy0dSqOn6aA@xxxxxxxxxxxxxx> <4F7F7C25.8040605@xxxxxxxxxxxxxxxxx> <CAAxjCEyJW1b4dbKctbrgdWjykQt8Hb4Sw1RKdys3oUsehNHCcQ@xxxxxxxxxxxxxx> <4F8055E4.1000808@xxxxxxxxxxxxxxxxx> <CAAxjCEz8TpRvjvbuYPp1xf9X2HwskN5AuPak62R5Jhkg+mmFHA@xxxxxxxxxxxxxx> <4F8372DC.7030405@xxxxxxxxxxxxxxxxx> <CAAxjCEx14NrgUatB349vx8h0kCxE=cS7d4DLLPnjwB035tB3Hw@xxxxxxxxxxxxxx> <4F849817.4060102@xxxxxxxxxxxxxxxxx>

> What was the location of the KVM images you were copying?  Is it
> possible the source device simply slowed down?  Or network congestion if
> this was an NFS copy?

Piped via ssh from another host. No, everything was completely idle otherwise.

>> So I threw XFS back in, restarted the restore, and it went very
>> smoothly while still providing acceptable interactivity.
> It's nice to know XFS "saved the day" but I'm not so sure XFS deserves
> the credit here.  The EXT4 driver itself/alone shouldn't cause the lack
> of responsiveness behavior you saw.  I'm guessing something went wrong
> on the source side of these file copies, given your report of dropping
> to 30-40MB/s on the writeout.

Maybe it shouldn’t have, but something sure did. And the circumstances
seem to point at ext4: since the situation persisted for minutes after
I had stopped the transfer, it cannot possibly have been related to
the source side of the copy.

I have a feeling that with appropriate vm.dirty_ratio tuning (and
probably related settings), I could have remedied this. But that’s
just one more thing I’d have to tinker with just to get
acceptable behavior out of this machine. I don’t mind if I don’t get
top-notch performance out of the box, but this is simply too much. I
don’t want to be expected to hand-tune every damn thing.
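For what it's worth, the kind of tuning I mean would look roughly like
this (the values are illustrative guesses, not recommendations):

```shell
# Sketch of writeback tuning; lowering the dirty thresholds makes the
# kernel start flushing earlier, so less dirty data can pile up in the
# page cache before writers get throttled.
sysctl -w vm.dirty_background_ratio=5   # begin background writeback at 5% of RAM
sysctl -w vm.dirty_ratio=10             # block writers once 10% of RAM is dirty

# To persist across reboots, the equivalent lines would go in /etc/sysctl.conf:
#   vm.dirty_background_ratio = 5
#   vm.dirty_ratio = 10
```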

>> XFS is not a panacea (obviously), and it may be a bit slower in many
>> cases, and doesn’t seem to cope well with fragmented free space (which
>> is what this entire thread is really about),
> Did you retest fragmented freespace writes with the linear concat or
> RAID10?  If not you're drawing incorrect conclusions due to not having
> all the facts.

Yes, I did, and it performed very well: only slightly slower than on
a completely empty file system.
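For anyone wanting to reproduce this, free-space fragmentation on an
XFS volume can be inspected with xfs_db (the device name below is a
placeholder for your own):

```shell
# Open the filesystem read-only and print a summary histogram of free
# extents by size. Many small extents and few large ones indicate
# fragmented free space of the kind discussed in this thread.
xfs_db -r -c "freesp -s" /dev/sdb1
```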
