
To: stan@xxxxxxxxxxxxxxxxx
Subject: Re: XFS: Abysmal write performance because of excessive seeking (allocation groups to blame?)
From: Stefan Ring <stefanrin@xxxxxxxxx>
Date: Sat, 14 Apr 2012 13:30:11 +0200
Cc: xfs@xxxxxxxxxxx
On Sat, Apr 14, 2012 at 9:32 AM, Stan Hoeppner <stan@xxxxxxxxxxxxxxxxx> wrote:
> On 4/13/2012 2:36 PM, Stefan Ring wrote:
>>> Let's rerun it with files cached (the machine has 16 GB RAM, so
>>> every single file must be cached):
>>> # time tar xf test.tar
>>> real    0m50.842s
>>> user    0m0.809s
>>> sys     0m13.767s
>> That’s about the same time I’m getting on a fresh (non-fragmented)
>> file system with the RAID 6 volume.
> What configuration are you running right now Stefan?  You said you went
> back to XFS due to the EXT4 lockups, but I can't recall what RAID config
> you put underneath it this time.

RAID 6 4+2, LVM (single volume), 32kb stripe size (=> full stripe:
128kb), agcount=4
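
For reference, the geometry arithmetic behind that config: with a 32 KiB per-disk stripe unit and 4 data disks (the two parity disks in a 4+2 RAID 6 don't count), the full stripe is 32 × 4 = 128 KiB. A small sketch, with a hypothetical device path, showing the arithmetic and how the same geometry could be stated explicitly at mkfs time:

```shell
# Stripe geometry for a 4+2 RAID 6 with a 32 KiB stripe unit.
SU_KIB=32                      # per-disk stripe unit (su)
SW=4                           # data disks only; parity excluded (sw)
FULL_STRIPE=$((SU_KIB * SW))   # what XFS treats as one full stripe
echo "full stripe: ${FULL_STRIPE} KiB"

# The equivalent explicit mkfs invocation would look roughly like this
# (device path is hypothetical -- substitute the actual LV):
#   mkfs.xfs -d su=${SU_KIB}k,sw=${SW},agcount=4 /dev/mapper/vg-test
```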

Except for the stripe size, this is the same config I had originally. The only
instance of really poor behavior is with the (artificially) fragmented
free space.
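
One way to see how chopped up the free space actually is, is xfs_db's freesp command, which runs read-only (device path hypothetical; best done on an unmounted or quiesced filesystem so the picture is consistent):

```shell
# Summarize free-space extents by size bucket, read-only (-r).
# Lots of small extents => fragmented free space; a few large ones => healthy.
# Device path is hypothetical -- substitute the actual LV.
xfs_db -r -c "freesp -s" /dev/mapper/vg-test
```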

I have moved everything elsewhere for a while, so I can once again do
some testing that involves destroying and rebuilding everything.
