
Re: Deleting files with extended attributes is dead slow

To: Bernd Schubert <bernd.schubert@xxxxxxxxxxxxxxxxxx>
Subject: Re: Deleting files with extended attributes is dead slow
From: Dave Chinner <david@xxxxxxxxxxxxx>
Date: Thu, 18 Aug 2011 12:08:48 +1000
Cc: Christoph Hellwig <hch@xxxxxxxxxxxxx>, linux-xfs@xxxxxxxxxxx
In-reply-to: <4E4BFCB5.4010808@xxxxxxxxxxxxxxxxxx>
References: <j23qs9$1c3$1@xxxxxxxxxxxxxxx> <20110812204746.GB30615@xxxxxxxxxxxxx> <20110816161357.GA18201@xxxxxxxxxxxxx> <4E4BBC98.7020501@xxxxxxxxxxxxxxxxxx> <20110817170251.GB28650@xxxxxxxxxxxxx> <4E4BFCB5.4010808@xxxxxxxxxxxxxxxxxx>
User-agent: Mutt/1.5.21 (2010-09-15)
On Wed, Aug 17, 2011 at 07:39:01PM +0200, Bernd Schubert wrote:
> On 08/17/2011 07:02 PM, Christoph Hellwig wrote:
> >On Wed, Aug 17, 2011 at 03:05:28PM +0200, Bernd Schubert wrote:
> >>>(squeeze-x86_64)fslab2:~# xfs_bmap -a 
> >>>/mnt/xfs/Bonnie.29243/00000/00000027faJxifNb0n
> >>>/mnt/xfs/Bonnie.29243/00000/00000027faJxifNb0n:
> >>>        0: [0..7]: 92304..92311
> >>
> >>(Sorry, I have no idea what "0: [0..7]: 92304..92311" is supposed
> >>to tell me).
> >
> >It means that you have an extent spanning 8 blocks for xattr
> >storage, which maps to physical blocks 92304 to 92311 in the
> >filesystem.
> >
> >It sounds to me like your workload has a lot more than 256 bytes of
> >xattrs, or the underlying code is doing something rather stupid.
> 
> Well, the workload I described here is a controlled bonnie test, so
> there cannot be more than 256 bytes (unless there is a bug in the
> code, will double check later on).
> 
> >
> >>Looking at 'top' and 'iostat -x' output, I noticed we are actually
> >>not limited by io to disk, but CPU bound. If you are interested,
> >>I have attached 'perf record -g' and 'perf report -g' output of
> >>the bonnie file create (create + fsetfattr() ) phase.
> >
> >It's mostly spending a lot of time on copying things into the CIL
> >buffers, which is expected and intentional as that allows for
> >additional parallelism.  If you'd switch the workload to multiple
> >instances doing the create in parallel you should be able to scale
> >to better numbers.
> 
> I just tried two bonnies in parallel and that didn't improve
> anything. FhGFS code has several threads anyway. But it would be
> good if the underlying file system didn't take all the CPU
> time...

XFS directory algorithms are significantly more complex than ext4's.
They trade off CPU usage for significantly better layout and
scalability at large sizes, i.e. CPU costs less than IO, so we burn
more CPU to reduce IO. You don't see the benefits of that until
directories start to get large (e.g. > 100k entries) and you are
doing cold cache lookups.
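As a rough illustration (the mount point, directory name and entry
count below are made up, not taken from your setup), the win only
shows up when you time a cold-cache traversal of a big directory,
something like:

```shell
# Sketch only, assumed paths: populate a large directory on a scratch
# XFS mount, drop the caches, and time a cold-cache traversal.
names() { seq -f "f%.0f" 1 "$1"; }      # generate entry names f1..fN

cold_lookup_bench() {
    d="$1"; n="$2"                      # e.g. /mnt/xfs/bigdir 100000
    mkdir -p "$d"
    names "$n" | (cd "$d" && xargs touch)
    sync
    echo 3 > /proc/sys/vm/drop_caches   # needs root: drop page/dentry/inode caches
    time ls -f "$d" > /dev/null         # cold-cache directory traversal
}
```

e.g. `cold_lookup_bench /mnt/xfs/bigdir 100000`, run once with 4k and
once with 64k directory blocks on a freshly made filesystem.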

> >>xfs:
> >>mkfs.xfs -f -i size=512 -i maxpct=90  -l lazy-count=1 -n size=64k /dev/sdd

What is the output of this command?
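(If you no longer have the mkfs output, xfs_info on the mounted
filesystem reports the same geometry. The helper below just pulls
the attr= version field out of that output; the mount point in the
usage line is an assumption.)

```shell
# Pull the attribute-version field out of xfs_info/mkfs.xfs geometry
# output; attr=2 means dynamic attribute fork offsets are in use.
attr_version() { grep -o 'attr=[0-9]*' | head -n 1 | cut -d= -f2; }

# usage (mount point is an assumption):
#   xfs_info /mnt/xfs | attr_version
```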

> >Do 64k dir blocks actually help you with the workload?  They also tend
> 
> Also just tested, with or without doesn't improve anything.

Right, 64k directory blocks make a difference on cold cache
traversals and lookups by flattening the btrees. They also make a
difference in create/unlink performance once you get over a few
million files in the one directory (once again due to reduced IO).

> >to do a lot of useless memcpys in their current form, although these
> >didn't show up on your profile.  Did you try using a larger inode size
> >as suggested in my previous mail?
> 
> I just tried and now that I understand the xfs_bmap output, it is
> interesting to see that an xattr size of up to 128 bytes does not
> need an extent + blocks, but 256 bytes does get one extent and 8
> blocks even with an inode size of 2K. xfs_info tells me that
> isize=2048 was accepted. I didn't test any sizes between 128 and
> 256 bytes yet. Now while I can set the data/xattr size for the
> bonnie test to less than 256 bytes, that is not so easy with our
> real target FhGFS ;)

That tells me your filesystem is either not using dynamic attribute
fork offsets, or that code is broken. The output of the above mkfs
command will tell us what attribute fork behaviour is expected, and
hence which of the two cases you are seeing.
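To narrow down exactly where the attribute spills out of the inode,
something like the sketch below would do it (the file path is up to
you, mkval is just a helper for building a value of an exact size,
and I'm assuming xfs_bmap -a prints "no extents" for an empty attr
fork):

```shell
mkval() { head -c "$1" /dev/zero | tr '\0' 'x'; }  # value of exactly $1 bytes

# For each candidate size, set a single xattr on a fresh file and
# check whether the attr fork still has no extents (i.e. the attr
# is stored inline in the inode) or has spilled to disk blocks.
probe_spill() {
    f="$1"                             # test file on the XFS mount
    for sz in 128 160 192 224 256; do
        : > "$f"
        setfattr -n user.probe -v "$(mkval "$sz")" "$f"
        if xfs_bmap -a "$f" | grep -q 'no extents'; then
            echo "$sz bytes: inline"
        else
            echo "$sz bytes: extent"
        fi
    done
}
```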

Also, what kernel are you testing on?

Cheers,

Dave.
-- 
Dave Chinner
david@xxxxxxxxxxxxx
