
Re: [PATCH 00/27] xfs: current patch queue for 3.11

To: Ben Myers <bpm@xxxxxxx>
Subject: Re: [PATCH 00/27] xfs: current patch queue for 3.11
From: Dave Chinner <david@xxxxxxxxxxxxx>
Date: Thu, 13 Jun 2013 11:58:36 +1000
Cc: xfs@xxxxxxxxxxx
Delivered-to: xfs@xxxxxxxxxxx
In-reply-to: <20130612141743.GV20932@xxxxxxx>
References: <1371032567-21772-1-git-send-email-david@xxxxxxxxxxxxx> <20130612141743.GV20932@xxxxxxx>
User-agent: Mutt/1.5.21 (2010-09-15)
On Wed, Jun 12, 2013 at 09:17:43AM -0500, Ben Myers wrote:
> Hey Dave,
> On Wed, Jun 12, 2013 at 08:22:20PM +1000, Dave Chinner wrote:
> > Thoughts, comments, flames?
> Do you have any performance numbers recorded for the block queue plugging for
> bulkstat and the new inode create transaction?

The new inode create transaction doesn't change performance on my
test rigs. It significantly reduces log traffic under heavy create
workloads (up to 50% lower). However, given the reduction is only
from 60MB/s down to 30MB/s at 110,000 inodes/s being created, log
bandwidth is not a limiting factor on any of my test rigs.
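
As a back-of-envelope sanity check on those numbers (this is just
arithmetic on the figures quoted above, not XFS code), the per-inode
log traffic roughly halves:

```python
# Per-inode log bandwidth implied by the measurements above.
create_rate = 110_000            # inodes/s on the test rig
log_before = 60 * 1024 * 1024    # ~60 MB/s logged without icreate
log_after = 30 * 1024 * 1024     # ~30 MB/s logged with icreate

per_inode_before = log_before / create_rate
per_inode_after = log_after / create_rate
print(round(per_inode_before), round(per_inode_after))  # prints "572 286"
```

i.e. roughly 570 bytes of log written per inode created drops to
roughly 290, consistent with the "up to 50% lower" claim.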

That said, the reason for the change is not so much immediate
improvements in inode create performance.  Want to allocate a stripe
width of inodes at a time?  The new transaction can do that.

Ordered buffers allow all sorts of interesting things to be done -
do you want to add an ext3/4 style data=ordered mode? We can do that
with ordered buffers.  Synchronous writes of remote attribute data?
Ordered buffers can be used to make that async and driven by the AIL.

Want to use intent-based logging for operations rather than physical
object logging? Ordered log items and metadata stamped with the last
modification LSN are necessary, and with this icreate transaction we
end up with all the pieces we need to do this....

And for bulkstat, performance differences were documented in
this email where I found the problem:


It took a multithreaded bulkstat from being IO bound at 450,000
inodes/s @ 220MB/s and 27,000 IOPS to being CPU bound at 750,000
inodes/s @ 350MB/s and 14,000 IOPS.

And given that it increased IO sizes from 8k to 16k for 256 byte
inodes and to 32k for 512 byte inodes, that is going to increase
performance on any busy filesystem simply through the fact that
bulkstat IOPS overhead goes down by a factor of 2/4/8/16 depending
on inode size.....
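
Working the numbers above through (again just arithmetic, with the
inode-per-IO figures derived from the quoted rates):

```python
# Inodes covered per bulkstat read IO, before and after plugging/merging.
before_inodes, before_iops = 450_000, 27_000   # IO bound
after_inodes, after_iops = 750_000, 14_000     # CPU bound

print(before_inodes / before_iops)   # ~16.7 inodes per IO
print(after_inodes / after_iops)     # ~53.6 inodes per IO

# The larger merged IOs also cover a full 64-inode chunk in one read:
# 16KB reads for 256 byte inodes, 32KB reads for 512 byte inodes.
print(16 * 1024 // 256)   # 64 inodes per 16KB IO
print(32 * 1024 // 512)   # 64 inodes per 32KB IO
```

So each IO does roughly three times the work, on top of the raw
halving (or better) of per-read IOPS from the bigger IO sizes.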


Dave Chinner
