deleting 2TB lots of files with delaylog: sync helps?
Dave Chinner
david at fromorbit.com
Thu Sep 2 02:01:08 CDT 2010
On Thu, Sep 02, 2010 at 12:37:39AM -0500, Stan Hoeppner wrote:
> Dave Chinner put forth on 9/1/2010 1:44 AM:
>
> > 4p VM w/ 2GB RAM with the
> > disk image on a hw-RAID1 device made up of 2x500GB SATA drives (create
> > and remove 800k files):
>
> > FSUse% Count Size Files/sec App Overhead
> > 2 800000 0 54517.1 6465501
> >
> > The same test run on an 8p VM w/ 16GB RAM, with the disk image hosted
> > on a 12x2TB SAS dm RAID-0 array:
> >
> > FSUse% Count Size Files/sec App Overhead
> > 2 800000 0 51409.5 6186336
>
> Is this a single socket quad core Intel machine with hyperthreading
> enabled?
No, it's a dual-socket (8c/16t) server.
> That would fully explain the results above. Looks like you
> ran out of memory bandwidth in the 4 "processor" case. Adding phantom
> CPUs merely made them churn without additional results.
No, that's definitely not the case. Here are results from a different
kernel on the same 8p VM with the 12x2TB SAS storage, 4 threads, mount
options "logbsize=262144":
FSUse% Count Size Files/sec App Overhead
0 800000 0 39554.2 7590355
4 threads with mount options "logbsize=262144,delaylog":
FSUse% Count Size Files/sec App Overhead
0 800000 0 67269.7 5697246
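For reference, the two runs differ only in the delaylog mount option.
A setup along these lines (device and mount point are placeholders,
not the exact ones used here) gives the two cases being compared:

  # 256k log buffers, delayed logging off (the default at the time)
  mount -o logbsize=262144 /dev/sdX /mnt/scratch

  # same again, with delayed logging enabled (merged in 2.6.35)
  mount -o logbsize=262144,delaylog /dev/sdX /mnt/scratch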
http://userweb.kernel.org/~dgc/shrinker-2.6.36/fs_mark-2.6.36-rc3-4-thread-delaylog-comparison.png
Top chart is CPU usage, second chart is disk iops (purple is write),
third chart is disk bandwidth (purple is write), and the bottom
chart is create rate (yellow) and unlink rate (green).
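For anyone wanting to reproduce this sort of test: the tables above
are fs_mark output, and an invocation along these lines (directory
layout and per-thread file count are assumptions, not the exact
command line used here) drives 4 threads creating and then unlinking
zero-length files:

  # one thread per -d directory; -s 0 makes zero-length files, so this
  # is a pure metadata workload; -S0 issues no syncs; without -k,
  # fs_mark unlinks the files between iterations (4 x 200000 = 800k)
  fs_mark -S0 -s 0 -n 200000 \
          -d /mnt/scratch/0 -d /mnt/scratch/1 \
          -d /mnt/scratch/2 -d /mnt/scratch/3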