On Wed, Apr 15, 2015 at 04:00:16PM -0400, J. Bruce Fields wrote:
> On Wed, Apr 15, 2015 at 03:56:14PM -0400, J. Bruce Fields wrote:
> > On Wed, Apr 15, 2015 at 03:32:02PM -0400, Anna Schumaker wrote:
> > > I just ran some more tests comparing the directio case across
> > > different filesystem types. These tests used three 1G files: 100%
> > > data, 100% hole, and a mixed file with alternating 4k data and hole
> > > segments. The mixed case seems to be consistently slower than it is
> > > under NFS v4.1, and I'm at a loss for anything I could do to make it
> > > faster.
> > > Here are my numbers:
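FWIW, that mixed layout is trivial to generate from userspace; here's a
minimal C sketch - guessing at the exact layout (4k of data in every other
4k block of a 1G file), since I don't have the actual test harness:

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
	const off_t size = 1024LL * 1024 * 1024;	/* 1G */
	char block[4096];
	off_t off;
	int fd;

	memset(block, 0xaa, sizeof(block));
	fd = open("mixed", O_WRONLY | O_CREAT | O_TRUNC, 0644);
	if (fd < 0) {
		perror("open");
		return 1;
	}
	/* Data at 0, 8k, 16k, ...; the 4k gaps in between stay holes. */
	for (off = 0; off < size; off += 2 * (off_t)sizeof(block)) {
		if (pwrite(fd, block, sizeof(block), off) != sizeof(block)) {
			perror("pwrite");
			return 1;
		}
	}
	/* Extend to exactly 1G so the file ends in a trailing 4k hole. */
	if (ftruncate(fd, size) < 0) {
		perror("ftruncate");
		return 1;
	}
	close(fd);
	return 0;
}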
> > Have you tried the implementation we discussed that always returns a
> > single segment covering the whole requested range, by treating holes as
> > data if necessary when they don't cover the whole range?
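The server-side decision for that should be trivial; a rough userspace
sketch of the classification (invented helper, not the actual nfsd code -
the real server would use vfs_llseek() and proper error reporting),
assuming the filesystem supports SEEK_HOLE/SEEK_DATA:

#define _GNU_SOURCE
#include <errno.h>
#include <stdbool.h>
#include <unistd.h>

/*
 * Decide whether the single segment returned for [offset, offset + len)
 * is a HOLE or DATA.  A hole that only partly covers the range gets
 * reported as data.
 */
int classify_range(int fd, off_t offset, off_t len, bool *is_hole)
{
	off_t data = lseek(fd, offset, SEEK_DATA);

	if (data == (off_t)-1) {
		if (errno == ENXIO) {	/* no data at or after offset */
			*is_hole = true;
			return 0;
		}
		return -1;
	}
	/* It's all hole only if the first data starts past the range. */
	*is_hole = (data >= offset + len);
	return 0;
}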
> > (Also: I assume it's the same as before, but when you post test
> > results, could you repost if necessary:
> > - what the actual test is
> > - what the hardware/software setup is on client and server
> > so that we have reproducible results for posterity's sake.)
> > Interesting that "Mixed" is a little slower even before READ_PLUS.
> > And I guess we should really report this to the ext4 people; it looks
> > like they may have a bug.
> FWIW, this is what I was using to test SEEK_HOLE/SEEK_DATA and map out
> holes in files on my local disk. Might be worth checking whether the
> ext4 slowdowns are reproducible just with something like this, to rule
> out protocol problems.
Wheel reinvention. :)
$ rm -f /mnt/scratch/bar
$ for i in `seq 20 -2 0`; do
> sudo xfs_io -f -c "pwrite $((i * 8192)) 4096" /mnt/scratch/bar
> done
$ sudo xfs_io -c "seek -ar 0" /mnt/scratch/bar
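And if you really want the same walk from C in a test program, the guts of
it is just alternating lseek() calls; a sketch (assumes SEEK_HOLE/SEEK_DATA
support - filesystems without it report one big data extent, and the
trailing hole past the last data is left implicit here):

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(int argc, char **argv)
{
	off_t pos = 0;
	int fd;

	if (argc != 2) {
		fprintf(stderr, "usage: %s <file>\n", argv[0]);
		return 1;
	}
	fd = open(argv[1], O_RDONLY);
	if (fd < 0) {
		perror("open");
		return 1;
	}
	for (;;) {
		/* Find the start of the next data extent... */
		off_t data = lseek(fd, pos, SEEK_DATA);
		off_t hole;

		if (data == (off_t)-1)
			break;	/* ENXIO: no data left before EOF */
		if (data > pos)
			printf("HOLE %lld..%lld\n",
			       (long long)pos, (long long)data - 1);
		/* ...and where it ends (EOF counts as a virtual hole). */
		hole = lseek(fd, data, SEEK_HOLE);
		if (hole == (off_t)-1)
			break;
		printf("DATA %lld..%lld\n",
		       (long long)data, (long long)hole - 1);
		pos = hole;
	}
	close(fd);
	return 0;
}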