
Re: separate log and structure from user data device?

To: xfs@xxxxxxxxxxx
Subject: Re: separate log and structure from user data device?
From: Jan Wagner <jwagner@xxxxxxxxxxx>
Date: Tue, 20 Jun 2006 14:34:10 +0300 (EEST)
Sender: xfs-bounce@xxxxxxxxxxx
On Mon, 19 Jun 2006, Nathan Scott wrote:
> On Fri, Jun 16, 2006 at 02:38:19PM +0300, Jan Wagner wrote:
> > But for some reason at least the above xfs_io command does not work, I get
> >
> > # mkdir /i1/inherit
> > # xfs_io -c 'chattr +t' -c 'lsattr -v' /i1/inherit
> > /i1/inherit: Is a directory
> > ...
> > "Is a directory", any ideas?
>
> Oh, your xfs_io is a bit dated; Tim improved the behaviour in this
> area a while back.  You will need the -r command line option with that
> version, IIRC.

OK, I pulled the CVS source for xfsprogs, and now the above works 8-))

Quite a nice feature.
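
For anyone hitting this later: with the inherit bit set on a directory,
new files created under it should pick up the realtime bit
automatically. A quick sketch (the /i1/inherit path and file name are
just my test setup):

# xfs_io -c 'chattr +t' -c 'lsattr -v' /i1/inherit
# touch /i1/inherit/newfile
# xfs_io -c 'lsattr -v' /i1/inherit/newfile

The last command should report the realtime flag on the new file.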


> > # xfs_io -fR /i1/testfile -c 'pwrite -b 4m 0 2g'
> > wrote 2147483648/2147483648 bytes at offset 0
> > 2.000 GiB, 512 ops; 00:00:05.645958 (362.737 MiB/sec and 90.6843 ops/sec)
> >
> > write 360MiB/s
> >
> > # xfs_io -fR /i1/testfile -c 'pread -b 4m 2g'
> > read 2147483648/2147483648 bytes at offset 0
> > 2.000 GiB, 512 ops; 00:00:39.156626 (52.303 MiB/sec and 13.0757 ops/sec)
> >
> > # xfs_io -fR /i1/testfile -c 'pread 4m 2g'
> > read 2147483648/2147483648 bytes at offset 4194304
> 2.000 GiB, 524288 ops; 00:00:27.229922 (75.211 MiB/sec and 19254.1132 ops/sec)
>
> These last two are not what you want, I think, esp. the very last one;
> the -b option specifies the buffer size.  In the last case you're using
> 4k instead of 4m buffers (hence the large number of ops).  But the
> second-to-last case doesn't suffer that - it doesn't give an offset,
> but it looks like xfs_io is defaulting to zero there.
>
> Since you're doing buffered reads/writes, your read numbers there may be
> being influenced by writeout.  Try using the pwrite command with -W and/
> or unmount+mount between benchmark runs.
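
Checking the arithmetic on the op counts: 2 GiB / 524288 ops = 4096
bytes per op, i.e. the 4k default buffer, while 2 GiB / 512 ops = 4 MiB
per op with -b 4m. That matches the figures above.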

Hmm, the writeout influence could well be it.
I experimented with that too, with unmount + mount and even 'sync', but
it still gives the same kind of figures. I also tested on a different PC
(Intel 945G vs. the earlier nForce4, both all-software RAID0, same
kernel version, Debian 2.6.16-7). The realtime inherit bit was set on
the /i1 directory. I also tried moving the XFS filesystem from hda6 to
hda1, with the rt subvolume still on /dev/md0, to see if that would
improve performance, but it did not.
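
One more thing I may try to rule out caching between runs (a sketch,
assuming the new drop_caches knob that appeared in 2.6.16):

# sync
# echo 3 > /proc/sys/vm/drop_caches

That should drop the page cache and slab objects, which avoids the
unmount+mount cycle between runs (the sync first is needed so dirty
pages can actually be dropped).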

jwagner@warp:~/cvs/xfs/xfs-cmds/xfsprogs/io$ ./xfs_io -f /i1/testfile -c 'pwrite -b 4m 0 2g -W'; sync
wrote 2147483648/2147483648 bytes at offset 0
2.000 GiB, 512 ops; 0:00:08.00 (235.874 MiB/sec and 58.9685 ops/sec)

jwagner@warp:~/cvs/xfs/xfs-cmds/xfsprogs/io$ ./xfs_io -f /i1/testfile -c 'pread 0 2g'
read 2147483648/2147483648 bytes at offset 0
2.000 GiB, 524288 ops; 0:00:46.00 (43.653 MiB/sec and 11175.2537 ops/sec)

During the pwrite, xfs_io eats ~50% CPU with ~50% iowait. During the
pread, xfs_io's CPU load is only ~1%, with ~50% iowait. At least
according to 'top'.
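
The low CPU with high iowait on the read side makes me want to take the
page cache out of the picture entirely with direct I/O. A sketch,
assuming the -d (O_DIRECT) open flag my xfs_io has:

# xfs_io -fd /i1/testfile -c 'pwrite -b 4m 0 2g'
# xfs_io -d /i1/testfile -c 'pread -b 4m 0 2g'

With O_DIRECT the buffers, offsets and lengths have to be properly
aligned, but 4m-sized I/O should be fine.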


Without any realtime subvolume at all, reading is faster than writing,
and definitely faster than reading from the realtime subvolume.

jwagner@warp:~/cvs/xfs/xfs-cmds/xfsprogs/io# ./xfs_io -f /i1/testfile -c 'pwrite -b 4m 0 2g -W'; sync
wrote 2147483648/2147483648 bytes at offset 0
2.000 GiB, 512 ops; 0:00:09.00 (208.292 MiB/sec and 52.0731 ops/sec)

jwagner@warp:~/cvs/xfs/xfs-cmds/xfsprogs/io# ./xfs_io -f /i1/testfile -c 'pread 0 2g'
read 2147483648/2147483648 bytes at offset 0
2.000 GiB, 524288 ops; 0:00:08.00 (233.079 MiB/sec and 59668.2130 ops/sec)
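
Summarizing the buffered numbers so far (pwrite with -b 4m, pread with
the default buffer):

                     write (MiB/sec)   read (MiB/sec)
  realtime subvol        235.9              43.7
  no realtime            208.3             233.1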


Since you got read results for the realtime subvolume similar to what I
get above only without realtime, maybe you were running the XFS CVS
source in your kernel? (I might have to try that, too...)

Anyway, all of this is mostly just FYI. ;-)

Will have to experiment more, I guess :-)


Thanks again for the pread/pwrite benchmarking tips!

 - Jan

