On Wed, 10 Dec 2008, Bill Davidsen wrote:
> Justin Piszcz wrote:
>> Someone should write a document about XFS and barrier support. If I recall,
>> in the past they never worked right on raid1 or raid5 devices, but it
>> appears now that they do work on RAID1, which slows down performance ~12x:
>>
>> l1:~# /usr/bin/time tar xf linux-18.104.22.168.tar
>> 0.15user 1.54system 0:13.18elapsed 12%CPU (0avgtext+0avgdata 0maxresident)k
>> 0inputs+0outputs (0major+325minor)pagefaults 0swaps
>>
>> l1:~# /usr/bin/time tar xf linux-22.214.171.124.tar
>> 0.14user 1.66system 2:39.68elapsed 1%CPU (0avgtext+0avgdata 0maxresident)k
>> 0inputs+0outputs (0major+324minor)pagefaults 0swaps
>
> I would expect you, as an experienced tester, to have done this measurement
> in a reproducible way, but I don't think it means much if this is what you
> did. Before doing any disk test you need to start by dropping cache, to be
> sure the appropriate reproducible things happen. And in doing a timing test,
> you need to end with a sync for the same reason:
>
>   echo 1 >/proc/sys/vm/drop_caches
>   time bash -c "YOUR TEST; sync"
>
> This will give you a fair shot at being able to reproduce the results, done
> on an otherwise unloaded system.
>
> Bill Davidsen <davidsen@xxxxxxx>
>   "Woe unto the statesman who makes war without a reason that will still
>    be valid when the war is over..." Otto von Bismarck
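Spelled out, the procedure Bill describes boils down to something like the
sketch below. It is only an illustration: the tarball path, test directory,
and exact tar invocation are placeholders, not necessarily what was used for
the numbers that follow, and it needs root for the drop_caches write.

#!/bin/bash
# Sketch of the drop-caches-then-sync timing procedure described above.
# TARBALL and TESTDIR are placeholders; adjust them for your own setup.
TARBALL=/var/tmp/linux.tar
TESTDIR=/mnt/test

cd "$TESTDIR" || exit 1

# Flush dirty pages first so the cache drop below can actually free them.
sync

# 1 frees the page cache; 3 would also drop dentries and inodes.
echo 1 > /proc/sys/vm/drop_caches

# Time the test plus a trailing sync, so data still sitting in memory at
# the end of the tar run is charged to the measurement.
/usr/bin/time bash -c "tar xf $TARBALL; sync"

Run it a few times; if successive runs agree, the cache really was cold each
time.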
Roughly the same for non-barriers:
# bash -c '/usr/bin/time tar xf linux-126.96.36.199.tar'
0.15user 1.51system 0:12.95elapsed 12%CPU (0avgtext+0avgdata 0maxresident)k
0inputs+0outputs (4major+320minor)pagefaults 0swaps
For barriers I cannot test that right now, but it will most likely come out
around the same as well.
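For anyone who wants to redo the barrier case, the XFS side of it is just a
mount option: barriers are the default for XFS in kernels of this era, and
nobarrier turns them off. The device and mount point below are made up, and
whether the md device actually honours barriers shows up in dmesg at mount
time.

# Hypothetical device and mount point, just to show the two configurations.
# Barriers on (the XFS default):
mount -t xfs -o barrier /dev/md0 /mnt/test

# Barriers off, for the comparison run:
umount /mnt/test
mount -t xfs -o nobarrier /dev/md0 /mnt/test

# If the underlying device cannot do barriers, XFS logs a message along the
# lines of "Disabling barriers, not supported by the underlying device".
dmesg | tail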