[PATCH] xfs/013: allow non-write fsstress operations in background workload
Dave Chinner
david at fromorbit.com
Tue Jun 17 18:55:46 CDT 2014
On Tue, Jun 03, 2014 at 02:28:49PM -0400, Brian Foster wrote:
> It has been reported that test xfs/013 probably uses more space than
> necessary, exhausting space if run against a several GB sized ramdisk.
> xfs/013 primarily creates, links and removes inodes. Most of the space
> consumption occurs via the background fsstress workload.
>
> Remove the fsstress -w option that suppresses non-write operations. This
> slightly reduces the storage footprint while still providing a
> background workload for the test.
>
> Signed-off-by: Brian Foster <bfoster at redhat.com>
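For reference, the change amounts to dropping -w from the background
fsstress invocation in xfs/013, roughly along these lines (the arguments
shown are illustrative, not the test's verbatim command line):

    -$FSSTRESS_PROG -d $stressdir -p 4 -n 999999 -w > /dev/null 2>&1 &
    +$FSSTRESS_PROG -d $stressdir -p 4 -n 999999 > /dev/null 2>&1 &
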
This change makes the runtime on a ramdisk blow out from 4s to over
ten minutes on my test machine. Non-ramdisk machines seem to be
completely unaffected.

I was going to say "no, bad change", but I noticed that my
spinning-disk VMs weren't affected at all. Looking more closely,
xfs/013 is now pegging all 16 CPUs on the VM. The profile:
-  60.73%  [kernel]  [k] do_raw_spin_lock
   - do_raw_spin_lock
      - 99.98% _raw_spin_lock
         - 99.83% sync_inodes_sb
              sync_inodes_one_sb
              iterate_supers
              sys_sync
              tracesys
              sync
-  32.76%  [kernel]  [k] delay_tsc
   - delay_tsc
      - 98.43% __delay
           do_raw_spin_lock
         - _raw_spin_lock
            - 99.99% sync_inodes_sb
                 sync_inodes_one_sb
                 iterate_supers
                 sys_sync
                 tracesys
                 sync
OK, that's a kernel problem, not a problem with the change in the
test...
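The contention shouldn't need fsstress to trigger, either; a minimal
sketch that ought to show the same sync_inodes_sb spinning is simply
hammering sys_sync from a pile of tasks (hypothetical reproducer, one
sync loop per CPU, not taken from the test):

    # concurrent sync(1) callers all end up in sync_inodes_sb and
    # contend on the same spinlock
    for i in $(seq $(nproc)); do
            ( while :; do sync; done ) &
    done
    sleep 30
    kill $(jobs -p)
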
/me goes and dusts off his "concurrent sync scalability" patches.
Cheers,
Dave.
--
Dave Chinner
david at fromorbit.com