Simon Matter wrote:
I have run some performance tests over the last few days because I was
wondering how badly XFS performs on software RAID. You didn't tell us
whether you are using some kind of RAID, so I assume you aren't?
The XFS FAQ says that XFS performs slightly worse than ext2 on software
RAID1 and RAID5. I wish we could update the FAQ (Seth?) to say it
performs really badly (unless you take care of your log device)! My test
program took about 40 minutes to complete on software RAID5, while
without RAID or with hardware RAID it finished in about 10 minutes.
OK, my setup was:
DELL PowerEdge 1400 with 4 U160 SCSI drives, a DELL RAID adapter with
64MB cache (MegaRAID), onboard dual U160 Adaptec SCSI, PIII/800, and
256MB RAM.
My program generates a mix of disk, CPU, and network load: a few hundred
cp processes copying a large number of small files, with a bonnie run in
the background, while some data is also being copied over NFS, and so on.
As I said, mixed load. I did it this way because when you just compare
individual bonnie runs or other benchmarks you don't see possible
bottlenecks, but in a dirty mixed test you may find them (a rough sketch
of such a driver follows the results below). The results, from memory,
were:
XFS  on hardware RAID5 w/o write caching : ~10 min
XFS  on hardware RAID5 w/  write caching : ~13 min
EXT3 on hardware RAID5 w/o write caching : ~13 min
XFS  on software RAID5 w/o write caching : ~42 min
EXT3 on software RAID5 w/o write caching : ~12 min
XFS  on software RAID5 w/o write caching,
  log device on software RAID1 on the same disks : ~10 min
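
(For illustration only: below is a minimal Python sketch of a mixed-load
driver in this spirit. It is not Simon's actual program; the paths, worker
count, and file size are made-up assumptions.)

#!/usr/bin/env python3
"""Mixed-load sketch: many parallel small-file copies plus one large
sequential writer running at the same time."""
import os
import shutil
import time
from multiprocessing import Process

SRC_DIR = "/mnt/xfs/src"   # assumed: a tree containing many small files
DST_DIR = "/mnt/xfs/dst"   # assumed: on the filesystem under test
WORKERS = 50               # "a few hundred cp procs", scaled down here
BIG_FILE_MB = 512          # background sequential load (stands in for bonnie)

def copy_worker(worker_id: int) -> None:
    """Copy the whole small-file tree into a private destination dir."""
    shutil.copytree(SRC_DIR, os.path.join(DST_DIR, f"copy-{worker_id}"))

def sequential_writer() -> None:
    """Stream a large file to disk while the copies run."""
    block = b"\0" * (1 << 20)  # 1 MiB blocks
    with open(os.path.join(DST_DIR, "bigfile"), "wb") as f:
        for _ in range(BIG_FILE_MB):
            f.write(block)
        f.flush()
        os.fsync(f.fileno())

if __name__ == "__main__":
    os.makedirs(DST_DIR, exist_ok=True)
    start = time.time()
    procs = [Process(target=sequential_writer)]
    procs += [Process(target=copy_worker, args=(i,)) for i in range(WORKERS)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    print(f"elapsed: {time.time() - start:.1f} s")
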
As I understand it, we can say:
- Write caching does not always boost performance with XFS, and it is
very dangerous, as Steve mentioned before.
- An external log device can greatly increase performance under some
circumstances (a setup sketch follows this list).
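
(In case it helps anyone reproduce the external-log configuration: XFS
takes the external log via the logdev option both at mkfs time and at
mount time. A sketch, wrapped in Python for convenience; /dev/md0,
/dev/md1, and /mnt/xfs are placeholder names, not Simon's actual devices.)

import subprocess

DATA_DEV = "/dev/md0"  # placeholder: software RAID5 array holding the filesystem
LOG_DEV = "/dev/md1"   # placeholder: small software RAID1 array for the XFS log

# Create the filesystem with an external log section ...
subprocess.run(["mkfs.xfs", "-l", f"logdev={LOG_DEV}", DATA_DEV], check=True)

# ... and mount it; the external log must be named again at mount time.
subprocess.run(["mount", "-t", "xfs", "-o", f"logdev={LOG_DEV}",
                DATA_DEV, "/mnt/xfs"], check=True)
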
Sorry if my writing was confusing, but I wanted to share what I found
out after stressing my spindles for many hours.
Simon
Ugly, isn't it. The XFS log has the nasty habit of doing unaligned writes
on any 512-byte boundary - a 31.5K write is not unusual. I did not realize
it was this bad on RAID5 though - I knew it was worse, I had just forgotten
how much! I think this is due to the RAID code doing cache flushes in this
case.
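
(To make the cost concrete, here is a rough back-of-the-envelope sketch in
Python; the 64K chunk size and 4-disk layout are assumptions for
illustration, not Simon's actual md settings. Anything short of a full,
aligned stripe generally forces the md driver to do extra reads so it can
recompute parity - a read-modify-write - which is why small unaligned log
writes hurt so much on RAID5.)

# Does a write cover full RAID5 stripes, or does it force extra
# parity work?  Chunk size and disk count are illustrative assumptions.
CHUNK = 64 * 1024          # md chunk size in bytes
DATA_DISKS = 3             # 4-disk RAID5 -> 3 data chunks per stripe
STRIPE = CHUNK * DATA_DISKS

def classify(offset: int, length: int) -> str:
    """Call it a full-stripe write only if it starts and ends on stripe
    boundaries; anything else needs extra reads to recompute parity."""
    if offset % STRIPE == 0 and length % STRIPE == 0:
        return "full-stripe write: parity computed from new data only"
    return "partial stripe: extra reads + parity recompute + write back"

# A typical unaligned 31.5K log write at some 512-byte boundary:
print(classify(offset=7 * 512, length=63 * 512))
# A stripe-aligned 192K write for comparison:
print(classify(offset=0, length=STRIPE))
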
We have talked about adding some padding to the log, but that is an
on-disk format change, so it is not something to do lightly. If I find
time I may do some experiments with it.
There may be hope in 2.5, but I do not know if the RAID5 code has been
converted to bio structures yet.
Steve