> Steve Lord wrote:
> > snip ..........
> > OK, now to the crux of the matter, these mkfs options are not going to help
> > you achieve higher bandwidth, your hardware is going to play the biggest part
> > in this, so it is all really a matter of budget. We have had a single file
> > doing streaming I/O at close to 100 Mbytes/sec on linux, however, that
> > involved a fiber channel connected JBOD running lvm on 8 10000 rpm scsi
> > drives. Going beyond this you need to look at multiple pci buses with
> > multiple controllers and lots of fun stuff like that. But I suspect this
> > is getting beyond what you are looking for ;-)
> This performance sounds great, and I would like to try to duplicate it.
> I also have fibre disks, so the hardware should be fine. I have some
> questions:
> 1. What mkfs options did you use? You imply it doesn't matter much, that
> hardware and lvm stripe are more important.
As Martin Petersen, who actually did this, pointed out, he was using MD RAID0,
not LVM, for the best performance. He also stated:
>> Modern drives seem to like 64-128KB I/Os. The test Steve mentioned
>> above was done with a 64KB stripe unit and consequently a 512 KB stripe width.
The xfs mkfs program will detect the md or lvm volume and determine the stripe
width and stripe unit values automatically, or you can set them explicitly with
the sunit and swidth options; see the mkfs.xfs man page for details, and note
the units used (both are specified in 512-byte sectors, not bytes).
The mkfs output reports the sunit and swidth values actually used.
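To make the units concrete, here is a sketch of the arithmetic for the setup
Martin described (8 drives, 64 KB stripe unit); the device name /dev/md0 is
a hypothetical example, not from the original mail:

```shell
# Convert the stripe geometry into mkfs.xfs sunit/swidth values,
# which are given in 512-byte sectors (hypothetical device name).
stripe_unit_kb=64
ndrives=8
sunit=$((stripe_unit_kb * 1024 / 512))   # 64 KB stripe unit -> 128 sectors
swidth=$((sunit * ndrives))              # 8-drive stripe -> 1024 sectors
echo "mkfs.xfs -d sunit=$sunit,swidth=$swidth /dev/md0"
```

Normally you would not need this at all, since mkfs.xfs picks the values up
from the md device itself; it is only needed when the detection fails or you
are building on top of something mkfs cannot see through.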
> 2. What benchmark shows the near 100MB/s bandwidth? What performance
> should I expect from a simple dd if=junk of=/dev/null bs=xxx, with xxx
> optimized for the stripe width?
> 3. What kind of streaming write performance did you get?
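For question 2, the dd invocation you describe is the right shape; the sketch
below uses a block size matched to a 512 KB stripe width. The file name "junk"
is your placeholder; the runnable line at the end streams zeros instead so the
snippet works without a striped volume:

```shell
# The read test as asked about, against a large file on the
# striped filesystem (bs matched to the 512 KB stripe width):
#   dd if=junk of=/dev/null bs=512k
#
# Device-independent dry run of the same invocation: dd prints its
# throughput summary on stderr, so redirect it to see the rate.
dd if=/dev/zero of=/dev/null bs=512k count=2048 2>&1 | tail -n 1
```

Note that a read from a file you just wrote may be served from the page cache;
use a file much larger than RAM (or direct I/O, see lmdd below in the original
reply) to measure the disks rather than memory.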
If you want a more flexible program to try things out with, get lmdd from the
lmbench package at http://www.bitmover.com/lmbench. If you add -D_GNU_SOURCE
to the cc options, it will be built with direct I/O support, which you enable
at run time with direct=1.
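A sketch of what that build and a streaming-write run might look like; the
Makefile variable names and the lmdd option names other than direct=1 (which
is taken from the text above) are from memory and may differ in your lmbench
release, so check its README:

```shell
# Sketch, assuming a typical lmbench source tree layout.
cd lmbench/src
make CFLAGS="-O2 -D_GNU_SOURCE" lmdd

# Time a streaming write with direct I/O, bypassing the page cache
# (target path is hypothetical; bs matched to the stripe width).
./lmdd of=/mnt/xfs/testfile bs=512k count=2048 direct=1
```

With direct=1 the numbers reflect the disks and the I/O path rather than
memory bandwidth, which makes it a better match for the 100 Mbytes/sec figure
discussed above than a cached dd run.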