Steve Lord wrote:
> snip ..........
> OK, now to the crux of the matter, these mkfs options are not going to help
> you achieve higher bandwidth, your hardware is going to play the biggest part
> in this, so it is all really a matter of budget. We have had a single file
> doing streaming I/O at close to 100 Mbytes/sec on linux, however, that
> involved a fiber channel connected JBOD running lvm on 8 10000 rpm scsi
> drives. Going beyond this you need to look at multiple pci buses with
> multiple controllers and lots of fun stuff like that. But I suspect this
> is getting beyond what you are looking for ;-)
This performance sounds great, and I would like to try to duplicate it.
I also have fibre disks, so the hardware should be fine. I have some
questions:

1. What mkfs options did you use? You imply they don't matter much, and
that the hardware and the lvm stripe are more important.

2. What benchmark shows the near-100 MB/s bandwidth? What performance
should I expect from a simple dd if=junk of=/dev/null bs=xxx, with xxx
tuned for maximum throughput?
3. What kind of streaming write performance did you get?
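For reference, here is the kind of dd-based test I have in mind; the file
name "junk", the block size, and the count are placeholders to be tuned
(e.g. try bs values around the full lvm stripe width):

```shell
#!/bin/sh
# Rough streaming-write benchmark: write zeros through the filesystem.
# /dev/zero costs almost nothing to read, so this mostly measures write
# bandwidth; make the file much larger than RAM or the numbers will
# reflect the buffer cache, not the disks.
dd if=/dev/zero of=junk bs=4096k count=256

# Rough streaming-read benchmark: read the file back and throw the data
# away.  Unmount/remount (or use a file larger than RAM) first so the
# read actually hits the disks.
dd if=junk of=/dev/null bs=4096k
```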
> So first thing to do is work out what bandwidth you need, and then look at
> the sustained transfer rates of various disks, scsi is probably still going
> to work better than ide for a configuration involving several drives.
> You should probably build an lvm volume striped across several drives; the
> best stripe width will depend on how the application does its I/O and on
> how much bandwidth you need to squeeze out.
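If it helps anyone following the thread, the striped-lvm setup suggested
above might look something like this; the device names, the volume size,
and the 64 KB stripe size are my assumptions, to be tuned for the actual
hardware and I/O pattern:

```shell
# Hypothetical devices -- substitute your own fibre-channel drives.
pvcreate /dev/sda /dev/sdb /dev/sdc /dev/sdd

# Collect them into one volume group.
vgcreate bigvg /dev/sda /dev/sdb /dev/sdc /dev/sdd

# Stripe a logical volume across all four drives: -i is the number of
# stripes, -I the stripe size in KB (64 KB is a guess -- match it to the
# application's typical I/O size).
lvcreate -i 4 -I 64 -L 100G -n stripevol bigvg

# Plain mkfs, per the point above that mkfs options matter far less than
# the hardware and the stripe layout.
mkfs.xfs /dev/bigvg/stripevol
```

Scaling past four drives is where the multiple-controller/multiple-PCI-bus
advice above starts to apply.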
> > Thanks for your time and efforts.
> > Warm Regards,
> > C.G.Senthilkumar.
U.S. Bureau of the Census