Hi,
* On Tue, Jan 31, 2012 at 09:52:10PM +0000, Brian Candler
<brian@xxxxxxxxxxxxxx> wrote:
> On Tue, Jan 31, 2012 at 09:52:05AM -0500, Christoph Hellwig wrote:
> > You don't just read a single file at a time but multiple ones, do
> > you?
> It's sequential at the moment, although I'll do further tests with the -c
> (concurrency) option to bonnie++.
> > Try playing with the following tweaks to get larger I/O to the disk:
> >
> >  a) make sure you use the noop or deadline elevators
> >  b) increase /sys/block/sdX/queue/max_sectors_kb from its low default
> >  c) dramatically increase
> >     /sys/devices/virtual/bdi/<major>:<minor>/read_ahead_kb
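
For reference, applied from a shell those three tweaks look roughly like
the sketch below; /dev/sdb, the 8:16 major:minor pair and the values are
only placeholder assumptions (cat /sys/block/sdX/dev prints a device's
major:minor), so substitute your own device and numbers:

    # run as root; sdb, 8:16 and the values below are assumptions
    echo deadline > /sys/block/sdb/queue/scheduler             # a) deadline elevator
    echo 1024 > /sys/block/sdb/queue/max_sectors_kb            # b) capped by max_hw_sectors_kb
    echo 16384 > /sys/devices/virtual/bdi/8:16/read_ahead_kb   # c) 8:16 is sdb's major:minor
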
> Thank you very much: I will do further tests with these.
>
> Is the read_ahead_kb knob aware of file boundaries? That is, is there any
> risk that if I set it too large it would read useless blocks past the end of
> the file?
The read_ahead_kb knob is used by the memory subsystem's
readahead code to set the readahead size it scales from (it
uses a dynamic scaling window). It is set by default based on
the device's readahead value (probably obtained in a way
similar to hdparm -I).

Setting it higher will be beneficial for sequential workloads,
and the risk you mentioned is not there, since readahead is
file-boundary aware -- check
http://lxr.linux.no/linux+*/mm/readahead.c#L151 for more
details.
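
If it helps, here is a quick way to inspect the current readahead ceiling,
again assuming the /dev/sdb placeholder from the sketch above; all three
commands should report the same setting, with blockdev printing it in
512-byte sectors rather than in KB:

    cat /sys/devices/virtual/bdi/8:16/read_ahead_kb   # per-bdi view (8:16 = sdb)
    cat /sys/block/sdb/queue/read_ahead_kb            # same knob via the queue directory
    blockdev --getra /dev/sdb                         # same value, in 512-byte sectors
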
Regards,
--
Raghavendra Prabhu
GPG Id : 0xD72BE977
Fingerprint: B93F EBCB 8E05 7039 CD3C A4B8 A616 DCA1 D72B E977
www: wnohang.net