Possible to preallocate files that always begin/end on stripe unit/width?
Green Guy
greentech3000 at gmail.com
Fri Sep 3 15:12:30 CDT 2010
I am working on an app that will write to preallocated files. I can control
how big the files are (as long as they are between 4-6GB) and the amount of
data sent with each write() call.
I have a 3.2TB virtual drive that I need to use as fully as possible, but
performance is the number one concern.
The system is an 8-disk RAID5 with a 256k stripe unit. Based on this, I am using:
mkfs.xfs -b size=4096 -l version=2,sunit=512 -d su=256k,sw=7 -f /dev/sdb
meta-data=/dev/sdb               isize=256    agcount=32, agsize=26679232 blks
         =                       sectsz=512   attr=0
data     =                       bsize=4096   blocks=853735424, imaxpct=25
         =                       sunit=64     swidth=448 blks, unwritten=1
naming   =version 2              bsize=4096
log      =internal log           bsize=4096   blocks=32768, version=2
         =                       sectsz=512   sunit=64 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
and
mount -t xfs -o sunit=512,swidth=3584,inode64,nobarrier,logbufs=8 /dev/sdb /mnt
Based on what I have read, it appears my optimal write/read size would be
1835008 bytes (3584 * 512, i.e. one full stripe width).
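For reference, the arithmetic (a quick shell sanity check; the numbers come
from the mkfs/mount lines above):

su=262144                  # stripe unit: 256k in bytes
sw=7                       # data disks in the 8-disk RAID5
echo $(( su * sw ))        # -> 1835008 bytes, one full stripe width
echo $(( su * sw / 512 ))  # -> 3584, the swidth= mount option in 512-byte sectors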
So based on that, I want to preallocate files somewhere between 4-6GB each
that always begin and end on a stripe unit/stripe width boundary, with each
file size a multiple of 1835008 bytes.
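For example, something along these lines (just a sketch; the file name and
the ~5GB target are placeholders):

sw_bytes=1835008                             # full stripe width in bytes
target=$(( 5 * 1024 * 1024 * 1024 ))         # aim for ~5GB, inside the 4-6GB window
size=$(( (target / sw_bytes) * sw_bytes ))   # round down to a stripe-width multiple
xfs_io -f -c "resvsp 0 $size" /mnt/file001   # reserve the space without writing it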
I know that XFS metadata such as the log, AG info, etc. also needs to be
accounted for, but I am not sure of the best way to determine how much space
it will take.
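One crude way I can think of is to measure it empirically after mkfs/mount
(a sketch):

blockdev --getsize64 /dev/sdb   # raw device size in bytes
df -B 4096 /mnt                 # usable 4k blocks left after metadata

Is there a better way?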
I assume that having a fixed set of files that, once created, never change or
expand must be an advantage, but I am unsure how to leverage it.
I have tried several different multiples of 1835008 for the file size, but
for all of them bmap -vp reports FLAGS of 00011 (i.e., per the xfs_bmap man
page, the extent neither begins nor ends on a stripe width boundary).
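Would setting an extent size hint equal to the stripe width before allocating
help? Something like the following (a sketch; 4295753728 is 2341 * 1835008, a
stripe-width multiple inside the 4-6GB window, and the hint must be set while
the file is still empty):

xfs_io -f -c "extsize 1835008" -c "resvsp 0 4295753728" /mnt/file001
xfs_io -c "bmap -vp" /mnt/file001   # re-check the FLAGS column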
How can I determine the correct file size and preallocate the files so that
they always begin/end on a stripe unit/width boundary, taking the filesystem
overhead into account?
Note: the log needs to stay on the same fs.
Thanks
g3k