
Re: How to pre-allocate files for sequential access?

To: xfs@xxxxxxxxxxx
Subject: Re: How to pre-allocate files for sequential access?
From: troby <Thorn.Roby@xxxxxxxxxxxxx>
Date: Tue, 10 Apr 2012 12:12:48 -0700 (PDT)
In-reply-to: <33564834.post@xxxxxxxxxxxxxxx>
References: <33564834.post@xxxxxxxxxxxxxxx>
Thanks all for your help. Due to unrelated collateral damage within the
database software, I had the "opportunity" to rebuild the filesystems using
some of your suggestions. I set up external log devices for the two busiest
filesystems (RAID5 for the row data, a 4-drive RAID10 for the indexes) and
configured the stripe widths correctly this time. The inode64 mount option
did result in sequential allocation, with a single 2GB extent per file, for
all files that were pre-allocated by the MongoDB software. A small number of
similar files I created manually with dd from /dev/zero as a single 2GB
block showed 3 or 4 extents coming from different AGs, with correspondingly
disparate block ranges, but there aren't enough of those to cause problems.
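For anyone reproducing this, a minimal sketch of the preallocation pattern: reserving a datafile's full size up front with posix_fallocate(2) lets XFS allocate the space as one contiguous extent, without writing zeros through dd. The filename and 64 MiB size here are illustrative stand-ins, not MongoDB's actual values.

```python
import os

# Sketch (illustrative names/sizes): reserve a datafile's blocks up front
# with posix_fallocate(2), so XFS can allocate one contiguous extent
# instead of growing the file piecemeal. 64 MiB stands in for the 2GB
# files discussed above.
SIZE = 64 * 1024 * 1024

fd = os.open("prealloc.dat", os.O_CREAT | os.O_WRONLY, 0o644)
try:
    os.posix_fallocate(fd, 0, SIZE)  # reserves blocks; no data is written
finally:
    os.close(fd)

print(os.path.getsize("prealloc.dat"))  # 67108864
```

Afterwards, `xfs_bmap -v prealloc.dat` shows the resulting extent layout, which is how the single-extent-per-file result above can be checked.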
One thing that puzzles me is that despite configuring the underlying RAID
stripe geometry both at filesystem creation and at mount time, all the
filesystems show average request sizes (mostly writes at this time) of
around 240 sectors. This is correct for some of them, but the RAID10 stripe
is twice that wide and the RAID5 almost 4 times as wide. The files being
written are all memory-mapped, so I'm wondering whether that means the
kernel uses some other settings besides the fstab mount options to determine
the request size. The flush activity only happens a few times a minute and
only lasts a second or two, so I don't think there's a significant
performance impact under the current load. And since the writes actually go
to a 1GB controller cache, I suspect there is enough time for the controller
to assemble a full RAID5 stripe before writing to disk.
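As a back-of-the-envelope check on those numbers (the 128 KiB chunk size and the RAID5 data-disk count below are assumptions for illustration; the message doesn't state the arrays' actual geometry):

```python
SECTOR = 512  # bytes per sector, the unit iostat reports request sizes in

# Observed average request size from the message above.
avg_req_sectors = 240
avg_req_kib = avg_req_sectors * SECTOR // 1024
print(avg_req_kib)  # 120 -> requests are ~120 KiB

# Hypothetical geometry: a 128 KiB chunk on both arrays (assumed).
chunk = 128 * 1024

# A 4-drive RAID10 has 2 data-bearing chunks per stripe.
raid10_stripe_sectors = 2 * chunk // SECTOR
print(raid10_stripe_sectors)  # 512 -> roughly 2x the observed 240

# A RAID5 with 4 data disks has 4 chunks per full stripe (assumed width).
raid5_stripe_sectors = 4 * chunk // SECTOR
print(raid5_stripe_sectors)  # 1024 -> roughly 4x the observed 240
```

One plausible explanation for the mismatch: with mmap-backed files, request sizes at the block layer come out of the page-cache writeback path and device limits such as /sys/block/*/queue/max_sectors_kb, not the filesystem's su/sw hints, which mainly steer allocation alignment rather than I/O sizing.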
Sent from the Xfs - General mailing list archive at Nabble.com.
