On Thu, Jan 27, 2011 at 07:05:33PM -0700, Jef Fox wrote:
> We are having some problems with preallocation of large files. We have
> found that we can preallocate about 500 1GB files on a volume using the
> resvsp and truncate commands, but the extents are still showing up as
> preallocated. Is this a problem? The OS appears to think the files are
> allocated and correctly sized.
That's the way it's supposed to work. Preallocated space stays
preallocated (i.e. reads as zeros) until it is written to, regardless
of whether you change the file size via truncate commands.
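This behaviour isn't XFS-specific, so a scaled-down sketch with the
generic util-linux fallocate(1) tool shows it on any filesystem with
unwritten-extent support (the file name and 16M size here are just
examples, in place of the 1GB files you're using):

```shell
# Preallocate a file: the extents are reserved but marked unwritten.
fallocate -l 16M chunk0
# The OS reports the full size immediately...
stat -c '%s' chunk0
# ...and the unwritten extents read back as zeros.
cmp -n 4096 chunk0 /dev/zero && echo "reads as zeros"
# Changing the size with truncate does not touch the unwritten state.
truncate -s 16M chunk0
```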
> For reference, we are trying to create files for an external piece of
> equipment to write to an SSD with. The SSD would then be mounted in RHEL
> and the data pulled off in the 1G chunks. Because of the nature of the
> data, we need to constantly erase and recreate the files and
> preallocation seems to be the fastest option.
What do you mean by "erase and recreate"? Do you mean you rm the
files, then preallocate them again?
If you were running 2.6.37+ and a TOT xfsprogs, there's also the
"zero" command that converts allocated space back to the
preallocated (zeroed) state without doing any IO. It's the
equivalent of unresvsp + resvsp in a single operation.
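As a sketch of the two forms (the path and length are examples; the
file must live on an XFS filesystem and xfsprogs must be installed,
so the snippet bails out quietly where that isn't the case):

```shell
f=/mnt/scratch/chunk0   # example path on an XFS mount
command -v xfs_io >/dev/null && [ -e "$f" ] || exit 0
# Two-step round trip: free the blocks, then reserve them again.
xfs_io -c "unresvsp 0 1g" -c "resvsp 0 1g" "$f"
# One-step (2.6.37+ kernel, TOT xfsprogs): flip the extents straight
# back to the unwritten/zeroed state, without issuing any IO.
xfs_io -c "zero 0 1g" "$f"
```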
> We don't really care if
> the data gets 0'ed out. Is there another method - allocsp takes too
> long for this application?
allocsp is a historical interface; it's pretty much useless and should
probably be removed. I can't think of any situation where allocsp
would be better than resvsp or zero....
> Or, does it matter if XFS thinks the extents
> are preallocated but unwritten if no other files are written to the
I'm not sure what you are asking there...