Carsten Oberscheid wrote:
> On Tue, Jan 27, 2009 at 07:30:32AM -0600, Eric Sandeen wrote:
>> It'd be best to run vmware under some other kernel, and observe its
>> behavior, not just mount some existing filesystem and look at existing
>> files and do other non-vmware-related tests.
> If this really is just a vmware and/or kernel problem that has nothing
> to do with the filesystem, then I agree.
Well, when I say "kernel" I include the filesystem in that kernel. :)
>> You went from a file with 34 holes to one with 27k holes by copying it?
>> Perhaps this is cp's sparse file detection in action, seeking over
>> swaths of zeros.
>> Perhaps, if by "worse" you mean "leaves holes for regions with zeros".
>> Try cp --sparse=never and see how that goes.
> Didn't know this one.
> [co@tangchai]~/vmware/foo cp --sparse=never foo.vmem test_nosparse
> [co@tangchai]~/vmware/foo xfs_bmap -vvp test_ | grep hole | wc -l
> test_livecd test_nosparse
> [co@tangchai]~/vmware/foo xfs_bmap -vvp test_nosparse | grep hole | wc -l
> [co@tangchai]~/vmware/foo xfs_bmap -vvp test_nosparse | grep -v hole | wc -l
> You win.
>> My best guess is that your cp test is making the file even more sparse
>> by detecting blocks full of zeros and seeking over them, leaving more
>> holes. Not really related to vmware behavior, though.
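To illustrate the point about cp's zero detection, here is a small sketch (GNU coreutils assumed; file names are made up for the demo). `--sparse=always` scans the data for runs of zero blocks and punches holes in the destination, while `--sparse=never` writes every block out:

```shell
# Make a 1 MiB file that is all zeros, written out as real data
dd if=/dev/zero of=zeros.img bs=1M count=1 2>/dev/null

# cp can detect the zero runs and leave holes in the copy...
cp --sparse=always zeros.img sparse_copy

# ...or be told to allocate every block
cp --sparse=never zeros.img dense_copy

# Compare allocated blocks (%b is in 512-byte units); the sparse
# copy should use far fewer blocks than the dense one
sync
stat -c '%n %b' sparse_copy dense_copy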
> All right. So next I'll try and downgrade vmplayer.
> Just out of curiosity (and stubbornness): Are there any XFS
> parameters that might influence fragmentation for the better, in case
> I have to put up with a stupid application?
> Thanks for your time & thoughts & best regards
There is a -o allocsize=<size> mount option which controls how much
space is speculatively allocated past the end of a file; in some cases
it can help, but I'm not sure it would here. As Dave said a while ago,
it's really an issue with how vmware is writing the files out.
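For reference, allocsize is set at mount time; a minimal sketch (device and mount point are placeholders, and remounting requires root):

```shell
# Remount an XFS filesystem with a 64 MiB speculative
# preallocation size past EOF for growing files
mount -o remount,allocsize=64m /dev/sdb1 /vmware

# Confirm the option is in effect
grep allocsize /proc/mounts
```

Larger allocsize values can reduce fragmentation for files that grow by many small appends, at the cost of more transient space usage while the files are open.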