Strange fragmentation in nearly empty filesystem

Eric Sandeen sandeen at sandeen.net
Tue Jan 27 07:30:32 CST 2009


Carsten Oberscheid wrote:
> On Tue, Jan 27, 2009 at 08:10:23AM +0100, Carsten Oberscheid wrote:
>> I'll see what tests I can do and report back about the findings.
> 
> Just booted an Ubuntu live CD from October 2008 and mounted the
> filesystem in question. Could not run vmware from there easily, so I
> tried just a copy of the vmem file:

It'd be best to run vmware under some other kernel and observe its
behavior, rather than just mounting the existing filesystem, looking
at existing files, and running other non-vmware-related tests.

> 
> root at ubuntu# uname -a
> Linux tangchai 2.6.27-7-generic #1 SMP Tue Nov 4 19:33:06 UTC 2008 x86_64 GNU/Linux
> 
> root at ubuntu# xfs_bmap -vvp foo.vmem | grep hole | wc -l
> 34
> root at ubuntu# xfs_bmap -vvp foo.vmem | grep -v hole | wc -l
> 38
> 
> root at ubuntu# cp foo.vmem test
> 
> root at ubuntu# xfs_bmap -vvp test | grep hole | wc -l
> 27078
> root at ubuntu# xfs_bmap -vvp test | grep -v hole | wc -l
> 27081

You went from a file with 34 holes to one with 27k holes by copying
it?  Perhaps this is cp's sparse file detection in action, seeking
over swaths of zeros.
> 
> So a simple copy of a hardly fragmented vmem file gets very badly
> fragmented. If we assume the vmem file fragmentation to be caused by
> vmware writing this file inefficiently, does this mean that cp is even
> worse?

Perhaps, if by "worse" you mean "leaves holes for regions with zeros".
Try cp --sparse=never and see how that goes.
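
Something along these lines (untested, reusing your filenames) should
show whether the holes come from cp's heuristic:

  # copy without zero-detection: every block is actually written
  cp --sparse=never foo.vmem test-nosparse
  # hole count should drop back to zero, or nearly so
  xfs_bmap -vvp test-nosparse | grep hole | wc -l

If that copy comes out hole-free with only a handful of extents, the
27k holes were cp's doing, not the filesystem's.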

> For comparison, I created a new clean dummy file:
> 
> 
> root at ubuntu# dd if=/dev/zero of=ztest bs=1000 count=500000
> 500000+0 records in
> 500000+0 records out
> 500000000 bytes (500 MB) copied, 6.52903 seconds, 76.6 MB/s
> 
> root at ubuntu# xfs_bmap -vvp ztest | grep hole | wc -l 
> 0

Of course, I'd hope you have no holes here ;)  dd writes real zero
bytes, so every block actually gets allocated; a hole only appears
where nothing was ever written.
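
For contrast, here is a file of the same apparent size that is
nothing but hole; the classic dd seek trick, sketched from memory:

  # seek past 500MB and write nothing: no blocks get allocated
  dd if=/dev/zero of=holetest bs=1000 seek=500000 count=0
  # expect a single hole entry covering the entire file
  xfs_bmap -vvp holetest | grep hole | wc -l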

> root at ubuntu# xfs_bmap -vvp ztest | grep -v hole | wc -l 
> 14
> 
> root at ubuntu# cp ztest ztest2
> 
> root at ubuntu# xfs_bmap -vvp ztest2 | grep hole | wc -l 
> 0
> 
> root at ubuntu# xfs_bmap -vvp ztest2 | grep -v hole | wc -l 
> 3
> 
> 
> No problem here. I repeated all this after rebooting my current
> kernel, with the same results. Copying the vmem file to an ext3
> filesystem gives about 1,700 extents, which is also bad, but not as
> bad as on the XFS disk.
> 
> While this test says nothing about the interaction of old/new kernel
> and old/new VMware, for me it raises some questions about
> file-specific properties affecting fragmentation that appear to be
> independent of recent kernel changes. Please bear with me if I miss
> something obvious; I'm just a user.

My best guess is that your cp test is making the file even more sparse
by detecting blocks full of zeros and seeking over them, leaving more
holes.  Not really related to vmware behavior, though.
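
You can reproduce this without vmware at all; a rough sketch
(filenames made up, untested):

  # 10MB of real zeros, fully allocated on disk
  dd if=/dev/zero of=zeroes bs=1M count=10
  # force zero-detection regardless of whether the source is sparse
  cp --sparse=always zeroes zeroes-copy
  # the copy should come out as one big hole
  xfs_bmap -vvp zeroes-copy | grep hole | wc -l

If I remember the coreutils behavior right, the default --sparse=auto
only does this when the source already looks sparse, which your vmem
file (34 holes) did.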

-Eric

> Regards
> 
> 
> Carsten Oberscheid
