
Re: Strange fragmentation in nearly empty filesystem

To: Carsten Oberscheid <oberscheid@xxxxxxxxxxxx>
Subject: Re: Strange fragmentation in nearly empty filesystem
From: Eric Sandeen <sandeen@xxxxxxxxxxx>
Date: Tue, 27 Jan 2009 07:30:32 -0600
Cc: xfs@xxxxxxxxxxx
In-reply-to: <20090127084034.GA16931@xxxxxxxxxxxx>
References: <20090123102130.GB8012@xxxxxxxxxxxx> <20090124003329.GE32390@disturbed> <20090126075724.GA1753@xxxxxxxxxxxx> <497E02CD.2020000@xxxxxxxxxxx> <20090127071023.GA16511@xxxxxxxxxxxx> <20090127084034.GA16931@xxxxxxxxxxxx>
User-agent: Thunderbird 2.0.0.19 (Macintosh/20081209)
Carsten Oberscheid wrote:
> On Tue, Jan 27, 2009 at 08:10:23AM +0100, Carsten Oberscheid wrote:
>> I'll see what tests I can do and report back about the findings.
> 
> Just booted an Ubuntu live CD from October 2007 and mounted the
> filesystem in question. Could not run vmware from there easily, so I
> tried just a copy of the vmem file:

It'd be best to run vmware itself under some other kernel and observe
its behavior, rather than just mounting an existing filesystem, looking
at existing files, and doing other non-vmware-related tests.

> 
> root@ubuntu# uname -a
> Linux tangchai 2.6.27-7-generic #1 SMP Tue Nov 4 19:33:06 UTC 2008 x86_64 GNU/Linux
> 
> root@ubuntu# xfs_bmap -vvp foo.vmem | grep hole | wc -l
> 34
> root@ubuntu# xfs_bmap -vvp foo.vmem | grep -v hole | wc -l
> 38
> 
> root@ubuntu# cp foo.vmem test
> 
> root@ubuntu# xfs_bmap -vvp test | grep hole | wc -l
> 27078
> root@ubuntu# xfs_bmap -vvp test | grep -v hole | wc -l
> 27081

You went from a file with 34 holes to one with 27,000 holes just by
copying it?  Perhaps this is cp's sparse-file detection in action: it
notices runs of zeros in the data it reads and seeks over them in the
destination instead of writing them, leaving holes.
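
If you want to see the detection in action, here's an untested sketch
(sp.src and sp.dst are made-up names, and the exact counts will vary):
build a file that is both sparse and full of written zeros, then copy it.

root@ubuntu# dd if=/dev/zero of=sp.src bs=4k count=100
root@ubuntu# dd if=/dev/urandom of=sp.src bs=4k count=1 seek=200 conv=notrunc
root@ubuntu# xfs_bmap -vvp sp.src | grep -c hole
root@ubuntu# cp sp.src sp.dst
root@ubuntu# du -k sp.src sp.dst

The first dd writes 100 blocks of real zeros; the second seeks past them
and leaves a hole before one block of data.  A GNU cp in its default
--sparse=auto mode sees that the source looks sparse, scans the data it
reads for zero blocks, and should skip the written zeros too, so the
copy ought to occupy noticeably fewer blocks than the source.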
> 
> So a simple copy of a hardly fragmented vmem file gets very badly
> fragmented. If we assume the vmem file fragmentation to be caused by
> vmware writing this file inefficiently, does this mean that cp is even
> worse?

Perhaps, if by "worse" you mean "leaves holes for regions with zeros".
Try cp --sparse=never and see how that goes.
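
Something like this (untested; the counts are guesses, but a never-sparse
copy should have no holes at all):

root@ubuntu# cp --sparse=never foo.vmem test-nosparse
root@ubuntu# xfs_bmap -vvp test-nosparse | grep -c hole
root@ubuntu# cp --sparse=always foo.vmem test-sparse
root@ubuntu# xfs_bmap -vvp test-sparse | grep -c hole

I'd expect 0 holes for the first copy, and at least as many as your 27k
for the second, since --sparse=always zero-scans the data regardless of
what the source looks like.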

> For comparison, I created a new clean dummy file:
> 
> 
> root@ubuntu# dd if=/dev/zero of=ztest bs=1000 count=500000
> 500000+0 records in
> 500000+0 records out
> 500000000 bytes (500 MB) copied, 6.52903 seconds, 76.6 MB/s
> 
> root@ubuntu# xfs_bmap -vvp ztest | grep hole | wc -l 
> 0

Of course, I'd hope you have no holes here; dd wrote real zeros into
every block, so they're all allocated ;)
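
If you want a file of the same apparent size that actually has holes,
seek instead of writing.  An untested sketch (ztest-holey is a made-up
name):

root@ubuntu# dd if=/dev/zero of=ztest-holey bs=1000 count=1 seek=499999
root@ubuntu# du -k ztest-holey
root@ubuntu# xfs_bmap -vvp ztest-holey | grep -c hole

Only the final 1000-byte record is written, so du should report just a
block or two allocated for a 500 MB file, and xfs_bmap should show a
hole covering the rest.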

> root@ubuntu# xfs_bmap -vvp ztest | grep -v hole | wc -l 
> 14
> 
> root@ubuntu# cp ztest ztest2
> 
> root@ubuntu# xfs_bmap -vvp ztest2 | grep hole | wc -l 
> 0
> 
> root@ubuntu# xfs_bmap -vvp ztest2 | grep -v hole | wc -l 
> 3
> 
> 
> No problem here. I repeated all this after rebooting my current
> kernel, with the same results. Copying the vmem file to an ext3
> filesystem gives about 1,700 extents, which is also bad, but not as
> bad as on the XFS disk.
> 
> While this test says nothing about the interaction of old/new kernel
> and old/new VMware, for me it raises some questions about
> file-specific properties affecting fragmentation which appear to be
> independent of recent kernel changes. Please bear with me if I miss
> something obvious, I'm just a user.

My best guess is that your cp test is making the file even more sparse
by detecting blocks full of zeros and seeking over them, leaving more
holes.  Not really related to vmware behavior, though.
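
One quick way to confirm the guess: compare apparent size with blocks
actually allocated for the original and the copy, e.g.

root@ubuntu# ls -l foo.vmem test
root@ubuntu# du -k foo.vmem test

ls should show the same byte size for both, but if cp sparsified the
copy, du should show it occupying far fewer blocks than the original.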

-Eric

> Regards
> 
> 
> Carsten Oberscheid
