
Re: XFS-filesystem corrupted by defragmentation

To: Bernhard Gschaider <bgschaid_lists@xxxxxxxxx>
Subject: Re: XFS-filesystem corrupted by defragmentation
From: Eric Sandeen <sandeen@xxxxxxxxxxx>
Date: Tue, 13 Apr 2010 11:36:36 -0500
Cc: xfs@xxxxxxxxxxx
In-reply-to: <87r5mjpn8l.fsf@xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx>
References: <87r5mjpn8l.fsf@xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx>
User-agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.1.8) Gecko/20100301 Fedora/3.0.3-1.fc11 Lightning/1.0b2pre Thunderbird/3.0.3
On 04/13/2010 07:10 AM, Bernhard Gschaider wrote:
> 
> Hi!
> 
> I'm asking here because I've been referred here from the CentOS mailing
> list (for the full story see
> http://www.pubbs.net/201004/centos/17112-centos-performance-problems-with-xfs-on-centos-54.html
> and 
> http://www.pubbs.net/201004/centos/24542-centos-xfs-filesystem-corrupted-by-defragmentation-was-performance-problems-with-xfs-on-centos-54.html
> what follows is a summary of those threads)
> 
> It was suggested to me that the source of my performance problems might
> be fragmentation of the XFS filesystem. I tested for fragmentation and
> got:
> 
> xfs_db> frag
> actual 6349355, ideal 4865683, fragmentation factor 23.37%

So on average your filesystem has 6349355/4865683 ≈ 1.3 extents per file.

Just as a casual side note, that is not even remotely bad, at least
on average.
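
For reference, the percentage xfs_db prints is (as far as I know)
just (actual - ideal) / actual, so the 23.37% follows directly from
the two extent counts above:

$ echo 'scale=6; (6349355 - 4865683) / 6349355 * 100' | bc
23.367200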

> Before trying to defragment the whole filesystem I figured "let's try
> it on a single file first".
> 
> So I did
> 
>> xfs_bmap /raid/Temp/someDiskimage.iso
> [output shows 101 extents and 1 hole]
> 
> Then I defragmented the file
>> xfs_fsr /raid/Temp/someDiskimage.iso
> extents before:101 after:3 DONE
> 
>> xfs_bmap /raid/Temp/someDiskimage.iso
> [output shows 3 extents and 1 hole]
> 
> And now comes the bummer: I wanted to check the fragmentation of the
> whole filesystem (just to check):
> 
>> xfs_db -r /dev/mapper/VolGroup00-LogVol04
> xfs_db: unexpected XFS SB magic number 0x00000000
> xfs_db: read failed: Invalid argument
> xfs_db: data size check failed
> cache_node_purge: refcount was 1, not zero (node=0x2a25c20)
> xfs_db: cannot read root inode (22)

So here you did:

# xfs_db -r /dev/mapper/VolGroup00-LogVol04

but below you show:

% xfs_info /raid
> meta-data=/dev/VolGroup00/LogVol05

... wrong device, maybe?
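
A quick way to confirm which block device is actually mounted at
/raid (the LogVol05 path is taken from your xfs_info output above;
adjust if yours differs):

$ df -P /raid
$ grep raid /proc/mounts

and then point xfs_db at that device; -c runs a single command
without dropping into the interactive prompt:

# xfs_db -r -c frag /dev/VolGroup00/LogVol05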

-Eric
