
Re: ls -l versus du -sk after xfs_fsr

To: linux-xfs@xxxxxxxxxxx
Subject: Re: ls -l versus du -sk after xfs_fsr
From: Mathieu Betrancourt <mbetrancourt@xxxxxxxxx>
Date: Mon, 3 Oct 2005 21:44:00 +0200
In-reply-to: <434174A7.6010904@sgi.com>
References: <20050926071451.GA3751@soptik.pzkagis.cz> <4338128F.8000707@sgi.com> <20050927163531.GA19652@soptik.pzkagis.cz> <433976C5.1000104@sgi.com> <20050929054410.GA30789@soptik.pzkagis.cz> <20051001091130.GA15808@soptik.pzkagis.cz> <434174A7.6010904@sgi.com>
Reply-to: Mathieu Betrancourt <mbetrancourt@xxxxxxxxx>
Sender: linux-xfs-bounce@xxxxxxxxxxx
hi,

I have the same problem on an XFS filesystem on a RAID 1 device (md RAID),
which is almost full (97%), but not on my other, non-RAID devices
(which, unfortunately, are nowhere near as full).

The problem appeared under both Fedora 3 and SuSE 9.3.
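
For reference, here is roughly how I'm observing the discrepancy (the file
path below is just an example, not the real data):

> ls -l /data/somefile          (apparent size, st_size)
> du -sk /data/somefile         (allocated space in KB, from st_blocks)
> xfs_bmap -v /data/somefile    (extent map, to see where the extra blocks sit)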

Here's the output of xfs_info for the problematic one:
> xfs_info /dev/md0
meta-data=/data                  isize=256    agcount=16, agsize=10051648 blks
         =                       sectsz=512
data     =                       bsize=512    blocks=160826368, imaxpct=25
         =                       sunit=64     swidth=128 blks, unwritten=1
naming   =version 2              bsize=512
log      =internal               bsize=512    blocks=65536, version=1
         =                       sectsz=512   sunit=0 blks
realtime =none                   extsz=65536  blocks=0, rtextents=0

And the others:
> xfs_info /dev/hda3
meta-data=/                      isize=256    agcount=16, agsize=7484281 blks
         =                       sectsz=512
data     =                       bsize=512    blocks=119748496, imaxpct=25
         =                       sunit=0      swidth=0 blks, unwritten=1
naming   =version 2              bsize=4096
log      =internal               bsize=512    blocks=58470, version=1
         =                       sectsz=512   sunit=0 blks
realtime =none                   extsz=65536  blocks=0, rtextents=0

> xfs_info /dev/hdd1
meta-data=/unimportant_data      isize=256    agcount=16, agsize=4889780 blks
         =                       sectsz=512
data     =                       bsize=512    blocks=78236480, imaxpct=25
         =                       sunit=0      swidth=0 blks, unwritten=1
naming   =version 2              bsize=4096
log      =internal               bsize=512    blocks=38201, version=1
         =                       sectsz=512   sunit=0 blks
realtime =none                   extsz=65536  blocks=0, rtextents=0

I've also tried with a 4k block size, and the result was the same.
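
In case the exact invocation matters, this is roughly how I recreated the
filesystem for the 4k-block test (from memory, so the details may be off):

> mkfs.xfs -f -b size=4096 /dev/md0
> mount /dev/md0 /data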

Hope this helps,

Mathieu



