
xfs user quota differs from filesystem data

To: xfs@xxxxxxxxxxx
Subject: xfs user quota differs from filesystem data
From: Marc Mertes <mertes@xxxxxxxxxxx>
Date: Fri, 29 Jun 2012 13:07:45 +0200
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:13.0) Gecko/20120601 Thunderbird/13.0
Hi everybody,

I ran into a problem and I don't know how to solve it.

First my system infos:

xfs_repair version 2.9.8
Kernel: 2.6.26-2-amd64 on debian lenny (5.0.9)
CPU: 2x Quad-Core AMD Opteron(tm) Processor 2378
Volume /dev/drbd0 /data xfs rw,noatime,attr2,nobarrier,usrquota 0 0
DRBD Version 8.3.7 (api:88/proto:86-91)
Hardware RAID 5 with 6 SAS2 Seagate Cheetah (450GB) disks (5+1xHS) on LSI 9260-8i SAS Controller

xfs_info /dev/drbd0
meta-data=/dev/drbd0 isize=256 agcount=4, agsize=109744873 blks
         =                       sectsz=512   attr=2
data     =                       bsize=4096   blocks=438979490, imaxpct=5
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096
log      =internal               bsize=4096   blocks=32768, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=0
realtime =none                   extsz=4096   blocks=0, rtextents=0

Now my problem:
I defined a user quota for each user (it's our login server) with a soft/hard limit of 10/12 GB. Now I have a few users where the listed quota usage differs from the actual amount of data in their folders.
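For reference, per-user limits like these are normally set with the `limit` subcommand of xfs_quota (a sketch only; the user name and the 10g/12g values are taken from this report, and the commands must run as root):

```shell
# Set a 10 GB soft / 12 GB hard block limit for user bthoma on /data
xfs_quota -x -c 'limit bsoft=10g bhard=12g bthoma' /data

# Show the resulting accounting for all users, human-readable
xfs_quota -x -c 'report -h' /data
```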

Example 1: xfs_quota -x -c "quota -uh bthoma" /data
Disk quotas for User bthoma (343)
Filesystem   Blocks  Quota  Limit Warn/Time    Mounted on
/dev/drbd0    10,1G    12G    15G  00 [------] /data

du -csh /data/user/bthoma
8,7G    /data/user/bthoma
8,7G    total

Example 2: xfs_quota -x -c "quota -uh lindau" /data
Disk quotas for User lindau (320)
Filesystem   Blocks  Quota  Limit Warn/Time    Mounted on
/dev/drbd0    13,5G    20G    22G  00 [------] /data

du -csh /data/user/lindau
17G     /data/user/lindau
17G     total

I have no clue how to "refresh" the quota database. The quick-and-dirty solution was to set a higher quota for the affected users so that they could continue working, since some had already reached their quota limit (as in Example 1).
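(One approach I have seen suggested, but not yet tried here: XFS re-runs its automatic quotacheck when quotas are enabled at mount time after the filesystem was mounted without them, which should rebuild the accounting. This requires taking /data offline; mount options below match our fstab line:)

```shell
# Cycle quota enforcement off and back on to force a quotacheck.
# CAUTION: requires downtime on /data; sketch only, untested on this setup.
umount /data
mount -o noatime,attr2,nobarrier /dev/drbd0 /data            # mount once WITHOUT usrquota
umount /data
mount -o noatime,attr2,nobarrier,usrquota /dev/drbd0 /data   # re-enable quotas; kernel rebuilds accounting
```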

Lastly, the fragmentation report:

xfs_db -r -c "frag -v" /dev/drbd0
actual 3996075, ideal 3118453, fragmentation factor 21.96%

Any ideas?
Best regards

