
Can't create new files: No space left on device

To: linux-xfs@xxxxxxxxxxx
Subject: Can't create new files: No space left on device
From: Utz Lehmann <u.lehmann@xxxxxxxxxxxxxx>
Date: Tue, 29 Jul 2003 20:49:41 +0200
Sender: linux-xfs-bounce@xxxxxxxxxxx
User-agent: Mutt/1.2.5.1i
Hi

I have a little problem with an XFS filesystem while stress testing a new
server (not in production yet).
When I try to create a new file I get "No space left on device", even though
13GB are still available.


The filesystem is on an IDE-to-IDE hardware RAID5 (3*250GB), with a single big
hde3 partition used as an LVM volume group (vg01). vg01 is completely filled
by /dev/vg01/raid.

The filesystem was filled up by copying files onto it (locally and via NFS
export) and running xfs_fsr in parallel a few times.

The kernels tested were kernel-smp-2.4.20-18.9XFS1.3.0pre4.i686.rpm and
kernel-smp-2.4.20-18.9XFS1.3.0pre2.i686.rpm from oss.sgi.com, and
kernel-smp-2.4.20-18.7SGI_XFS_1.2.0_teco1.i686.rpm (source RPM from Seth Mos
+ task_unmapped_base.patch and a different config).

xfs_check and xfs_repair -n report no errors.
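
(For reference, the checks were run roughly like this, against the unmounted
device; exact invocation from memory:)

# xfs_check /dev/vg01/raid
# xfs_repair -n /dev/vg01/raid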


The filesystem was created with mkfs.xfs -f -d sunit=4,swidth=8 /dev/vg01/raid.

# xfs_info /mnt/raid/
meta-data=/mnt/raid              isize=256    agcount=117, agsize=1048572 blks
         =                       sectsz=512  
data     =                       bsize=4096   blocks=122535936, imaxpct=25
         =                       sunit=4      swidth=8 blks, unwritten=0
naming   =version 2              bsize=4096  
log      =internal               bsize=4096   blocks=14958, version=1
         =                       sectsz=512   sunit=0 blks
realtime =none                   extsz=65536  blocks=0, rtextents=0

It's mounted with logbufs=8,logbsize=32768, but I get the same error with the
default mount options.
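
(The non-default mount looks roughly like this; logbufs/logbsize are just the
normal XFS mount options, path as above:)

# mount -t xfs -o logbufs=8,logbsize=32768 /dev/vg01/raid /mnt/raid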



# touch /mnt/raid/test
touch: creating `/mnt/raid/test': No space left on device

# df /mnt/raid/                                 
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/vg01/raid       490083912 476491096  13592816  98% /mnt/raid

# df -i /mnt/raid/                          
Filesystem            Inodes   IUsed   IFree IUse% Mounted on
/dev/vg01/raid       57039104 2667840 54371264    5% /mnt/raid

# xfs_db /dev/vg01/raid 
xfs_db> freesp -s
   from      to extents  blocks    pct
      1       1  179733  179733   5.30
      2       3 1172308 3211512  94.70
total free extents 1352041
total free blocks 3391245
average free extent size 2.50824

This means I only have free extents of 1-3 blocks, right?
Maybe that causes the error, because they are smaller than the sunit/swidth?
(Wild guess.)
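
If I remember right XFS allocates inodes in chunks of 64, so a new inode
chunk would need

    64 inodes * 256 bytes/inode = 16384 bytes = 4 blocks (bsize=4096)

of contiguous (and, with sunit=4, probably stripe aligned) free space, which
none of the 1-3 block extents above can provide. (The 64-per-chunk number is
from memory, so take it as another guess.)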

It has something to do with creating new files. When I delete one file I can
create exactly one new file, and I can still fill up the whole filesystem:

# rm /mnt/raid/raid7/cust/tecosim/cfd/utils/bin/xemacs_hp.gz
# >/mnt/raid/test
# >/mnt/raid/test2
-bash: test2: No space left on device
# dd if=/dev/zero of=/mnt/raid/test bs=128k
# df /mnt/raid/
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/vg01/raid       490083912 490083860        52 100% /mnt/raid
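
To see where the remaining blocks go, I would simply repeat the free space
summary from above after the fill:

# xfs_db /dev/vg01/raid
xfs_db> freesp -s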


thanks
utz

