On Tue, 2003-07-29 at 13:49, Utz Lehmann wrote:
> Hi
>
> I have a little problem with an XFS filesystem while stress testing a new
> server (not in production yet).
> When I try to create a new file I get "No space left on device", although
> 13GB are still available.
>
>
> The filesystem is on an IDE-to-IDE hardware RAID5 (3*250GB) with only a big
> hde3 partition used as an LVM volume group (vg01). vg01 is completely filled
> with /dev/vg01/raid.
>
> The filesystem was filled up by copying files into it (locally and via NFS
> export) and running xfs_fsr in parallel a few times.
>
> Tested kernels are kernel-smp-2.4.20-18.9XFS1.3.0pre4.i686.rpm,
> kernel-smp-2.4.20-18.9XFS1.3.0pre2.i686.rpm from oss.sgi.com and
> kernel-smp-2.4.20-18.7SGI_XFS_1.2.0_teco1.i686.rpm (source rpm from Seth Mos
> + task_unmapped_base.patch and different config).
>
> xfs_check and xfs_repair -n report no errors.
>
>
> The filesystem is made with mkfs.xfs -f -d sunit=4,swidth=8 /dev/vg01/raid.
>
> # xfs_info /mnt/raid/
> meta-data=/mnt/raid      isize=256    agcount=117, agsize=1048572 blks
>          =               sectsz=512
> data     =               bsize=4096   blocks=122535936, imaxpct=25
>          =               sunit=4      swidth=8 blks, unwritten=0
> naming   =version 2      bsize=4096
> log      =internal       bsize=4096   blocks=14958, version=1
>          =               sectsz=512   sunit=0 blks
> realtime =none           extsz=65536  blocks=0, rtextents=0
>
> It's mounted with logbufs=8,logbsize=32768, but I get the same error with
> the default mount options.
>
>
>
> # touch /mnt/raid/test
> touch: creating `/mnt/raid/test': No space left on device
>
> # df /mnt/raid/
> Filesystem 1K-blocks Used Available Use% Mounted on
> /dev/vg01/raid 490083912 476491096 13592816 98% /mnt/raid
>
> # df -i /mnt/raid/
> Filesystem Inodes IUsed IFree IUse% Mounted on
> /dev/vg01/raid 57039104 2667840 54371264 5% /mnt/raid
>
> # xfs_db /dev/vg01/raid
> xfs_db> freesp -s
> from to extents blocks pct
> 1 1 179733 179733 5.30
> 2 3 1172308 3211512 94.70
> total free extents 1352041
> total free blocks 3391245
> average free extent size 2.50824
>
> This means that I only have free extents of 1-3 blocks, right?
> Maybe this causes the error because they are smaller than the sunit/swidth?
> (Wild guess.)
>
> It has something to do with creating new files. When I delete one file I
> can create exactly one new file, and I can fill up the whole filesystem:
>
> # rm /mnt/raid/raid7/cust/tecosim/cfd/utils/bin/xemacs_hp.gz
> # >/mnt/raid/test
> # >/mnt/raid/test2
> -bash: test2: No space left on device
> # dd if=/dev/zero of=/mnt/raid/test bs=128k
> # df /mnt/raid/
> Filesystem 1K-blocks Used Available Use% Mounted on
> /dev/vg01/raid 490083912 490083860 52 100% /mnt/raid
>
>
I think you got bitten by stripe alignment. Inode clusters are allocated
on stripe unit boundaries, and you probably have no stripe-aligned free
space left, so the filesystem cannot allocate any new inode clusters.
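
If you want to see the alignment values the allocator is working with,
they are sitting in the superblock; a quick read-only xfs_db session
along these lines should show them (unit and width are in filesystem
blocks, inoalignmt is the inode chunk alignment):

# xfs_db -r /dev/vg01/raid
xfs_db> sb 0
xfs_db> print unit width inoalignmt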
If you go into xfs_db again and run freesp, then run frag, what does it
say? We probably need to revisit how files are getting allocated for NFS;
I think it is not doing a very good job in this case. What sort of file
sizes are you talking about here? The numbers suggest an average of about
175K, but I want to check.
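
To be explicit, this is the session I mean (read-only again; frag prints
the actual versus ideal extent counts and a fragmentation factor):

# xfs_db -r /dev/vg01/raid
xfs_db> freesp -s
xfs_db> frag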
In this setup fsr will not help; it can have the effect of defragmenting
individual files while fragmenting the remaining free space even further.
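
You can watch that effect on a per-file basis with xfs_bmap; the path
below is just a placeholder, substitute one of the files fsr touched.
The -v output shows which allocation group each extent landed in:

# xfs_bmap -v /mnt/raid/somefile
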
Steve
--
Steve Lord voice: +1-651-683-3511
Principal Engineer, Filesystem Software email: lord@xxxxxxx