
Re: Growing block size??

To: fermin@xxxxxxxxxx (Fermin Molina)
Subject: Re: Growing block size??
From: Steve Lord <lord@xxxxxxx>
Date: Tue, 22 May 2001 07:41:23 -0500
Cc: linux-xfs@xxxxxxxxxxx
In-reply-to: Message from fermin@xxxxxxxxxx (Fermin Molina) of "Tue, 22 May 2001 14:07:35 +0200." <200105221207.OAA08534@xxxxxxxxxx>
Sender: owner-linux-xfs@xxxxxxxxxxx

This sounds like the NFS reference cache component of XFS. When the NFS
server operates on a file, it will do the equivalent of an open/close
around each write. When XFS allocates space for a file, it preallocates
out beyond the size of the write - in anticipation of another write
coming in (that's not quite a correct description since it does not
happen on all writes). This is usually done in 64K chunks, but there
may be code which bumps this when a certain file size is reached.

This extra space is removed at close time (it is usually delalloc space
which is much cheaper to manipulate like this).
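
For what it's worth, the gap between apparent size and allocated space that
shows up in the quoted ls/du output below can be checked with plain POSIX
calls, nothing XFS specific: st_size is the length of the file, st_blocks is
the allocated space in 512-byte units, and the latter is what quota gets
charged for. A quick throwaway sketch (my own, not anything from the tree):

#include <stdio.h>
#include <sys/stat.h>

/*
 * Print apparent size vs. allocated space for each file argument.
 * st_blocks is in 512-byte units, which is what quota accounting sees.
 */
int main(int argc, char **argv)
{
        struct stat st;
        int i;

        for (i = 1; i < argc; i++) {
                if (stat(argv[i], &st) < 0) {
                        perror(argv[i]);
                        continue;
                }
                printf("%-20s size %8ld  allocated %8ld\n",
                       argv[i], (long)st.st_size,
                       (long)st.st_blocks * 512);
        }
        return 0;
}

Run on the testsuite.in quoted below, this should show roughly 182853
against 262144, i.e. the 256K that du reports.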

The reference cache was brought over from Irix to fix NFS write performance,
which was suffering from the allocate-and-remove of the extra space on every
write call. It basically postpones the release of the extra space until an
inode is pushed out of the cache by:

        o new inodes coming in
        o sync activity (very slowly)
        o unmount
        o file removal

So what is happening here is that the 'temporary' extra space on the file is
sitting around on all the inodes in the NFS reference cache, causing the
quota overflow. The reference cache holds 512 inodes, which may be somewhat
large for a Linux box; I can make this a tunable parameter.
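
To make that concrete, here is a deliberately simplified userspace sketch of
the idea (this is not the XFS code; the names refcache_insert and trim_excess
and the round-robin eviction are made up purely for illustration). The point
is just that releasing the preallocated tail is tied to eviction, not to the
close:

#include <stdio.h>
#include <string.h>

#define REFCACHE_SIZE 4           /* XFS uses 512; tiny here for illustration */

struct ref {
        char name[32];            /* stands in for an inode */
        long excess_blocks;       /* speculative preallocation still attached */
};

static struct ref cache[REFCACHE_SIZE];
static int next_slot;             /* simple round-robin eviction */

/*
 * Called when an entry leaves the cache: only here is the extra
 * (delalloc) space finally given back, and the quota usage drops.
 */
static void trim_excess(struct ref *r)
{
        if (r->name[0])
                printf("evict %-10s release %ld blocks\n",
                       r->name, r->excess_blocks);
        memset(r, 0, sizeof(*r));
}

/*
 * Called around each NFS write/close: keep a reference so the next
 * write can reuse the preallocated space instead of trimming it now.
 */
static void refcache_insert(const char *name, long excess)
{
        struct ref *slot = &cache[next_slot];

        trim_excess(slot);                      /* evict whoever was here */
        snprintf(slot->name, sizeof(slot->name), "%s", name);
        slot->excess_blocks = excess;
        next_slot = (next_slot + 1) % REFCACHE_SIZE;
}

int main(void)
{
        char name[32];
        int i;

        /*
         * Untarring many small files: each one parks its preallocated
         * tail in the cache, and nothing is released until eviction.
         */
        for (i = 0; i < 10; i++) {
                snprintf(name, sizeof(name), "file%02d", i);
                refcache_insert(name, 16);      /* 16 * 4K = 64K tail */
        }
        return 0;
}

With 512 slots and a 64K tail per file, that is up to around 32MB of
allocated-but-unused space charged against the user until those inodes get
pushed out.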

I suspect Irix could exhibit the same behavior here, although it has
even more special case code for NFS.

You can make the effect smaller by editing fs/xfs/xfs_vfsops.c and looking
for this line:

        xfs_refcache_size = 512;

Make that a smaller number; it will have the most effect if you keep it
higher than the number of files clients are writing to in parallel.
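
For example, if your clients never have more than a few dozen files open for
writing at once, something like

        xfs_refcache_size = 128;

(128 is just an illustrative value) keeps the write-path benefit while
parking roughly a quarter as much preallocated space in the cache.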

Steve

> Hi,
> 
> I've been experiencing some strange behaviour of XFS+NFS on my system. I
> don't know if this is normal.
> 
> Server machine: NFS with homes shared (/users). Filesystem is XFS.
> Quota is assigned for each user.
> 
> Client machine: mounts that shared directory (/users). Also uses NIS for
> UID/GID mapping.
> 
> On the client machine, as a normal user, I gunzip+untar a .tgz with
> many subdirectories and small files (such as a kernel .tgz).
> 
> The quota grows very fast while I'm running "tar zxvf" on the file.
> Then, on the server I get:
> 
> # ls -l testsuite.in
> 
> -rw-r--r--    1  user1        users   182853  Feb 25 19:06  testsuite.in
> 
> # du -k testsuite.in
> 
> 256   testsuite.in
> 
> 
> It's as if the block size had been increased to 128 KB. For some files
> of about 10 KB, du -k reports 64. For others, though, I can deduce they
> only use 4 KB (the normal block size for an XFS filesystem, I think).
> 
> I know that the "du -k" command sometimes reports erroneous information, but
> the problem is that this is reflected in the user quota, and the user
> runs out of quota...
> 
> I've been using an XFS 2.4.4 kernel, from CVS in early May.
> 
> Is this behaviour normal?
> 
> Thanx.
> 
> /Fermin


