
Re: Premature "No Space left on device" on XFS

To: Dave Chinner <david@xxxxxxxxxxxxx>
Subject: Re: Premature "No Space left on device" on XFS
From: Bernhard Schmidt <berni@xxxxxxxxxxxxx>
Date: Fri, 07 Oct 2011 15:49:57 +0200
Cc: xfs@xxxxxxxxxxx
In-reply-to: <20111007013711.GW3159@dastard>
References: <4E8E079B.4040103@xxxxxxxxxxxxx> <20111007013711.GW3159@dastard>
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:7.0) Gecko/20110923 Thunderbird/7.0
On 07.10.2011 03:37, Dave Chinner wrote:


>> this is an XFS-related summary of a problem report I sent to the
>> postfix mailing list a few minutes ago, after a bulk-mail test system
>> blew up during a stress test.
>> We have a few MTAs running SLES11.1 amd64, each with a
>> 10 GB XFS spool directory with the default blocksize (4k). It was
>> bombarded with mail faster than it could send it on, which
>> eventually led to almost 2 million files of ~1.5kB in one directory.
>> Suddenly, this started to happen:
>> lxmhs45:/var/spool/postfix-bulk/postfix-bulkinhss # touch a
>> touch: cannot touch `a': No space left on device
>> lxmhs45:/var/spool/postfix-bulk/postfix-bulkinhss # df .
>> Filesystem           1K-blocks      Used Available Use% Mounted on
>> /dev/sdb              10475520   7471160   3004360  72%
> So you have a 10GB filesystem, with about 3GB of free space.
>> /var/spool/postfix-bulk
>> lxmhs45:/var/spool/postfix-bulk/postfix-bulkinhss # df -i .
>> Filesystem            Inodes   IUsed   IFree IUse% Mounted on
>> /dev/sdb             10485760 1742528 8743232   17% /var/spool/postfix-bulk
> And with 1.7 million inodes in it. That's a lot for a tiny
> filesystem, and not really a use case that XFS is well suited to.
> XFS will work, but it won't age gracefully under these conditions...
> As it is, your problem is most likely fragmented free space (an
> aging problem). Inodes are allocated in chunks of 64, so require an
> -aligned- contiguous 16k extent for the default 256 byte inode size.
> If you have no aligned contiguous 16k extents free then inode
> allocation will fail.
> Running 'xfs_db -r -c "freesp -s" /dev/sdb' will give you a
> histogram of free space extents in the filesystem, which will tell
> us if you are hitting this problem.
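The chunk-size arithmetic Dave describes can be sketched as follows (a minimal illustration; the helper function and its name are made up, not XFS code):

```python
# Sketch of the inode-chunk arithmetic described above.
# XFS allocates inodes in chunks of 64; with the default 256-byte
# inode size that is one 16 KiB extent, i.e. 4 filesystem blocks
# at the default 4 KiB block size. The extent must also be aligned.

INODES_PER_CHUNK = 64
INODE_SIZE = 256          # bytes (isize=256 in xfs_info)
BLOCK_SIZE = 4096         # bytes (bsize=4096 in xfs_info)

chunk_bytes = INODES_PER_CHUNK * INODE_SIZE   # 16384 bytes
chunk_blocks = chunk_bytes // BLOCK_SIZE      # 4 blocks

def can_hold_inode_chunk(start_block, length_blocks):
    """Return True if a free extent contains a chunk_blocks-aligned
    run of chunk_blocks contiguous blocks (what a new inode chunk
    needs, per the explanation above)."""
    aligned_start = -(-start_block // chunk_blocks) * chunk_blocks
    return aligned_start + chunk_blocks <= start_block + length_blocks

# A 4-block free extent only works if it starts on a 4-block boundary:
print(can_hold_inode_chunk(8, 4))   # aligned start -> True
print(can_hold_inode_chunk(9, 4))   # misaligned start -> False
# A 7-block extent always contains an aligned 4-block run:
print(can_hold_inode_chunk(9, 7))   # True
```

So a filesystem can have plenty of 4-block free extents and still fail inode allocation if none of them happens to start on an aligned boundary.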

I managed to reproduce the situation. This time the total usage is a
bit higher, but it still failed.

lxmhs45:~ # df /var/spool/postfix-bulk
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/sdb              10475520   8071008   2404512  78%
lxmhs45:~ # df -i /var/spool/postfix-bulk
Filesystem            Inodes   IUsed   IFree IUse% Mounted on
/dev/sdb             11500544 1882496 9618048   17% /var/spool/postfix-bulk

Here is the output you requested.

lxmhs45:~ # xfs_db -r -c "freesp -s" /dev/sdb
   from      to extents  blocks    pct
      1       1   32230   32230   5.36
      2       3    6874   16476   2.74
      4       7  138151  552604  91.90
total free extents 177255
total free blocks 601310
average free extent size 3.39234
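For what it's worth, here is a throwaway sketch (hypothetical parsing code, not an XFS tool) that tallies the histogram above by extent size:

```python
# Tally the freesp histogram shown above: how much free space sits
# in extents of at least 4 blocks (16 KiB, one inode chunk)?
FREESP = """\
   from      to extents  blocks    pct
      1       1   32230   32230   5.36
      2       3    6874   16476   2.74
      4       7  138151  552604  91.90"""

rows = []
for line in FREESP.splitlines()[1:]:
    lo, hi, extents, blocks, _pct = line.split()
    rows.append((int(lo), int(hi), int(extents), int(blocks)))

total_blocks = sum(r[3] for r in rows)                  # 601310
big_enough = sum(r[3] for r in rows if r[1] >= 4)       # 552604
print(f"{100 * big_enough / total_blocks:.1f}% of free blocks are in "
      f"extents of 4-7 blocks")   # -> 91.9%
```

So almost all the free space is in extents large enough for an inode chunk, but since the 4-7 bucket mixes lengths and only aligned runs qualify, fragmented/misaligned free space can still make inode allocation fail, which matches Dave's diagnosis.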
lxmhs45:~ # xfs_info /dev/sdb
meta-data=/dev/sdb               isize=256    agcount=4, agsize=655360 blks
         =                       sectsz=512   attr=2
data     =                       bsize=4096   blocks=2621440, imaxpct=50
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0
log      =internal               bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0

Best Regards,
