
Re: extremely slow file creation/deletion after xfs ran full

To: Brian Foster <bfoster@xxxxxxxxxx>
Subject: Re: extremely slow file creation/deletion after xfs ran full
From: Carsten Aulbert <Carsten.Aulbert@xxxxxxxxxx>
Date: Mon, 12 Jan 2015 18:33:06 +0100
Cc: xfs@xxxxxxxxxxx
Delivered-to: xfs@xxxxxxxxxxx
In-reply-to: <20150112163749.GE25944@xxxxxxxxxxxxxxx>
Organization: Max Planck Institute for Gravitational Physics - Albert Einstein Institute (AEI)
References: <54B387A1.6000807@xxxxxxxxxx> <54B3CC6A.4080405@xxxxxxxxxx> <20150112155206.GD25944@xxxxxxxxxxxxxxx> <54B3F19D.6030307@xxxxxxxxxx> <20150112163749.GE25944@xxxxxxxxxxxxxxx>
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:31.0) Gecko/20100101 Icedove/31.3.0
Hi

On 01/12/2015 05:37 PM, Brian Foster wrote:
> No, but it does show that there are a bunch of free inodes scattered
> throughout the existing records in most of the AGs. The finobt should
> definitely help avoid the allocation latency when this occurs.
> 

That is good to know/hope :)
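
For the archives: finobt is a mkfs-time feature, so on an existing FS
like this one it would only kick in after a re-mkfs. A rough sketch,
assuming a reasonably new xfsprogs/kernel and a placeholder device
/dev/sdX:

  # finobt can only be enabled when the FS is created; /dev/sdX is a placeholder
  mkfs.xfs -m finobt=1 /dev/sdX
  # once mounted, 'xfs_info <mountpoint>' should then report finobt=1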

> It is interesting that you have so many more free inodes in ag 0 (~53m
> as opposed to several hundreds/thousands in others). What does 'p count'
> show for each ag? Was this fs grown to the current size over time?
> 
"p count" seems to "thin out" over ag:

count = 513057792
count = 16596224
count = 15387584
count = 14958528
count = 4096960
count = 4340416
count = 4987968
count = 3321792
count = 5041856
count = 5485376
count = 5233088
count = 5810432
count = 5271552
count = 5464000
count = 365440

(if the full print output is interesting, it's attached).
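
In case it helps: the per-AG count lives in the AGI header, so a minimal
sketch of one way to pull these numbers with xfs_db (assuming the device
node is /dev/sdX and the 15 AGs of this FS) would be:

  # read-only xfs_db run per AG; /dev/sdX stands in for the real device
  for ag in $(seq 0 14); do
      xfs_db -r -c "agi $ag" -c "p count" -c "p freecount" /dev/sdX
  done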

The FS was never grown; the machine was installed on November 5th, 2012
and has been (ab)using the FS ever since. On average there have been
about 1-2 million file creations per day (file sizes ranging from a few
kByte up to a few hundred kByte) and an equally large number of
deletions (after some time). All in all, a somewhat busy server.

Cheers

Carsten

Attachment: ag-print.log
Description: Text document
