
Re: [PATCH] Increase the default size of the reserved blocks pool

To: Lachlan McIlroy <lachlan@xxxxxxx>
Subject: Re: [PATCH] Increase the default size of the reserved blocks pool
From: Dave Chinner <david@xxxxxxxxxxxxx>
Date: Tue, 30 Sep 2008 16:37:58 +1000
Cc: Mark Goodwin <markgw@xxxxxxx>, xfs-dev <xfs-dev@xxxxxxx>, xfs-oss <xfs@xxxxxxxxxxx>
In-reply-to: <48E1C24F.3080209@xxxxxxx>
Mail-followup-to: Lachlan McIlroy <lachlan@xxxxxxx>, Mark Goodwin <markgw@xxxxxxx>, xfs-dev <xfs-dev@xxxxxxx>, xfs-oss <xfs@xxxxxxxxxxx>
References: <48E097B5.3010906@xxxxxxx> <48E19C59.7090303@xxxxxxx> <20080930042526.GB23915@disturbed> <48E1C24F.3080209@xxxxxxx>
User-agent: Mutt/1.5.18 (2008-05-17)
On Tue, Sep 30, 2008 at 04:08:15PM +1000, Lachlan McIlroy wrote:
> Dave Chinner wrote:
>> On Tue, Sep 30, 2008 at 01:26:17PM +1000, Mark Goodwin wrote:
>>>
>>> Lachlan McIlroy wrote:
>>>> The current default size of the reserved blocks pool is easy to deplete
>>>> with certain workloads, in particular workloads that do lots of concurrent
>>>> delayed allocation extent conversions.  If enough transactions are running
>>>> in parallel and the entire pool is consumed then subsequent calls to
>>>> xfs_trans_reserve() will fail with ENOSPC.  Also add a rate-limited
>>>> warning so we know if this starts happening again.
>>>>
>>> Should we also change the semantics of the XFS_SET_RESBLKS ioctl
>>> so that the passed in value is the minimum required by the caller,
>>> i.e. silently succeed if the current value is more than that?
>>
>> No. If we are asked to reduce the size of the pool, then we should
>> do so. The caller might have reason for wanting the pool size
>> reduced. e.g. using it to trigger early ENOSPC notification so that
>> there is always room to write critical application data when the
>> filesystem fills up....
>>
>
> We tossed around the idea of preventing applications from reducing the
> size of the reserved pool so that they could not weaken the integrity
> of the filesystem by removing critical resources.  We need to support
> reducing the pool size because we do so on unmount.
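
As an aside, the rate-limited warning mentioned in the patch
description above would be along these lines - a sketch only; the
message text and the exact placement inside xfs_trans_reserve() are
illustrative, not the actual patch:

	/*
	 * After the free space reservation has failed and the
	 * reserve pool has also been drained dry:
	 */
	if (printk_ratelimit())
		printk(KERN_WARNING
		       "XFS: reserve blocks pool exhausted, "
		       "transaction reservation failing with ENOSPC\n");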

Some people so tightly control their use of disk space that even the
default needs to be reduced. We recently had someone come across this
very problem when upgrading from 2.6.18 to 2.6.25 - their app
preallocated almost the entire filesystem, so when the reserve
pool took its blocks the filesystem was permanently at ENOSPC.
The only way to fix this was to reduce the pool size, and it was
obvious that in this configuration the reserve pool was superfluous
because the layout was static.
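
For reference, resizing the pool is just the resblks ioctls with an
absolute block count. A minimal sketch of shrinking it from userspace
(the mount point and the target of zero blocks are illustrative, and
it needs root):

	#include <stdio.h>
	#include <fcntl.h>
	#include <sys/ioctl.h>
	#include <xfs/xfs.h>	/* struct xfs_fsop_resblks, XFS_IOC_*_RESBLKS */

	int main(void)
	{
		struct xfs_fsop_resblks res = { .resblks = 0 };
		int fd = open("/mnt/xfs", O_RDONLY);	/* any fd on the fs */

		if (fd < 0)
			return 1;
		/* absolute value: zero releases the whole pool */
		if (ioctl(fd, XFS_IOC_SET_RESBLKS, &res) < 0)
			perror("XFS_IOC_SET_RESBLKS");
		/* read back what the kernel is actually holding */
		if (ioctl(fd, XFS_IOC_GET_RESBLKS, &res) == 0)
			printf("resblks %llu, available %llu\n",
			       (unsigned long long)res.resblks,
			       (unsigned long long)res.resblks_avail);
		return 0;
	}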

So at one end of the scale we've got the problem of workloads that,
when run at ENOSPC, will exhaust the default pool size. At the other
end we've got workloads where the default pool size is already too
large. And we've got the vast middle ground where there are no
problems with the current pool size, but where a significant
increase in pool size might cause some.

It's this vast middle ground that we'll get all the "I upgraded and
now I can't use my XFS filesystem" reports from. Let's not make more
trouble for ourselves than is necessary.  Hence it seems to me that
the default should not be changed, the various mitigation strategies
we talked about should be implemented, and SGI should tune the
reserve pool to suit their users in the Propack distro (like so many
other tunables are modified)....

Cheers,

Dave.
-- 
Dave Chinner
david@xxxxxxxxxxxxx
