
Re: xfssyncd and disk spin down

To: Stan Hoeppner <stan@xxxxxxxxxxxxxxxxx>
Subject: Re: xfssyncd and disk spin down
From: Eric Sandeen <sandeen@xxxxxxxxxxx>
Date: Fri, 24 Dec 2010 21:36:49 -0600
Cc: xfs@xxxxxxxxxxx
In-reply-to: <4D1525FB.1010605@xxxxxxxxxxxxxxxxx>
References: <20101223165532.GA23813@xxxxxxxxxxxxxxxx> <4D13A30A.3090600@xxxxxxxxxxxxxxxxx> <20101223211650.GA19694@xxxxxxxxxxxxxxxx> <4D13EF3B.2050401@xxxxxxxxxxxxxxxxx> <20101224060246.GA2308@xxxxxxxxxxxxxxxx> <4D1525FB.1010605@xxxxxxxxxxxxxxxxx>
User-agent: Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.6; en-US; rv:1.9.2.13) Gecko/20101207 Thunderbird/3.1.7
On 12/24/10 5:00 PM, Stan Hoeppner wrote:
> Petre Rodan put forth on 12/24/2010 12:02 AM:
> 
>>> fs.xfs.xfssyncd_centisecs   (Min: 100  Default: 3000  Max: 720000)
>>> fs.xfs.age_buffer_centisecs (Min: 100  Default: 1500  Max: 720000)
>>
>> Just increasing the delay until an inevitable and seemingly redundant disk
>> write is not what I want.
>> I was searching for an option that makes the internal XFS processes stop
>> touching the drive after the buffers/log/dirty metadata have been flushed (once).
> 
> I'm not a dev, Petre, just another XFS user.  This is the best
> "solution" I could come up with for your issue.  I assumed this
> "unnecessary" regularly scheduled activity was an intentional
> housekeeping measure; it didn't dawn on me that it might be a bug.
> 
> Sorry I wasn't able to fully address your issue.  If/until there is a
> permanent fix for this, you may want to bump this to 720000 (7200
> seconds) anyway as an interim measure, if you haven't already, as it
> should yield a significantly better situation than what you have now.
> You'll at least get something like ~1400 minutes of sleep per day
> instead of none, decreasing your load/unload cycles from ~2880/day
> (86400 s / 30 s) to ~12/day (86400 s / 7200 s), if my math is correct.
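
A minimal sketch of applying that interim workaround at runtime (the
sysctl names come from the defaults quoted above; persisting them in
/etc/sysctl.conf is an assumption about the reader's setup):

    # Check the current flush intervals (values are in centiseconds).
    sysctl fs.xfs.xfssyncd_centisecs
    sysctl fs.xfs.age_buffer_centisecs

    # Stretch both to the documented maximum of 720000 (7200 s); this
    # lasts only until reboot unless also added to /etc/sysctl.conf.
    sysctl -w fs.xfs.xfssyncd_centisecs=720000
    sysctl -w fs.xfs.age_buffer_centisecs=720000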

Of course, then xfssyncd will not be doing its proper duty regularly ;)

We just need to see why it's always finding work to do when idle.
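
One rough way to check that from userspace, assuming a 2010-era kernel
where vm.block_dump is still available (the device name below is a
placeholder, not something from this thread):

    # Log every block I/O, tagged with the issuing task name, to dmesg.
    sysctl -w vm.block_dump=1
    # Leave the filesystem idle for a few sync intervals, then look for
    # xfssyncd/xfsbufd entries:
    dmesg | grep -i xfs

    # Alternatively, trace the block layer directly:
    blktrace -d /dev/sdb -o - | blkparse -i -

Either should show whether the periodic wakeups actually reach the disk.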

-Eric
