xfssyncd and disk spin down
Dave Chinner
david at fromorbit.com
Thu Jan 20 05:06:05 CST 2011
On Thu, Jan 20, 2011 at 12:01:43PM +0200, Petre Rodan wrote:
> On Fri, Dec 31, 2010 at 11:13:23AM +1100, Dave Chinner wrote:
> > On Mon, Dec 27, 2010 at 07:19:39PM +0200, Petre Rodan wrote:
> > >
> > > Hello Dave,
> > >
> > > On Tue, Dec 28, 2010 at 01:07:50AM +1100, Dave Chinner wrote:
> > > > Turn on the XFS tracing so we can see what is being written every
> > > > 36s. When the problem shows up:
> > > >
> > > > # echo 1 > /sys/kernel/debug/tracing/events/xfs/enable
> > > > # sleep 100
> > > > # cat /sys/kernel/debug/tracing/trace > trace.out
> > > > # echo 0 > /sys/kernel/debug/tracing/events/xfs/enable
> > > >
> > > > And post the trace.out file for us to look at.
> > >
> > > attached.
> > >
> > > you can disregard all the LVM partitions ('dev 254:.*') since they are on a different drive; probably only 8:17 is of interest.
> >
> > Ok, I can see the problem. The original patch I tested:
> >
> > http://oss.sgi.com/archives/xfs/2010-08/msg00026.html
> >
> > Made the log covering dummy transaction a synchronous transaction so
> > that the log was written and the superblock unpinned immediately to
> > allow the xfsbufd to write back the superblock and empty the AIL
> > before the next log covering check.
> >
> > On review, the log covering dummy transaction got changed to an
> > async transaction, so the superblock buffer is not unpinned
> > immediately. This was the patch committed:
> >
> > http://oss.sgi.com/archives/xfs/2010-08/msg00197.html
> >
> > As a result, the success of log covering and idling is then
> > dependent on whether the log gets written to disk to unpin the
> > superblock buffer before the next xfssyncd run. It seems that there
> > is a large chance that this log write does not happen, so the
> > filesystem never idles correctly. I've reproduced it here, and only
> > in one test out of ten did the filesystem enter an idle state
> > correctly. I guess I was unlucky enough to hit that 1-in-10 case
> > when I tested the modified patch.
> >
> > I'll cook up a patch to make the log covering behave like the
> > original patch I sent...
>
> I presume the new fix is the one provided by "xfs: ensure log
> covering transactions are synchronous", so I tested 2.6.37 patched
> with it and then 2.6.38-rc1, which has it included.
>
> instead of having xfssyncd write to the drive every 36s, we now have this:
....
> in other words, xfssyncd and xfsbufd now alternate at 18s intervals,
> keeping the drive busy with nothing constructive for hours after the
> last write to the drive.
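
(For reference, a filtered trace showing this sort of alternation can
be captured with the same commands as earlier in the thread, plus a
grep on the device's major:minor number - the '8:33' below is only an
example:)

# echo 1 > /sys/kernel/debug/tracing/events/xfs/enable
# sleep 100
# grep 'dev 8:33' /sys/kernel/debug/tracing/trace > trace.out
# echo 0 > /sys/kernel/debug/tracing/events/xfs/enable
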
>
> to add to the misfortune, since 2.6.37 'mount -o remount' is no
> longer able to bring the drive to a quiet state, so now the only way
> to achieve an idle drive is to fully unmount and then remount the
> partition.
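
(Assuming, purely as an example, that the filesystem is mounted at
/mnt/data and has an fstab entry, the unmount/remount workaround is
just:)

# umount /mnt/data
# mount /mnt/data
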
>
> just for the record, this is a different drive than the one at the
> beginning of the thread, and it has these parameters:
>
> meta-data=/dev/sdc1 isize=256 agcount=4, agsize=61047552 blks
> = sectsz=512 attr=2
> data = bsize=4096 blocks=244190208, imaxpct=25
> = sunit=0 swidth=0 blks
> naming =version 2 bsize=4096 ascii-ci=0
> log =internal bsize=4096 blocks=119233, version=2
> = sectsz=512 sunit=0 blks, lazy-count=0
                             ^^^^^^^^^^^^
> realtime =none extsz=4096 blocks=0, rtextents=0
>
> attached you'll find the trace (with accesses to other drives filtered out).
It's something to do with lazy-count=0. I'll look into it when I get
the chance - I almost never test with lazy-count=0 because lazy-count=1
is the default value.
I'd recommend that you convert the fs to lazy-count=1 when you get a
chance anyway, because it significantly reduces transaction
latency...
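
Something along these lines should do it; xfs_admin needs the
filesystem unmounted, the device name is taken from your xfs_info
output, and the mount point is just an example:

# umount /dev/sdc1
# xfs_admin -c 1 /dev/sdc1
# mount /dev/sdc1 /mnt/data
# xfs_info /mnt/data | grep lazy-count
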
Cheers,
Dave.
--
Dave Chinner
david at fromorbit.com