
Re: Marking inode dirty latency > 1000 msec on XFS!

To: Török Edwin <edwintorok@xxxxxxxxx>
Subject: Re: Marking inode dirty latency > 1000 msec on XFS!
From: David Chinner <dgc@xxxxxxx>
Date: Sat, 23 Feb 2008 11:06:12 +1100
Cc: lachlan@xxxxxxx, Arjan van de Ven <arjan@xxxxxxxxxxxxxxx>, xfs@xxxxxxxxxxx, David Chinner <dgc@xxxxxxx>
In-reply-to: <47BEA1E7.3010107@xxxxxxxxx>
References: <47B5DD9C.3080906@xxxxxxxxx> <47BE6C5C.2000605@xxxxxxx> <47BE8EE8.5020005@xxxxxxxxx> <47BEA1E7.3010107@xxxxxxxxx>
Sender: xfs-bounce@xxxxxxxxxxx
User-agent: Mutt/1.4.2.1i
On Fri, Feb 22, 2008 at 12:20:23PM +0200, Török Edwin wrote:
> Török Edwin wrote:
> >> What would be useful here is the
> >> average latency time.  The average might actually be quite low but if
> >> just
> >> once we have a maximum that is unusually large then just looking at that
> >> figure can be misleading.
> >>     
> >
> > I'll try to collect the raw numbers from /proc/latency_stats, that
> > contain a count, total time, and max time.
> 
> I was not able to reproduce the 1 second latency with David Chinner's
> reduce xfsaild wakeups patch; the maximum latency I got was 685 msec.
.....
> 
> <count> <sum> <maximum> <stacktrace>
> ----------------
> 2 47699 36897 xfs_buf_free default_wake_function xfs_buf_lock xfs_getsb
....
> 
> Average = 478072 usecs
> -----------------
.....
> -----------
> 1 685021 685021 xfs_buf_free default_wake_function xfs_buf_lock
> xfs_getsb xfs_trans_getsb xfs_trans_apply_sb_deltas _xfs_trans_commit
......
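The raw `/proc/latency_stats` records quoted above have the shape `<count> <sum> <maximum> <stacktrace>`, with times in microseconds, so the average is simply total sum divided by total count across the matching records. A minimal sketch of that arithmetic (the sample lines are taken from the figures quoted above; the parsing details of separator lines are an assumption about the file format):

```python
def parse_latency(lines):
    """Parse /proc/latency_stats-style lines into (count, sum_us, max_us, trace)."""
    records = []
    for line in lines:
        parts = line.split()
        if len(parts) < 4 or not parts[0].isdigit():
            continue  # skip separator lines such as "----------------"
        records.append((int(parts[0]), int(parts[1]), int(parts[2]),
                        " ".join(parts[3:])))
    return records

def average_us(records):
    """Overall average latency in usecs: total time / total event count."""
    count = sum(r[0] for r in records)
    total = sum(r[1] for r in records)
    return total / count if count else 0.0

sample = [
    "2 47699 36897 xfs_buf_free default_wake_function xfs_buf_lock xfs_getsb",
    "----------------",
    "1 685021 685021 xfs_buf_free default_wake_function xfs_buf_lock xfs_getsb",
]
recs = parse_latency(sample)
print(average_us(recs))  # (47699 + 685021) / 3 = 244240.0 usecs
```

This also shows why a lone maximum can mislead: one 685 ms outlier dominates an average built from otherwise short waits.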

Note the xfs_getsb() call in there - that's what the lazy-count option
avoids: waiting in the transaction subsystem to apply deltas to a
superblock that is currently locked.

Converting to a lazy-count filesystem is experimental right now;
it may eat your data. If you still want to try it, apply the patch
in this email and then convert the filesystem:

http://oss.sgi.com/archives/xfs/2008-02/msg00295.html
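The exact conversion command depends on the patched xfsprogs from the linked email, so the following is only a sketch. In later stock xfsprogs, lazy superblock counters can be toggled with `xfs_admin -c` on an *unmounted* filesystem (the device name here is purely illustrative):

```shell
umount /dev/sdb1
xfs_admin -c 1 /dev/sdb1        # enable lazy superblock counters
xfs_db -r -c version /dev/sdb1  # verify LAZYSBCOUNT shows in the feature bits
mount /dev/sdb1 /mnt
```

As the warning above says, treat this as experimental: run it on a scratch filesystem, or back up first.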

Cheers,

Dave.
-- 
Dave Chinner
Principal Engineer
SGI Australian Software Group

