Re: Strange behavior on the 2.4.18 XFS tree?

To: Michael Sinz <msinz@xxxxxxxxx>
Subject: Re: Strange behavior on the 2.4.18 XFS tree?
From: Steve Lord <lord@xxxxxxx>
Date: 20 May 2002 12:38:48 -0500
Cc: Keith Owens <kaos@xxxxxxx>, linux-xfs@xxxxxxxxxxx
In-reply-to: <3CE8ED3B.2090401@wgate.com>
References: <22203.1021466116@ocs3.intra.ocs.com.au> <3CE8ED3B.2090401@wgate.com>
Sender: owner-linux-xfs@xxxxxxxxxxx
On Mon, 2002-05-20 at 07:34, Michael Sinz wrote:
> Keith Owens wrote:
> > On Wed, 15 May 2002 07:30:09 -0400, 
> > Michael Sinz <msinz@xxxxxxxxx> wrote:
> > 
> >>I have only been able to see this behavior on my system running the
> >>kernel from the 2.4 XFS CVS tree.  It is somewhat reproducible.
> >>
> >>The problem is that I started up a number of "rxvt" sessions and they
> >>took well over a minute to start up.  During that time, the cpu
> >>utilization was almost nil.  (Less than 1%)
> >>
> >>There was a background find operation happening, and this seems to be
> >>the key item.
> > 
> > When the problem occurs, does typing sync get everything running again?
> > I have been seeing an intermittent lock problem with XFS where
> > everything stops until some other disk activity kicks in and the lock
> > is released.
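[The diagnostic Keith suggests can be sketched as a small shell snippet; this is an editor's illustration, not part of the original thread. The idea is to look for processes stuck in uninterruptible sleep (state "D", typically waiting on disk I/O or a filesystem lock), then issue sync to see if forcing disk activity releases them.]

```shell
#!/bin/sh
# List processes in uninterruptible sleep (STAT contains "D");
# these are the tasks likely blocked on the suspected XFS lock.
ps -eo pid,stat,comm | awk 'NR > 1 && $2 ~ /D/ { print }'

# Force pending writes to disk; if the stall is the lock problem
# Keith describes, the blocked processes should resume after this.
sync
echo "sync completed with status $?"
```

[If the rxvt sessions start immediately after the sync returns, that would point at the same stalled-until-disk-activity lock behavior.]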
> A build of 2.4.18-XFS from CVS on Friday night does not seem to show
> the problem.  At least not over the last 2+ days.
> It could also be that I simply have not run into it again.  Just
> providing some more information as it develops :-)

Keep an eye on this problem; the code changes that went in for the
multiple blocksize support will affect when we sync data to disk.
I think they could well have a positive effect on whatever is happening
to you.


> -- 
> Michael Sinz ---- Worldgate Communications ---- msinz@xxxxxxxxx
> A master's secrets are only as good as
>       the master's ability to explain them to others.

Steve Lord                                      voice: +1-651-683-3511
Principal Engineer, Filesystem Software         email: lord@xxxxxxx
