
Re: XFS problem

To: Kelbel Junior <jymmyjr@xxxxxxxxx>
Subject: Re: XFS problem
From: Christoph Hellwig <hch@xxxxxxxxxxxxx>
Date: Fri, 27 Jan 2012 05:58:59 -0500
Cc: Christoph Hellwig <hch@xxxxxxxxxxxxx>, linux-kernel@xxxxxxxxxxxxxxx, xfs@xxxxxxxxxxx
In-reply-to: <CAAA8XhMf9BhZP5csLfGVVbQxN7Uqh-+R6aqCLfezdiTECaRRjg@xxxxxxxxxxxxxx>
References: <CAAA8XhOXuszcyCaMOcVWb-erGAJQdhBHCX-gsJ4KpH+Td7+bPQ@xxxxxxxxxxxxxx> <20120124213936.GA1505@xxxxxxxxxxxxx> <CAAA8XhMf9BhZP5csLfGVVbQxN7Uqh-+R6aqCLfezdiTECaRRjg@xxxxxxxxxxxxxx>
User-agent: Mutt/1.5.21 (2010-09-15)
On Thu, Jan 26, 2012 at 01:57:34PM -0200, Kelbel Junior wrote:
> Well, on 24/01 I upgraded one server to kernel 3.2.1 and forgot to
> read my emails... until now it has been running without problems.
> Then, yesterday I put kernel 3.2.1 on another server, and this morning
> I got several of the same messages, "server013 kernel: XFS: possible
> memory allocation deadlock in kmem_alloc (mode:0x250)", plus delays.
> (No call trace, because I forgot to apply that patch with dump_stack.)

Not a problem, we know where it comes from now.

> I work in a cache solution company, so I/O performance is very
> important in our context.

What I'm really curious about is what kind of workloads you have.  We
should only run into problems here if we have a huge extent indirection
array, which points to a massively fragmented file.  Right now the
handling of that isn't optimal, and we need to improve on that.  But
you'd probably get better results by avoiding that massive fragmentation
in the first place, e.g. try to preallocate or set extent size hints
if you do random writes to a sparse file.
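For what it's worth, both suggestions can be applied from userspace without changing the write pattern. A minimal sketch follows; the function name, file path, and sizes are illustrative, and it uses the generic FS_IOC_FSGETXATTR/FS_IOC_FSSETXATTR ioctls (the modern spellings of the XFS-specific XFS_IOC_FSGETXATTR/XFS_IOC_FSSETXATTR), so it assumes reasonably recent kernel headers:

```c
/* Sketch: preallocate a file and set a per-file extent size hint
 * before doing random writes, to keep extent counts down on XFS.
 * Names and sizes here are illustrative, not from the thread. */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/types.h>
#include <linux/fs.h>

/* Returns 0 if the file could be opened and closed; hint/allocation
 * failures are reported but tolerated, since not every filesystem
 * supports extent size hints. */
static int prepare_cache_file(const char *path, off_t size,
                              unsigned int extsize)
{
    int fd = open(path, O_CREAT | O_RDWR, 0644);
    if (fd < 0) {
        perror("open");
        return -1;
    }

    /* Preallocate the whole file up front so the allocator can use
     * a few large extents instead of one per random write. */
    if (fallocate(fd, 0, 0, size) < 0)
        perror("fallocate");

    /* Set an extent size hint: XFS rounds each allocation in this
     * file up to extsize bytes when FS_XFLAG_EXTSIZE is set. */
    struct fsxattr fsx;
    if (ioctl(fd, FS_IOC_FSGETXATTR, &fsx) == 0) {
        fsx.fsx_xflags |= FS_XFLAG_EXTSIZE;
        fsx.fsx_extsize = extsize;
        if (ioctl(fd, FS_IOC_FSSETXATTR, &fsx) < 0)
            perror("FS_IOC_FSSETXATTR");
    }

    close(fd);
    return 0;
}
```

Calling e.g. prepare_cache_file("cachefile", 1 << 30, 16 << 20) once at file-creation time, before the random writes start, should keep the extent list small even for a cache-style workload. Note the hint only applies to allocations made after it is set, so it is pointless on an already fragmented file.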
