
Re: inode_permission NULL pointer dereference in 3.13-rc1

To: Al Viro <viro@xxxxxxxxxxxxxxxxxx>
Subject: Re: inode_permission NULL pointer dereference in 3.13-rc1
From: Christoph Hellwig <hch@xxxxxxxxxxxxx>
Date: Wed, 27 Nov 2013 02:09:06 -0800
Cc: Christoph Hellwig <hch@xxxxxxxxxxxxx>, linux-fsdevel@xxxxxxxxxxxxxxx, xfs@xxxxxxxxxxx
Delivered-to: xfs@xxxxxxxxxxx
In-reply-to: <20131127064351.GN10323@xxxxxxxxxxxxxxxxxx>
References: <20131124140413.GA19271@xxxxxxxxxxxxx> <20131124152758.GL10323@xxxxxxxxxxxxxxxxxx> <20131125160648.GA4933@xxxxxxxxxxxxx> <20131126131134.GM10323@xxxxxxxxxxxxxxxxxx> <20131126141253.GA28062@xxxxxxxxxxxxx> <20131127064351.GN10323@xxxxxxxxxxxxxxxxxx>
User-agent: Mutt/1.5.21 (2010-09-15)
On Wed, Nov 27, 2013 at 06:43:51AM +0000, Al Viro wrote:
> On Tue, Nov 26, 2013 at 06:12:53AM -0800, Christoph Hellwig wrote:
> > On Tue, Nov 26, 2013 at 01:11:34PM +0000, Al Viro wrote:
> > > .config, please - all I'm seeing on mine is a bloody awful leak somewhere
> > > in VM that I'd been hunting for last week, so the damn thing gets OOMed
> > > halfway through xfstests run ;-/
> > #
> > # Automatically generated file; DO NOT EDIT.
> > # Linux/x86 3.12.0-hubcap2 Kernel Configuration
> [snip]
> Could you post the output of your xfstests run?  FWIW, with your .config
> I'm seeing the same leak (shut down by turning spinlock debugging off,
> it's split page table locks that end up leaking when they are separately
> allocated) *and* xfs/253 seems to be sitting there indefinitely once
> we get to it - about 100% system time, no blocked processes, xfs_db running
> all the time for hours.  No oopsen on halt with that sucker skipped *or*
> interrupted halfway through.

Might be that your xfsprogs is old enough that it still has the bug that
the test wants to verify is fixed.
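For reference, since xfs/253 drives xfs_db, something like this would show
which xfsprogs is on the guest (just a sketch - xfs_db may or may not be
installed, and -V prints the xfsprogs version either way for xfs_db and
mkfs.xfs):

```shell
# Report the installed xfsprogs version, or note its absence.
# xfs_db ships with xfsprogs and prints the package version with -V.
if command -v xfs_db >/dev/null 2>&1; then
        xfsprogs_version=$(xfs_db -V)
else
        xfsprogs_version="xfs_db not installed (xfsprogs missing?)"
fi
echo "$xfsprogs_version"
```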

> Setup is kvm on 3.3GHz amd64 6-core, with 4Gb given to guest (after having
> one too many OOMs on leaks).  virtio disk, with raw image sitting in a file
> on host, xfstests from current git, squeeze/amd64 userland on guest.
> Reasonably fast host disks (not that the sucker had been IO-bound, anyway).
> Tried both with UP and 4-way SMP guest, same picture on both...

I'm running on my laptop with a dual-core 2.5GHz i5, on preallocated
raw files on XFS on an older Intel SSD. Qemu command line:

kvm \
        -m 2048 \
        -smp 4 \
        -kernel arch/x86/boot/bzImage \
        -append "root=/dev/vda console=tty0 console=ttyS0,115200n8" \
        -nographic \
        -drive if=virtio,file=/work/images/debian.qcow2,cache=none,serial="test1234" \
        -drive if=virtio,file=/work/images/test.img,cache=none,aio=native \
        -drive if=virtio,file=/work/images/scratch.img,cache=none,aio=native

It's probably enough to run ./check -g quick to reproduce it, too - let
me verify that, which I'd have to do anyway to catch the output.
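Something like this should catch the output while running just the quick
group (the xfstests checkout path and log location are assumptions, adjust
to your setup):

```shell
# Run the xfstests quick group and keep a copy of the output for the list.
XFSTESTS_DIR=${XFSTESTS_DIR:-/work/xfstests}   # assumption: checkout location
LOG=${LOG:-/tmp/xfstests-quick.log}
if [ -x "$XFSTESTS_DIR/check" ]; then
        # tee keeps a log of the whole run while still printing to the console
        (cd "$XFSTESTS_DIR" && ./check -g quick 2>&1 | tee "$LOG")
else
        echo "no xfstests checkout at $XFSTESTS_DIR" | tee "$LOG"
fi
```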

Also, if you want me to look into something else, feel free - it's very
reproducible here.  I wish I could be more help, but with all the RCU
and micro-optimizations in the path lookup code I can't claim to really
understand it anymore.
