
Re: Lockdep for 3.10.0+ for rm of kernel git...

To: "Michael L. Semon" <mlsemon35@xxxxxxxxx>
Subject: Re: Lockdep for 3.10.0+ for rm of kernel git...
From: Dave Chinner <david@xxxxxxxxxxxxx>
Date: Mon, 15 Jul 2013 11:24:41 +1000
Cc: xfs@xxxxxxxxxxx
Delivered-to: xfs@xxxxxxxxxxx
In-reply-to: <51E1F1F4.40500@xxxxxxxxx>
References: <51E1F1F4.40500@xxxxxxxxx>
User-agent: Mutt/1.5.21 (2010-09-15)
On Sat, Jul 13, 2013 at 08:33:56PM -0400, Michael L. Semon wrote:
> Hi!  Here's a lockdep report that showed up while running
> `rm -r linux` to remove an old kernel git directory.  This is from
> a 3.10.0+ git kernel on a non-CRC XFS filesystem that's less than
> a week old.
> 
> I only get lockdep reports like this when I'm minding my own
> business.  xfstests can run until there's a faint burning
> electrical smell in the room, and this won't show up.  But if all
> I'm doing is removing things to prepare for the next xfstests
> session or for git activity, then this report appears.  I'm not
> sure whether I get the same report every time, but it's somehow
> related to deletes, and AFAIK it's newer than the production 3.10
> kernel.
> 
> In the lockdep reports, this pattern is prominent...
> 
>        CPU0
>        ----
>   lock(&(&ip->i_lock)->mr_lock);
>   <Interrupt>
>     lock(&(&ip->i_lock)->mr_lock);
> 
> ...and lockdep hasn't suggested the two-CPU (SMP) scenario on XFS
> in some time.
> 
> There does seem to be some new lockdep work in the kernel, so
> maybe it's not an XFS regression but new lockdep coverage catching
> an existing issue.
.....
>  [<c10a3d44>] __get_free_pages+0x1c/0x37
>  [<c1025dc4>] pte_alloc_one_kernel+0x14/0x16
>  [<c10b7716>] __pte_alloc_kernel+0x16/0x71
>  [<c10c0f27>] vmap_page_range_noflush+0x12c/0x13a
>  [<c10c1fdb>] vm_map_ram+0x32c/0x3d7
>  [<c10c1d21>] ? vm_map_ram+0x72/0x3d7
>  [<c1171d3b>] _xfs_buf_map_pages+0x5b/0xe1

It's vmap related. Again. Ignore it.
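
The "<Interrupt>" in the quoted pattern is a bit misleading: lockdep
models memory reclaim as an interrupting context and prints the same
scenario template for it.  What it's most likely complaining about is
that vm_map_ram() takes no gfp_t argument, so the vmap area and page
table allocations it does internally (the __pte_alloc_kernel in your
trace) are plain GFP_KERNEL.  Lockdep sees a GFP_KERNEL allocation
ordered against the ilock, decides reclaim could re-enter the
filesystem and take the same lock class, and reports a possible
deadlock.  In practice it doesn't deadlock here, hence: ignore it.

Roughly, this is the shape lockdep objects to (a generic sketch of
the pattern, not XFS code):

/*
 * Generic sketch, not XFS code: a GFP_KERNEL allocation made while
 * holding a lock that the reclaim path can also take.  Lockdep
 * reports this as the "<Interrupt>" scenario quoted above.
 */
#include <linux/rwsem.h>
#include <linux/slab.h>

static DECLARE_RWSEM(demo_ilock);	/* stands in for ip->i_lock */

/* Task context: allocate with GFP_KERNEL while holding the lock. */
static void *demo_map(size_t size)
{
	void *p;

	down_write(&demo_ilock);
	p = kmalloc(size, GFP_KERNEL);	/* may enter direct reclaim */
	up_write(&demo_ilock);
	return p;
}

/* Reclaim path (e.g. a shrinker): takes the same lock class. */
static void demo_reclaim_one(void)
{
	down_write(&demo_ilock);
	/* ... evict an object ... */
	up_write(&demo_ilock);
}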

On the bright side, I think I've found precedent in the kernel for
getting this fixed, so stay tuned...
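
If the precedent turns out to be the PF_MEMALLOC_NOIO task flag that
went into 3.9 (just a guess until patches appear), the fix would be
shaped roughly like this untested sketch around the vm_map_ram() call
in _xfs_buf_map_pages():

/*
 * Untested sketch, assuming the PF_MEMALLOC_NOIO approach: mark the
 * task so that every allocation made inside vm_map_ram() is
 * implicitly degraded to GFP_NOIO, sidestepping the fact that
 * vm_map_ram() has no gfp_t argument at all.
 */
#include <linux/mm.h>
#include <linux/sched.h>	/* memalloc_noio_save/restore */
#include <linux/vmalloc.h>	/* vm_map_ram */

static void *map_pages_noio(struct page **pages, unsigned int count)
{
	unsigned int noio_flag;
	void *addr;

	noio_flag = memalloc_noio_save();
	addr = vm_map_ram(pages, count, -1 /* any node */, PAGE_KERNEL);
	memalloc_noio_restore(noio_flag);

	return addr;	/* NULL if the mapping failed */
}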

Cheers,

Dave.
-- 
Dave Chinner
david@xxxxxxxxxxxxx
