
Re: Loopback device hung [was Re: xfs deadlock on 3.9-rc5 running xfstests case #78]

To: Jens Axboe <axboe@xxxxxxxxx>
Subject: Re: Loopback device hung [was Re: xfs deadlock on 3.9-rc5 running xfstests case #78]
From: CAI Qian <caiqian@xxxxxxxxxx>
Date: Tue, 2 Apr 2013 05:31:06 -0400 (EDT)
Cc: Dave Chinner <david@xxxxxxxxxxxxx>, xfs@xxxxxxxxxxx, LKML <linux-kernel@xxxxxxxxxxxxxxx>
Delivered-to: xfs@xxxxxxxxxxx
In-reply-to: <20130402090047.GF3670@xxxxxxxxx>
References: <1462091996.435156.1364882416199.JavaMail.root@xxxxxxxxxx> <247719576.438259.1364882929749.JavaMail.root@xxxxxxxxxx> <20130402070537.GP6369@dastard> <20130402071937.GC3670@xxxxxxxxx> <20130402073035.GD3670@xxxxxxxxx> <14055702.547701.1364891947331.JavaMail.root@xxxxxxxxxx> <20130402090047.GF3670@xxxxxxxxx>
Thread-index: VFaXVt88xXrJyY97wdDxyGpExk9LwQ==
Thread-topic: Loopback device hung [was Re: xfs deadlock on 3.9-rc5 running xfstests case #78]

----- Original Message -----
> From: "Jens Axboe" <axboe@xxxxxxxxx>
> To: "CAI Qian" <caiqian@xxxxxxxxxx>
> Cc: "Dave Chinner" <david@xxxxxxxxxxxxx>, xfs@xxxxxxxxxxx, "LKML" 
> <linux-kernel@xxxxxxxxxxxxxxx>
> Sent: Tuesday, April 2, 2013 5:00:47 PM
> Subject: Re: Loopback device hung [was Re: xfs deadlock on 3.9-rc5 running 
> xfstests case #78]
> 
> On Tue, Apr 02 2013, CAI Qian wrote:
> > 
> > 
> > ----- Original Message -----
> > > From: "Jens Axboe" <axboe@xxxxxxxxx>
> > > To: "Dave Chinner" <david@xxxxxxxxxxxxx>
> > > Cc: "CAI Qian" <caiqian@xxxxxxxxxx>, xfs@xxxxxxxxxxx, "LKML"
> > > <linux-kernel@xxxxxxxxxxxxxxx>
> > > Sent: Tuesday, April 2, 2013 3:30:35 PM
> > > Subject: Re: Loopback device hung [was Re: xfs deadlock on 3.9-rc5
> > > running xfstests case #78]
> > > 
> > > On Tue, Apr 02 2013, Jens Axboe wrote:
> > > > On Tue, Apr 02 2013, Dave Chinner wrote:
> > > > > [Added Jens Axboe to CC]
> > > > > 
> > > > > On Tue, Apr 02, 2013 at 02:08:49AM -0400, CAI Qian wrote:
> > > > > > Saw this on almost all servers, ranging from x64, ppc64 and s390x, with
> > > > > > kernel 3.9-rc5 and xfsprogs-3.1.10. Never caught this in 3.9-rc4, so it
> > > > > > looks like something new broke this. Log is here with sysrq debug info.
> > > > > > http://people.redhat.com/qcai/stable/log
> > > > 
> > > > CAI Qian, can you try and back the below out and test again?
> > > 
> > > Nevermind, it's clearly that one. The below should improve the
> > > situation, but it's not pretty. A better fix would be to allow
> > > auto-deletion even if PART_NO_SCAN is set.
> > Jens, when I compiled mainline (up to fefcdbe) with this patch, it errored out,
> 
> Looks like I sent the wrong one, updated below.
The patch works well. Thanks!
CAI Qian
> 
> diff --git a/drivers/block/loop.c b/drivers/block/loop.c
> index fe5f640..faa3afa 100644
> --- a/drivers/block/loop.c
> +++ b/drivers/block/loop.c
> @@ -1057,14 +1057,15 @@ static int loop_clr_fd(struct loop_device *lo)
>               struct disk_part_iter piter;
>               struct hd_struct *part;
>  
> -             mutex_lock_nested(&bdev->bd_mutex, 1);
> -             invalidate_partition(bdev->bd_disk, 0);
> -             disk_part_iter_init(&piter, bdev->bd_disk,
> -                                     DISK_PITER_INCL_EMPTY);
> -             while ((part = disk_part_iter_next(&piter)))
> -                     delete_partition(bdev->bd_disk, part->partno);
> -             disk_part_iter_exit(&piter);
> -             mutex_unlock(&bdev->bd_mutex);
> +             if (mutex_trylock(&bdev->bd_mutex)) {
> +                     invalidate_partition(bdev->bd_disk, 0);
> +                     disk_part_iter_init(&piter, bdev->bd_disk,
> +                                             DISK_PITER_INCL_EMPTY);
> +                     while ((part = disk_part_iter_next(&piter)))
> +                             delete_partition(bdev->bd_disk, part->partno);
> +                     disk_part_iter_exit(&piter);
> +                     mutex_unlock(&bdev->bd_mutex);
> +             }
>       }
>  
>       /*
> 
> --
> Jens Axboe
> 
> 
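For context, the fix above swaps the blocking mutex_lock_nested() on bdev->bd_mutex for a mutex_trylock(), so loop_clr_fd() never sleeps waiting for bd_mutex while some other path may already hold it and wait on the loop device in turn. Below is a minimal userspace sketch of that pattern, not kernel code: the mutex names and the assumed lock ordering of the second path are stand-ins for illustration, not taken from this thread.

/*
 * Sketch of the trylock-breaks-ABBA-deadlock pattern used by the patch
 * above.  Purely illustrative: loop_lock, bd_mutex and the assumed
 * opposite lock ordering in open_path() are stand-ins, not the real
 * kernel locking.  Build with: gcc -pthread sketch.c
 */
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t loop_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t bd_mutex  = PTHREAD_MUTEX_INITIALIZER;

/* Path A: holds loop_lock, then wants bd_mutex (like loop_clr_fd above). */
static void *clear_path(void *arg)
{
	(void)arg;
	pthread_mutex_lock(&loop_lock);
	/* A blocking lock here could wait forever on path B; trylock
	 * gives up instead, at the cost of skipping the cleanup. */
	if (pthread_mutex_trylock(&bd_mutex) == 0) {
		puts("clear_path: did partition cleanup");
		pthread_mutex_unlock(&bd_mutex);
	} else {
		puts("clear_path: bd_mutex busy, skipped cleanup");
	}
	pthread_mutex_unlock(&loop_lock);
	return NULL;
}

/* Path B: holds bd_mutex, then wants loop_lock (the opposite order). */
static void *open_path(void *arg)
{
	(void)arg;
	pthread_mutex_lock(&bd_mutex);
	pthread_mutex_lock(&loop_lock);
	puts("open_path: opened device");
	pthread_mutex_unlock(&loop_lock);
	pthread_mutex_unlock(&bd_mutex);
	return NULL;
}

int main(void)
{
	pthread_t a, b;
	pthread_create(&a, NULL, clear_path, NULL);
	pthread_create(&b, NULL, open_path, NULL);
	pthread_join(a, NULL);
	pthread_join(b, NULL);
	return 0;
}

The trade-off is the one Jens calls "not pretty": when the trylock fails, the partition cleanup is simply skipped rather than deferred or retried.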
