
To: Arkadiusz Bubała <arkadiusz.bubala@xxxxxxxxxx>
Subject: Re: [BUG] XFS: Assertion failed: atomic_read(&pag->pag_ref) == 0, file: fs/xfs/xfs_mount.c, line: 272
From: Eric Sandeen <sandeen@xxxxxxxxxxx>
Date: Wed, 22 May 2013 09:04:48 -0500
Cc: xfs@xxxxxxxxxxx
Delivered-to: xfs@xxxxxxxxxxx
In-reply-to: <519C7D97.50909@xxxxxxxxxx>
References: <519B6738.9030603@xxxxxxxxxx> <20130521233937.GX29466@dastard> <519C7D97.50909@xxxxxxxxxx>
User-agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.8; rv:17.0) Gecko/20130509 Thunderbird/17.0.6
On 5/22/13 3:11 AM, Arkadiusz Bubała wrote:
> Hello,
>> On Tue, May 21, 2013 at 02:23:20PM +0200, Arkadiusz Bubała wrote:
>>> Hello,
>>> I've got a call trace which should be fixed by the "drop buffer io
>>> reference when a bad bio is built" patch
>>> (http://patchwork.xfs.org/patch/3956/). The error occurred on an already
>>> patched Linux kernel 3.2.42.
>> That's an old kernel. Can you reproduce on a current TOT kernel?
>> It's entirely possible that this problem has been fixed, as we
>> definitely made some changes to the mount error handling path since
>> 3.2....
> Ok. I'll try.
>>> The test environment consists of two machines, a target and an initiator.
>>> The first machine works as the target with a QLogic Corp. ISP2432-based 4Gb
>>> Fibre Channel device. Storage is placed on two KINGSTON SNV425S SSDs
>>> working as a RAID0 array. The RAID is managed by an LSI MegaRAID SAS 1068
>>> controller.
>>> The second machine works as the initiator with the same QLogic card.
>>> After a few days of running the test script I got the following call trace
>>> and XFS stopped working.
>> Can you narrow this down from "takes several days" to the simplest
>> possible reproducer? It happened due to IO errors during mount, so
>> maybe you can dig that part out of your script and give us a test
>> case that reproduces on the first mount?
> I'll try. These errors occur only under heavy load.
> Is there any way to simulate I/O errors on an XFS filesystem?

You can use something like dm-flakey or md-faulty block devices, perhaps.
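As a rough sketch of the dm-flakey approach: the target takes a table of the form `<start> <length> flakey <dev> <offset> <up secs> <down secs>`, so the device works normally for the "up" interval and returns I/O errors for the "down" interval. The device path, sector count, and mapping name below are placeholders, not values from this thread; in practice you would get the size with `blockdev --getsz` and run the commented commands as root.

```shell
#!/bin/sh
# Hypothetical example: build a dm-flakey table that makes a device
# throw I/O errors for 5 seconds out of every 30-second window.
DEV=/dev/sdb1        # assumption: device backing the XFS filesystem
SECTORS=1048576      # assumption: size in 512-byte sectors
                     # (really: SECTORS=$(blockdev --getsz "$DEV"))
UP=25                # seconds the device behaves normally
DOWN=5               # seconds the device errors out

# dm-flakey table: <start> <length> flakey <dev> <offset> <up> <down>
TABLE="0 $SECTORS flakey $DEV 0 $UP $DOWN"
echo "$TABLE"

# To actually create and use the mapping (root required):
#   dmsetup create flaky-xfs --table "$TABLE"
#   mkfs.xfs -f /dev/mapper/flaky-xfs
#   mount /dev/mapper/flaky-xfs /mnt/test
#   ... run the workload, then: umount /mnt/test; dmsetup remove flaky-xfs
```

Running heavy I/O on a filesystem mounted over such a mapping injects periodic error bursts without real hardware faults, which should help turn a "takes several days" failure into a quicker reproducer.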
