
Re: xfs_force_shutdown called from file fs/xfs/xfs_trans_buf.c

To: linux-xfs@xxxxxxxxxxx
Subject: Re: xfs_force_shutdown called from file fs/xfs/xfs_trans_buf.c
From: Richard Freeman <freemanrich@xxxxxxxxx>
Date: Mon, 4 Aug 2008 16:55:41 +0000 (UTC)
References: <B3EDBE0F860AF74BAA82EF17A7CDEDC660BE05A3@xxxxxxxxxxxxxxxxxxxxxxx> <C8CBE79A-8D1D-4151-ADFD-2C9400FAF356@xxxxxxx>
Sender: xfs-bounce@xxxxxxxxxxx
User-agent: Loom/3.14 (http://gmane.org/)
Jay Sullivan <jpspgd <at> rit.edu> writes:
> Today I upgraded to the latest stable kernel in Gentoo (2.6.23-r3) and  
> I'm still on xfsprogs 2.9.4, also the latest stable release.  A few  
> hours after rebooting to load the new kernel, I saw the following in  
> dmesg:
> ####################
> attempt to access beyond end of device
> dm-0: rw=0, want=68609558288793608, limit=8178892800
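For scale: in that dmesg line rw=0 is a read, and both numbers are 512-byte sectors, so the read wanted a sector millions of device-lengths past the end of dm-0. A quick sanity check on the quoted figures (plain shell arithmetic, nothing touched on disk):

```shell
# Numbers copied from the dmesg line above; units are 512-byte sectors.
want=68609558288793608   # sector the read asked for
limit=8178892800         # total sectors on dm-0 (~3.9 TB)

# How many whole device-lengths past the end the request landed:
echo $(( want / limit ))   # roughly 8.4 million
```

A request that far out of range looks more like a garbage block pointer or a mangled mapping than a volume that merely came up slightly smaller than expected, which fits a corruption theory either way.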

I just started getting these on an ext3 filesystem, also on Gentoo with the 
latest stable kernel.  I suspect there is an lvm bug of some kind that is 
responsible.  I ran an e2fsck on the filesystem and managed to corrupt not 
only that filesystem, but also several others on the same RAID.  I'm probably 
going to have to try to salvage what I can from the no-longer-booting system 
and rebuild from scratch/backups.

Either lvm has some major bug, or somehow e2fsck is bypassing the lvm layer 
and writing directly to the drives.  It shouldn't be possible to write to one 
logical volume and modify data stored in a different logical volume on the 
same md raid-5 device.  A check of the underlying RAID turns up no issues - I 
suspect the problem is in the lvm layer.
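One read-only way to test that theory is to check whether any of the device-mapper tables overlap on the backing device. The sketch below runs an overlap check against made-up linear mappings (the volume names and offsets are invented for illustration); on a real box you would pipe `dmsetup table` output into the awk program instead of the here-document:

```shell
# Each 'dmsetup table' line for a linear LV looks like:
#   <name>: <lv_start> <length> linear <major:minor> <dev_offset>
# Two segments on the same backing device must never overlap.
# The sample data here is invented; replace the here-document with
# a real 'dmsetup table' run to check an actual system.
cat <<'EOF' |
vg-root: 0 20971520 linear 9:0 384
vg-home: 0 41943040 linear 9:0 20971904
vg-var: 0 8388608 linear 9:0 62914944
EOF
awk '$4 == "linear" {
    dev = $5; start = $6 + 0; end = start + $3
    # Compare against every previously seen segment on the same device.
    for (i = 1; i <= n; i++)
        if (d[i] == dev && start < e[i] && s[i] < end)
            printf "OVERLAP: %s and %s on %s\n", name[i], $1, dev
    n++; d[n] = dev; s[n] = start; e[n] = end; name[n] = $1
}'
```

No output means the linear mappings are disjoint; any OVERLAP line would point straight at corrupted LVM/device-mapper metadata, since writes through one LV would then land inside another.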

Googling around for "access beyond end of device" turns up other reports of 
similar issues, though the problem is evidently rare.
