
Re: xfs mount fails 'can't read superblock'

To: xfs@xxxxxxxxxxx
Subject: Re: xfs mount fails 'can't read superblock'
From: Richard Neuboeck <hawk@xxxxxxxxxxxxxxxx>
Date: Wed, 03 Oct 2012 11:18:03 +0200
In-reply-to: <505D85CF.9070701@xxxxxxxxxxxxxxxx>
References: <50583590.7060702@xxxxxxxxxxxxxxxx> <20120918205508.GA31501@dastard> <505D85CF.9070701@xxxxxxxxxxxxxxxx>
User-agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.8; rv:15.0) Gecko/20120907 Thunderbird/15.0.1

Just in case someone stumbles over this thread in search of help: I
found a solution.

I upgraded the host machine from Ubuntu lucid to Ubuntu precise and
expected the same mounting behavior as in the virtual machine running
'precise'. However, on the host no error message showed up.

Long story short: after some tests on the virtual machine configuration
(libvirt, qemu/kvm) it turned out to be the cache='none' option. After
changing this to <driver name='qemu' type='raw' cache='default'/>
everything works.
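For reference, the relevant libvirt disk stanza looked roughly like this. This is a hedged sketch, not the exact configuration from the thread; the source path and target device are taken from the xfs_info and mount output quoted below, and the comments reflect my understanding of the cache modes:

```xml
<!-- Hypothetical libvirt <disk> stanza; only the cache attribute was changed. -->
<disk type='block' device='disk'>
  <!-- cache='none' opens the host device with O_DIRECT, so guest IO must
       match the host device's sector size; cache='default' sidestepped
       that sector size mismatch here. -->
  <driver name='qemu' type='raw' cache='default'/>
  <source dev='/dev/mapper/storage-huddle'/>
  <target dev='vdb' bus='virtio'/>
</disk>
```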


On 9/22/12 11:33 AM, Richard Neuboeck wrote:
> Hi Dave,
> thanks for your help! You are absolutely right.
> On 9/18/12 10:55 PM, Dave Chinner wrote:
>> On Tue, Sep 18, 2012 at 10:49:20AM +0200, Richard Neuboeck wrote:
>>> I've an XFS-related problem that boggles my mind, and I couldn't find a
>>> solution yet.
>>> I've got a virtual machine (huddle) that gets a ~66TB logical volume
>>> from the host, handed over as a (virtio) block device (/dev/vdb). For ease of
>>> maintenance I didn't partition the device but formatted it directly with
>>> xfs. The system at the time of formatting was Ubuntu Lucid 64bit.
>> virtio configuration? (i.e. cache=none?)
> Yes. Virtio and cache=none.
>>> A few days ago I upgraded the virtual machine to Ubuntu LTS 'precise',
>>> Kernel 3.2, and got the following error while trying to mount the device:
>> Upgraded from what?
> Ubuntu Lucid.
>>> root@huddle:~# mount /dev/vdb /mnt/storage
>>> mount: /dev/vdb: can't read superblock
>>> dmesg shows some more info:
>>> root@huddle:~# dmesg | tail
>>> [  672.774206] end_request: I/O error, dev vdb, sector 0
>>> [  672.774393] XFS (vdb): SB buffer read failed
>>> At first I thought the block device had some error and checked the
>>> virtual machine configuration and host system.
>>> From the host system (Ubuntu lucid 64bit, Kernel 2.6) I can still mount
>>> the xfs formatted device without problems. I also ran xfs_repair -n that
>>> didn't show any problem.
>> So the filesystem is accessible via direct IO from the host. What's
>> the xfs_info output once it is mounted?
> xfs_info on the host system shows a 4K sector size:
> root@wirt2:~# xfs_info /mnt/temp/
> meta-data=/dev/mapper/storage-huddle isize=256    agcount=66, agsize=268435455 blks
>          =                       sectsz=4096  attr=2
> data     =                       bsize=4096   blocks=17572571136, imaxpct=5
>          =                       sunit=0      swidth=0 blks
> naming   =version 2              bsize=4096   ascii-ci=0
> log      =internal               bsize=4096   blocks=521728, version=2
>          =                       sectsz=4096  sunit=1 blks, lazy-count=1
> realtime =none                   extsz=4096   blocks=0, rtextents=0
>>> I tried to hand the virtual machine a different ext4-formatted block
>>> device (also without a partition and preformatted). This didn't yield any
>>> mount problems.
>>> The Ubuntu 'precise' machine also has the older kernel (2.6.32-42)
>>> available. Booting this kernel, the xfs-formatted block device gets
>>> mounted without error.
>> The newer kernel has a different buffer cache implementation, so
>> sector sized IO (such as superblocks) is cached and issued
>> differently.
>>> The curious part is that it is still possible to mount the volume under
>>> Kernel 3.2 without error using the loop option:
>>> root@huddle:~# mount -v -t xfs -o loop /dev/vdb /mnt/storage/
>> Turns all IO into pagecache based IO, so 4k aligned. Will avoid any
>> sector size mismatch issues.
>>> Trying xfs_repair also brings up the I/O Error unless I use it with the
>>> -f option under Kernel 3.2.
>> -f can turn direct IO into buffered IO if there is a sector size
>> mismatch between the filesystem and the underlying storage.
>>> Obviously the problem is Kernel 3.2 related. I'm not sure if I'm at the
>>> right place in the XFS mailing list, but thought it would make a good
>>> starting point since I couldn't find anything related in bugzilla or the
>>> web in general, and the problem didn't show up using ext4 (so it may not
>>> be a generic kernel problem).
>> Sounds like a sector size based problem to me - direct IO does
>> sector-aligned and sector-sized IO, buffered IO does page-sized IO. So my
>> initial thought is that you've got a 512 byte sector filesystem on a
>> 4k sector device....
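Dave's hypothesis can be checked by comparing the filesystem's superblock sector size against the device's logical sector size. A minimal sketch: on a live system one would run `xfs_info` and `blockdev --getss` (as root) on the device; since those need a real block device, the sectsz parsing below is demonstrated on the xfs_info output quoted in this thread:

```shell
# On a live system (as root), the two numbers to compare would be:
#   xfs_info /dev/vdb          -> sectsz=...  (filesystem sector size)
#   blockdev --getss /dev/vdb  -> device logical sector size
# A 512-byte-sector filesystem on a device reporting 4096-byte sectors
# fails direct IO. Here we just parse sectsz from the xfs_info output
# quoted above in this thread.
sample='meta-data=/dev/mapper/storage-huddle isize=256    agcount=66, agsize=268435455 blks
         =                       sectsz=4096  attr=2'
fs_sectsz=$(printf '%s\n' "$sample" | sed -n 's/.*sectsz=\([0-9]*\).*/\1/p' | head -n1)
echo "filesystem sector size: $fs_sectsz"   # prints: filesystem sector size: 4096
```

If the two sizes differ (e.g. sectsz=512 on a 4096-byte device, or vice versa), sector-sized direct IO on the superblock will fail exactly as described.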
>>> Running any kernel, blkid still identifies the device correctly as xfs
>>> volume:
>>> root@huddle:~# blkid /dev/vdb
>>> /dev/vdb: UUID="5adcd575-d3f2-48c3-81de-104f125b275e" TYPE="xfs"
>> Buffered IO, again.
> I verified it with a 512B sector loopback device, which works fine with
> xfs (or whatever the file system choice is).
> I can only nod my head in shame. Though I know this may be the wrong
> mailing list for my problem, I have absolutely no idea how to proceed.
> Googling didn't reveal the holy grail yet. Is there a way to realign the
> filesystem (without losing the data on it)?
> Thanks!
> Richard
>> Cheers,
>> Dave.
>> _______________________________________________
>> xfs mailing list
>> xfs@xxxxxxxxxxx
>> http://oss.sgi.com/mailman/listinfo/xfs

