[root@ha2 /root]# mkfs -t xfs -f /dev/sdb1
meta-data=/dev/sdb1              isize=256    agcount=51, agsize=262144 blks
data     =                       bsize=4096   blocks=13305828, imaxpct=25
         =                       sunit=0      swidth=0 blks, unwritten=0
naming   =version 2              bsize=4096
log      =internal log           bsize=4096   blocks=1624
realtime =none                   extsz=65536  blocks=0, rtextents=0
[root@ha2 /root]# mount -t xfs /dev/sdb1 /mnt/raid/
[root@ha2 /root]# umount /mnt/raid/
[root@ha2 /root]# mount -t xfs /dev/sdb1 /mnt/raid/
mount: wrong fs type, bad option, bad superblock on /dev/sdb1,
or too many mounted file systems
From /var/log/messages:
Jul 19 12:27:15 ha2 kernel: Start mounting filesystem: sd(8,17)
Jul 19 12:27:16 ha2 kernel: Ending clean XFS mount for filesystem: sd(8,17)
Jul 19 12:27:19 ha2 kernel: XFS unmount got error 16
Jul 19 12:27:19 ha2 kernel: linvfs_put_super: vfsp/0xc2ff71e0 left dangling!
Jul 19 12:27:19 ha2 kernel: VFS: Busy inodes after unmount. Self-destruct in 5 seconds. Have a nice day...
Jul 19 12:27:21 ha2 kernel: XFS: Filesystem has duplicate UUID - can't mount
This happens on a shared storage cluster with two nodes; the same thing
happens on both nodes. (I'm only using the device from one node at a
time.)
linux-2.4.5 with the XFS patch from 06112001.
After a reboot it works again, and I have not been able to reproduce it
yet. It first happened while I was testing NFS locks, so it could be
related to that.
--
Ragnar Kjorstad
Big Storage