Mismatch of UUIDs on RAID 5
Robert Tench
robtench at hotmail.com
Thu Nov 13 21:19:27 CST 2014
Hi there,
I am seeking some assistance with an XFS filesystem that will not mount. I have a Lacie NAS 5 Big Network 2 that has failed on me and basically I ended up losing access to my Shares and files.
While I have backed up what I could using UFS Explorer, I can see that many folders are incomplete.
I then set about removing the drives and connecting them to a Linux machine. Four of the drives are connected to SATA ports and the fifth is connected via a SATA-to-USB adaptor.
I have reassembled the RAID array, but when I try to mount it I get an error stating "Structure needs cleaning."
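For reference, the assembly and mount steps I took were roughly as follows (device names and the mount point are from my setup and may differ elsewhere; the exact commands are my best reconstruction):

```shell
# inspect the RAID metadata on the data partitions of the five disks
sudo mdadm --examine /dev/sd[abcde]2

# assemble any arrays described by the on-disk metadata
sudo mdadm --assemble --scan

# confirm the arrays came up (md4 is the big data array)
cat /proc/mdstat

# attempt the mount; this is where "Structure needs cleaning" appears
sudo mkdir -p /mnt/nas
sudo mount -t xfs /dev/md4 /mnt/nas
```

As I understand it, "Structure needs cleaning" is the kernel's EUCLEAN error, which for XFS means the filesystem detected corruption and wants a repair run.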
I ran xfs_repair -n /dev/md4, which reported a huge number of issues, chiefly a UUID mismatch between the superblock and the log.
I am trying to save the report so I can send it to you, but at the moment I am having trouble getting it to save correctly.
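For what it's worth, I believe the full report can be captured to a file by redirecting both output streams through tee (same dry-run command as before; the redirection is just what I am attempting):

```shell
# re-run the no-modify check, saving everything it prints to a log file
# (xfs_repair writes some of its output to stderr, hence the 2>&1)
sudo xfs_repair -n /dev/md4 2>&1 | tee xfs_repair_report.txt
```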
I am a complete Linux novice, but I would be interested to know whether there is any way to resolve this problem.
Regards,
Rob
Kernel Version: Linux ubuntu 3.13.0-32-generic #57-Ubuntu SMP Tue Jul 15 03:51:08 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
xfsprogs version: xfs_repair version 3.1.9
Dual Core
Contents of /proc/meminfo:
MemTotal: 4012768 kB
MemFree: 1745572 kB
Buffers: 206952 kB
Cached: 1240484 kB
SwapCached: 0 kB
Active: 1018332 kB
Inactive: 1064536 kB
Active(anon): 637004 kB
Inactive(anon): 181272 kB
Active(file): 381328 kB
Inactive(file): 883264 kB
Unevictable: 0 kB
Mlocked: 0 kB
SwapTotal: 511864 kB
SwapFree: 511864 kB
Dirty: 384 kB
Writeback: 0 kB
AnonPages: 635468 kB
Mapped: 126116 kB
Shmem: 182856 kB
Slab: 109136 kB
SReclaimable: 83716 kB
SUnreclaim: 25420 kB
KernelStack: 3176 kB
PageTables: 25476 kB
NFS_Unstable: 0 kB
Bounce: 0 kB
WritebackTmp: 0 kB
CommitLimit: 2518248 kB
Committed_AS: 3101376 kB
VmallocTotal: 34359738367 kB
VmallocUsed: 346400 kB
VmallocChunk: 34359384300 kB
HardwareCorrupted: 0 kB
AnonHugePages: 172032 kB
HugePages_Total: 0
HugePages_Free: 0
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 2048 kB
DirectMap4k: 44544 kB
DirectMap2M: 4114432 kB
Contents of /proc/mounts:
rootfs / rootfs rw,size=1994836k,nr_inodes=498709 0 0
sysfs /sys sysfs rw,nosuid,nodev,noexec,relatime 0 0
proc /proc proc rw,nosuid,nodev,noexec,relatime 0 0
udev /dev devtmpfs rw,relatime,size=1994852k,nr_inodes=498713,mode=755 0 0
devpts /dev/pts devpts rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000 0 0
tmpfs /run tmpfs rw,nosuid,noexec,relatime,size=401280k,mode=755 0 0
/dev/sdf1 /cdrom vfat rw,relatime,fmask=0022,dmask=0022,codepage=437,iocharset=iso8859-1,shortname=mixed,errors=remount-ro 0 0
/dev/loop0 /rofs squashfs ro,noatime 0 0
/dev/loop1 /cow ext2 rw,noatime 0 0
/cow / overlayfs rw,relatime,lowerdir=//filesystem.squashfs,upperdir=/cow 0 0
none /sys/fs/cgroup tmpfs rw,relatime,size=4k,mode=755 0 0
none /sys/fs/fuse/connections fusectl rw,relatime 0 0
none /sys/kernel/debug debugfs rw,relatime 0 0
none /sys/kernel/security securityfs rw,relatime 0 0
tmpfs /tmp tmpfs rw,nosuid,nodev,relatime 0 0
none /run/lock tmpfs rw,nosuid,nodev,noexec,relatime,size=5120k 0 0
none /run/shm tmpfs rw,nosuid,nodev,relatime 0 0
none /run/user tmpfs rw,nosuid,nodev,noexec,relatime,size=102400k,mode=755 0 0
none /sys/fs/pstore pstore rw,relatime 0 0
systemd /sys/fs/cgroup/systemd cgroup rw,nosuid,nodev,noexec,relatime,name=systemd 0 0
gvfsd-fuse /run/user/999/gvfs fuse.gvfsd-fuse rw,nosuid,nodev,relatime,user_id=999,group_id=999 0 0
Contents of /proc/partitions:
major minor #blocks name
7 0 961232 loop0
7 1 4147200 loop1
8 16 2930266584 sdb
8 17 1024 sdb1
8 18 2928240776 sdb2
8 19 934 sdb3
8 20 1024 sdb4
8 21 256000 sdb5
8 22 8033 sdb6
8 23 16065 sdb7
8 24 843413 sdb8
8 25 875543 sdb9
8 26 8033 sdb10
8 48 2930266584 sdd
8 49 1024 sdd1
8 50 2928240776 sdd2
8 51 934 sdd3
8 52 1024 sdd4
8 53 256000 sdd5
8 54 8033 sdd6
8 55 16065 sdd7
8 56 843413 sdd8
8 57 875543 sdd9
8 58 8033 sdd10
8 0 2930266584 sda
8 1 1024 sda1
8 2 2928240776 sda2
8 3 934 sda3
8 4 1024 sda4
8 5 256000 sda5
8 6 8033 sda6
8 7 16065 sda7
8 8 843413 sda8
8 9 875543 sda9
8 10 8033 sda10
8 32 2930266584 sdc
8 33 1024 sdc1
8 34 2928240776 sdc2
8 35 934 sdc3
8 36 1024 sdc4
8 37 256000 sdc5
8 38 8033 sdc6
8 39 16065 sdc7
8 40 843413 sdc8
8 41 875543 sdc9
8 42 8033 sdc10
9 3 255936 md3
9 4 11712962560 md4
9 1 843328 md1
9 2 875456 md2
9 0 16000 md0
8 64 2930266584 sde
8 65 1024 sde1
8 66 2928240776 sde2
8 67 934 sde3
8 68 1024 sde4
8 69 256000 sde5
8 70 8033 sde6
8 71 16065 sde7
8 72 843413 sde8
8 73 875543 sde9
8 74 8033 sde10
8 80 30249984 sdf
8 81 30247936 sdf1
RAID configuration: hardware RAID from a LaCie 5big Network 2 (RAID 5)
Types of disks: Seagate Barracuda 3 TB x 5