Hi there,

I am seeking some assistance with an XFS filesystem that will not mount. I have a LaCie 5big Network 2 NAS that has failed on me, and I have basically lost access to my shares and files.

While I have backed up what I could using UFS Explorer, I can see that many folders are incomplete.

I have since removed the drives and connected them to a Linux setup. Four of the drives are connected to SATA ports and the fifth is connected via a SATA-to-USB adaptor.

I have reassembled the RAID array, but when I try to mount it I get an error stating "Structure needs cleaning".

I then ran xfs_repair -n /dev/md4, which reported a huge number of issues, most notably a mismatch of UUID between the superblock and the log.

I am trying to save that report and send it to you, but at the moment I am having trouble getting it to save correctly.
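For reference, here is roughly the sequence of commands I used, reconstructed from memory; the exact mdadm invocation may not be word-for-word, and the mount point and report filename below are just placeholders:

    # Reassemble the NAS's md arrays from their member partitions
    sudo mdadm --assemble --scan

    # Try to mount the data array (md4, the large ~12 TB device)
    sudo mount /dev/md4 /mnt
    # ...fails with: mount: Structure needs cleaning

    # Read-only (no-modify) check, this time capturing the report
    # to a file so I can attach it
    sudo xfs_repair -n /dev/md4 2>&1 | tee ~/xfs_repair_report.txt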
I am a complete Linux novice, but I would be interested to know if there is any way to resolve this problem.

Regards,
Rob

Kernel version: Linux ubuntu 3.13.0-32-generic #57-Ubuntu SMP Tue Jul 15 03:51:08 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux

xfsprogs version: xfs_repair version 3.1.9

Number of CPUs: dual core

Contents of /proc/meminfo:

MemTotal:         4012768 kB
MemFree:          1745572 kB
Buffers:           206952 kB
Cached:           1240484 kB
SwapCached:             0 kB
Active:           1018332 kB
Inactive:         1064536 kB
Active(anon):      637004 kB
Inactive(anon):    181272 kB
Active(file):      381328 kB
Inactive(file):    883264 kB
Unevictable:            0 kB
Mlocked:                0 kB
SwapTotal:         511864 kB
SwapFree:          511864 kB
Dirty:                384 kB
Writeback:              0 kB
AnonPages:         635468 kB
Mapped:            126116 kB
Shmem:             182856 kB
Slab:              109136 kB
SReclaimable:       83716 kB
SUnreclaim:         25420 kB
KernelStack:         3176 kB
PageTables:         25476 kB
NFS_Unstable:           0 kB
Bounce:                 0 kB
WritebackTmp:           0 kB
CommitLimit:      2518248 kB
Committed_AS:     3101376 kB
VmallocTotal:   34359738367 kB
VmallocUsed:       346400 kB
VmallocChunk:   34359384300 kB
HardwareCorrupted:      0 kB
AnonHugePages:     172032 kB
HugePages_Total:        0
HugePages_Free:         0
HugePages_Rsvd:         0
HugePages_Surp:         0
Hugepagesize:        2048 kB
DirectMap4k:        44544 kB
DirectMap2M:      4114432 kB

Contents of /proc/mounts:

rootfs / rootfs rw,size=1994836k,nr_inodes=498709 0 0
sysfs /sys sysfs rw,nosuid,nodev,noexec,relatime 0 0
proc /proc proc rw,nosuid,nodev,noexec,relatime 0 0
udev /dev devtmpfs rw,relatime,size=1994852k,nr_inodes=498713,mode=755 0 0
devpts /dev/pts devpts rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000 0 0
tmpfs /run tmpfs rw,nosuid,noexec,relatime,size=401280k,mode=755 0 0
/dev/sdf1 /cdrom vfat rw,relatime,fmask=0022,dmask=0022,codepage=437,iocharset=iso8859-1,shortname=mixed,errors=remount-ro 0 0
/dev/loop0 /rofs squashfs ro,noatime 0 0
/dev/loop1 /cow ext2 rw,noatime 0 0
/cow / overlayfs rw,relatime,lowerdir=//filesystem.squashfs,upperdir=/cow 0 0
none /sys/fs/cgroup tmpfs rw,relatime,size=4k,mode=755 0 0
none /sys/fs/fuse/connections fusectl rw,relatime 0 0
none /sys/kernel/debug debugfs rw,relatime 0 0
none /sys/kernel/security securityfs rw,relatime 0 0
tmpfs /tmp tmpfs rw,nosuid,nodev,relatime 0 0
none /run/lock tmpfs rw,nosuid,nodev,noexec,relatime,size=5120k 0 0
none /run/shm tmpfs rw,nosuid,nodev,relatime 0 0
none /run/user tmpfs rw,nosuid,nodev,noexec,relatime,size=102400k,mode=755 0 0
none /sys/fs/pstore pstore rw,relatime 0 0
systemd /sys/fs/cgroup/systemd cgroup rw,nosuid,nodev,noexec,relatime,name=systemd 0 0
gvfsd-fuse /run/user/999/gvfs fuse.gvfsd-fuse rw,nosuid,nodev,relatime,user_id=999,group_id=999 0 0

Contents of /proc/partitions:

major minor     #blocks  name

   7      0      961232  loop0
   7      1     4147200  loop1
   8     16  2930266584  sdb
   8     17        1024  sdb1
   8     18  2928240776  sdb2
   8     19         934  sdb3
   8     20        1024  sdb4
   8     21      256000  sdb5
   8     22        8033  sdb6
   8     23       16065  sdb7
   8     24      843413  sdb8
   8     25      875543  sdb9
   8     26        8033  sdb10
   8     48  2930266584  sdd
   8     49        1024  sdd1
   8     50  2928240776  sdd2
   8     51         934  sdd3
   8     52        1024  sdd4
   8     53      256000  sdd5
   8     54        8033  sdd6
   8     55       16065  sdd7
   8     56      843413  sdd8
   8     57      875543  sdd9
   8     58        8033  sdd10
   8      0  2930266584  sda
   8      1        1024  sda1
   8      2  2928240776  sda2
   8      3         934  sda3
   8      4        1024  sda4
   8      5      256000  sda5
   8      6        8033  sda6
   8      7       16065  sda7
   8      8      843413  sda8
   8      9      875543  sda9
   8     10        8033  sda10
   8     32  2930266584  sdc
   8     33        1024  sdc1
   8     34  2928240776  sdc2
   8     35         934  sdc3
   8     36        1024  sdc4
   8     37      256000  sdc5
   8     38        8033  sdc6
   8     39       16065  sdc7
   8     40      843413  sdc8
   8     41      875543  sdc9
   8     42        8033  sdc10
   9      3      255936  md3
   9      4 11712962560  md4
   9      1      843328  md1
   9      2      875456  md2
   9      0       16000  md0
   8     64  2930266584  sde
   8     65        1024  sde1
   8     66  2928240776  sde2
   8     67         934  sde3
   8     68        1024  sde4
   8     69      256000  sde5
   8     70        8033  sde6
   8     71       16065  sde7
   8     72      843413  sde8
   8     73      875543  sde9
   8     74        8033  sde10
   8     80    30249984  sdf
   8     81    30247936  sdf1

RAID configuration: hardware RAID from a LaCie 5big Network 2 NAS (RAID 5)

Type of disks: Seagate Barracuda 3 TB x 5
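If it would help, I can also send the details of the assembled array, e.g. the output of:

    sudo mdadm --detail /dev/md4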