<div dir="ltr">Hi Leslie,<div><br></div><div>You really don't want to be running "green" anything in an array... that is a ticking time bomb just waiting to go off, let me tell you. At my installation, a predecessor had procured a large number of green drives because they were very inexpensive, and regrets were had by all: lousy performance, lots of spurious ejections/RAID gremlins, and an appalling failure rate on the WDC Greens.</div><div><br></div><div>BBWC stands for Battery Backed Write Cache, a feature of hardware RAID cards. It does just what it says on the tin: a small amount (usually half a gig, a gig, or two) of battery-backed cache that preserves writes to the array in case of power failure, etc. If you have BBWC enabled but the battery is dead, bad things can happen. It is not applicable to JBOD software RAID.</div><div><br></div><div>I hold firm to my beliefs on xfs_repair :) As I say, you'll see a variety of opinions here.</div><div><br></div><div>Best,</div><div><br></div><div>Sean</div><div><br></div><div><br></div><div><br></div></div><div class="gmail_extra"><br><div class="gmail_quote">On Tue, Sep 9, 2014 at 9:12 PM, Leslie Rhorer <span dir="ltr"><<a href="mailto:lrhorer@mygrande.net" target="_blank">lrhorer@mygrande.net</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><span class="">On 9/9/2014 5:06 PM, Dave Chinner wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
Firstly, more information is required, namely versions and actual<br>
error messages:<br>
</blockquote>
<br></span>
Indubitably:<br>
<br>
RAID-Server:/# xfs_repair -V<br>
xfs_repair version 3.1.7<br>
RAID-Server:/# uname -r<br>
3.2.0-4-amd64<br>
<br>
4.0 GHz FX-8350 eight core processor<br>
<br>
RAID-Server:/# cat /proc/meminfo /proc/mounts /proc/partitions<br>
MemTotal: 8099916 kB<br>
MemFree: 5786420 kB<br>
Buffers: 112684 kB<br>
Cached: 457020 kB<br>
SwapCached: 0 kB<br>
Active: 521800 kB<br>
Inactive: 457268 kB<br>
Active(anon): 276648 kB<br>
Inactive(anon): 140180 kB<br>
Active(file): 245152 kB<br>
Inactive(file): 317088 kB<br>
Unevictable: 0 kB<br>
Mlocked: 0 kB<br>
SwapTotal: 12623740 kB<br>
SwapFree: 12623740 kB<br>
Dirty: 20 kB<br>
Writeback: 0 kB<br>
AnonPages: 409488 kB<br>
Mapped: 47576 kB<br>
Shmem: 7464 kB<br>
Slab: 197100 kB<br>
SReclaimable: 112644 kB<br>
SUnreclaim: 84456 kB<br>
KernelStack: 2560 kB<br>
PageTables: 8468 kB<br>
NFS_Unstable: 0 kB<br>
Bounce: 0 kB<br>
WritebackTmp: 0 kB<br>
CommitLimit: 16673696 kB<br>
Committed_AS: 1010172 kB<br>
VmallocTotal: 34359738367 kB<br>
VmallocUsed: 339140 kB<br>
VmallocChunk: 34359395308 kB<br>
HardwareCorrupted: 0 kB<br>
AnonHugePages: 0 kB<br>
HugePages_Total: 0<br>
HugePages_Free: 0<br>
HugePages_Rsvd: 0<br>
HugePages_Surp: 0<br>
Hugepagesize: 2048 kB<br>
DirectMap4k: 65532 kB<br>
DirectMap2M: 5120000 kB<br>
DirectMap1G: 3145728 kB<br>
rootfs / rootfs rw 0 0<br>
sysfs /sys sysfs rw,nosuid,nodev,noexec,relatime 0 0<br>
proc /proc proc rw,nosuid,nodev,noexec,relatime 0 0<br>
udev /dev devtmpfs rw,relatime,size=10240k,nr_inodes=1002653,mode=755 0 0<br>
devpts /dev/pts devpts rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000 0 0<br>
tmpfs /run tmpfs rw,nosuid,noexec,relatime,size=809992k,mode=755 0 0<br>
/dev/disk/by-uuid/fa5c404a-bfcb-43de-87ed-e671fda1ba99 / ext4 rw,relatime,errors=remount-ro,user_xattr,barrier=1,data=ordered 0 0<br>
tmpfs /run/lock tmpfs rw,nosuid,nodev,noexec,relatime,size=5120k 0 0<br>
tmpfs /run/shm tmpfs rw,nosuid,nodev,noexec,relatime,size=4144720k 0 0<br>
/dev/md1 /boot ext2 rw,relatime,errors=continue 0 0<br>
rpc_pipefs /var/lib/nfs/rpc_pipefs rpc_pipefs rw,relatime 0 0<br>
Backup:/Backup /Backup nfs rw,relatime,vers=3,rsize=524288,wsize=524288,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=192.168.1.51,mountvers=3,mountport=39597,mountproto=tcp,local_lock=none,addr=192.168.1.51 0 0<br>
Backup:/var/www /var/www/backup nfs rw,relatime,vers=3,rsize=524288,wsize=524288,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=192.168.1.51,mountvers=3,mountport=39597,mountproto=tcp,local_lock=none,addr=192.168.1.51 0 0<br>
/dev/md0 /RAID xfs rw,relatime,attr2,delaylog,sunit=2048,swidth=12288,noquota 0 0<br>
major minor #blocks name<br>
<br>
8 0 125034840 sda<br>
8 1 96256 sda1<br>
8 2 112305152 sda2<br>
8 3 12632064 sda3<br>
8 16 125034840 sdb<br>
8 17 96256 sdb1<br>
8 18 112305152 sdb2<br>
8 19 12632064 sdb3<br>
8 48 3907018584 sdd<br>
8 32 3907018584 sdc<br>
8 64 1465138584 sde<br>
8 80 1465138584 sdf<br>
8 96 1465138584 sdg<br>
8 112 3907018584 sdh<br>
8 128 3907018584 sdi<br>
8 144 3907018584 sdj<br>
8 160 3907018584 sdk<br>
9 1 96192 md1<br>
9 2 112239488 md2<br>
9 3 12623744 md3<br>
9 0 23441319936 md0<br>
9 10 4395021312 md10<br>
<br>
RAID-Server:/# cat /proc/mdstat<br>
Personalities : [raid6] [raid5] [raid4] [raid1] [raid0]<br>
md10 : active raid0 sdf[0] sde[2] sdg[1]<br>
4395021312 blocks super 1.2 512k chunks<br>
<br>
md0 : active raid6 md10[12] sdc[13] sdk[10] sdj[11] sdi[15] sdh[8] sdd[9]<br>
23441319936 blocks super 1.2 level 6, 1024k chunk, algorithm 2 [8/7] [UUU_UUUU]<br>
bitmap: 29/30 pages [116KB], 65536KB chunk<br>
<br>
md3 : active (auto-read-only) raid1 sda3[0] sdb3[1]<br>
12623744 blocks super 1.2 [3/2] [UU_]<br>
bitmap: 1/1 pages [4KB], 65536KB chunk<br>
<br>
md2 : active raid1 sda2[0] sdb2[1]<br>
112239488 blocks super 1.2 [3/2] [UU_]<br>
bitmap: 1/1 pages [4KB], 65536KB chunk<br>
<br>
md1 : active raid1 sda1[0] sdb1[1]<br>
96192 blocks [3/2] [UU_]<br>
bitmap: 1/1 pages [4KB], 65536KB chunk<br>
<br>
unused devices: <none><br>
<br>
Six of the drives are 4T spindles (a mixture of makes and models). The three drives comprising md10 are WD 1.5T green drives; they are in place to take over the function of one of the kicked 4T drives. md1, md2, and md3 are not data arrays and are not suffering any issue.<br>
<br>
I'm not sure what is meant by "write cache status" in this context. The machine has been rebooted more than once during recovery and the FS has been umounted and xfs_repair run several times.<br>
<br>
I don't know for what the acronym BBWC stands.<br>
<br>
RAID-Server:/# xfs_info /dev/md0<br>
meta-data=/dev/md0 isize=256 agcount=43, agsize=137356288 blks<br>
= sectsz=512 attr=2<br>
data = bsize=4096 blocks=5860329984, imaxpct=5<br>
= sunit=256 swidth=1536 blks<br>
naming =version 2 bsize=4096 ascii-ci=0<br>
log =internal bsize=4096 blocks=521728, version=2<br>
= sectsz=512 sunit=8 blks, lazy-count=1<br>
realtime =none extsz=4096 blocks=0, rtextents=0<br>
<br>
The system performs just fine, other than the aforementioned, with loads in excess of 3Gbps. That is internal only. The LAN link is only 1Gbps, so no external request exceeds about 950Mbps.<span class=""><br>
<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<a href="http://xfs.org/index.php/XFS_FAQ#Q:_What_information_should_I_include_when_reporting_a_problem.3F" target="_blank">http://xfs.org/index.php/XFS_FAQ#Q:_What_information_should_I_include_when_reporting_a_problem.3F</a><br>
<br>
dmesg, in particular, should tell us what the corruption being<br>
encountered is when stat fails.<br>
</blockquote>
<br></span>
RAID-Server:/# ls "/RAID/DVD/Big Sleep, The (1945)/VIDEO_TS/VTS_01_1.VOB"<br>
ls: cannot access /RAID/DVD/Big Sleep, The (1945)/VIDEO_TS/VTS_01_1.VOB: Structure needs cleaning<br>
RAID-Server:/# dmesg | tail -n 30<br>
...<br>
[192173.363981] XFS (md0): corrupt dinode 41006, extent total = 1, nblocks = 0.<br>
[192173.363988] ffff8802338b8e00: 49 4e 81 b6 02 02 00 00 00 00 03 e8 00 00 03 e8 IN..............<br>
[192173.363996] XFS (md0): Internal error xfs_iformat(1) at line 319 of file /build/linux-eKuxrT/linux-3.2.60/fs/xfs/xfs_inode.c. Caller 0xffffffffa0509318<br>
[192173.363999]<br>
[192173.364062] Pid: 10813, comm: ls Not tainted 3.2.0-4-amd64 #1 Debian 3.2.60-1+deb7u3<br>
[192173.364065] Call Trace:<br>
[192173.364097] [<ffffffffa04d3731>] ? xfs_corruption_error+0x54/0x6f [xfs]<br>
[192173.364134] [<ffffffffa0509318>] ? xfs_iread+0x9f/0x177 [xfs]<br>
[192173.364170] [<ffffffffa0508efa>] ? xfs_iformat+0xe3/0x462 [xfs]<br>
[192173.364204] [<ffffffffa0509318>] ? xfs_iread+0x9f/0x177 [xfs]<br>
[192173.364240] [<ffffffffa0509318>] ? xfs_iread+0x9f/0x177 [xfs]<br>
[192173.364268] [<ffffffffa04d6ebe>] ? xfs_iget+0x37c/0x56c [xfs]<br>
[192173.364300] [<ffffffffa04e13b4>] ? xfs_lookup+0xa4/0xd3 [xfs]<br>
[192173.364328] [<ffffffffa04d9e5a>] ? xfs_vn_lookup+0x3f/0x7e [xfs]<br>
[192173.364344] [<ffffffff81102de9>] ? d_alloc_and_lookup+0x3a/0x60<br>
[192173.364357] [<ffffffff8110388d>] ? walk_component+0x219/0x406<br>
[192173.364370] [<ffffffff81104721>] ? path_lookupat+0x7c/0x2bd<br>
[192173.364383] [<ffffffff81036628>] ? should_resched+0x5/0x23<br>
[192173.364396] [<ffffffff8134f144>] ? _cond_resched+0x7/0x1c<br>
[192173.364408] [<ffffffff8110497e>] ? do_path_lookup+0x1c/0x87<br>
[192173.364420] [<ffffffff81106407>] ? user_path_at_empty+0x47/0x7b<br>
[192173.364434] [<ffffffff813533d8>] ? do_page_fault+0x30a/0x345<br>
[192173.364448] [<ffffffff810d6a04>] ? mmap_region+0x353/0x44a<br>
[192173.364460] [<ffffffff810fe45a>] ? vfs_fstatat+0x32/0x60<br>
[192173.364471] [<ffffffff810fe590>] ? sys_newstat+0x12/0x2b<br>
[192173.364483] [<ffffffff813509f5>] ? page_fault+0x25/0x30<br>
[192173.364495] [<ffffffff81355452>] ? system_call_fastpath+0x16/0x1b<br>
[192173.364503] XFS (md0): Corruption detected. Unmount and run xfs_repair<br>
<br>
That last line, by the way, is why I ran umount and xfs_repair.<div class="HOEnZb"><div class="h5"><br>
<br>
_______________________________________________<br>
xfs mailing list<br>
<a href="mailto:xfs@oss.sgi.com" target="_blank">xfs@oss.sgi.com</a><br>
<a href="http://oss.sgi.com/mailman/listinfo/xfs" target="_blank">http://oss.sgi.com/mailman/listinfo/xfs</a><br>
</div></div></blockquote></div><br></div>