Thanks for replying. The project part is a red herring and I have abandoned it. The only reason project quotas even came up was the winbind/quota issue. UID is fine.

The more interesting part is that /proc/self/mounts and mtab/fstab are not coherent.

The two filesystems have identical (cut-and-paste) settings in fstab. The results below are from after setting forcefsck and rebooting.

mount
/dev/mapper/irphome_vg-home_lv on /home type xfs (rw,delaylog,inode64,nobarrier,logbsize=256k,uquota,prjquota)
/dev/mapper/irphome_vg-imap_lv on /mail type xfs (rw,delaylog,inode64,nobarrier,logbsize=256k,uquota,prjquota)

cat /proc/self/mounts
/dev/mapper/irphome_vg-home_lv /home xfs rw,relatime,attr2,delaylog,nobarrier,inode64,logbsize=256k,sunit=32,swidth=32768,noquota 0 0
/dev/mapper/irphome_vg-imap_lv /mail xfs rw,relatime,attr2,delaylog,nobarrier,inode64,logbsize=256k,sunit=32,swidth=32768,usrquota,prjquota 0 0

cat /etc/mtab
/dev/mapper/irphome_vg-home_lv /home xfs rw,delaylog,inode64,nobarrier,logbsize=256k,uquota,prjquota 0 0
/dev/mapper/irphome_vg-imap_lv /mail xfs rw,delaylog,inode64,nobarrier,logbsize=256k,uquota,prjquota 0 0

Note that /home comes up with noquota in /proc/self/mounts even though fstab and mtab both say uquota,prjquota, while /mail keeps usrquota,prjquota.
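A quick way to see the mismatch side by side (just a sketch; the grep pattern is only illustrative):

  # what mount/mtab recorded vs. what the kernel actually applied
  grep irphome_vg /etc/mtab
  grep irphome_vg /proc/self/mounts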
List of details per the wiki:
#############
The most interesting thing in dmesg output was this:
XFS (dm-7): Failed to initialize disk quotas.

From /dev/disk/by-id, dm-7 is my problem logical volume: dm-name-irphome_vg-home_lv -> ../../dm-7
#############

Kernel: 2.6.32-504.3.3.el6.x86_64

xfs_repair version 3.1.1

24 CPUs with hyperthreading, so 12 physical cores

Memory (/proc/meminfo):
MemTotal: 49410148 kB
MemFree: 269628 kB
Buffers: 144256 kB
Cached: 47388884 kB
SwapCached: 0 kB
Active: 731016 kB
Inactive: 46871512 kB
Active(anon): 2976 kB
Inactive(anon): 71740 kB
Active(file): 728040 kB
Inactive(file): 46799772 kB
Unevictable: 5092 kB
Mlocked: 5092 kB
SwapTotal: 14331900 kB
SwapFree: 14331900 kB
Dirty: 3773708 kB
Writeback: 0 kB
AnonPages: 75696 kB
Mapped: 190092 kB
Shmem: 312 kB
Slab: 1012580 kB
SReclaimable: 875160 kB
SUnreclaim: 137420 kB
KernelStack: 5512 kB
PageTables: 9332 kB
NFS_Unstable: 0 kB
Bounce: 0 kB
WritebackTmp: 0 kB
CommitLimit: 39036972 kB
Committed_AS: 293324 kB
VmallocTotal: 34359738367 kB
VmallocUsed: 191424 kB
VmallocChunk: 34334431824 kB
HardwareCorrupted: 0 kB
AnonHugePages: 2048 kB
HugePages_Total: 0
HugePages_Free: 0
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 2048 kB
DirectMap4k: 6384 kB
DirectMap2M: 2080768 kB
DirectMap1G: 48234496 kB

/proc/mounts
rootfs / rootfs rw 0 0
style="margin: 0px;" class="">proc /proc proc rw,relatime 0 0</div><div style="margin: 0px;" class="">sysfs /sys sysfs rw,relatime 0 0</div><div style="margin: 0px;" class="">devtmpfs /dev devtmpfs rw,relatime,size=24689396k,nr_inodes=6172349,mode=755 0 0</div><div style="margin: 0px;" class="">devpts /dev/pts devpts rw,relatime,gid=5,mode=620,ptmxmode=000 0 0</div><div style="margin: 0px;" class="">tmpfs /dev/shm tmpfs rw,relatime 0 0</div><div style="margin: 0px;" class="">/dev/mapper/VolGroup-lv_root / ext4 rw,relatime,barrier=1,data=ordered 0 0</div><div style="margin: 0px;" class="">/proc/bus/usb /proc/bus/usb usbfs rw,relatime 0 0</div><div style="margin: 0px;" class="">/dev/sda1 /boot ext4 rw,relatime,barrier=1,data=ordered 0 0</div><div style="margin: 0px;" class="">/dev/mapper/irphome_vg-home_lv /home xfs rw,relatime,attr2,delaylog,nobarrier,inode64,logbsize=256k,sunit=32,swidth=32768,noquota 0 0</div><div style="margin: 0px;" class="">/dev/mapper/irphome_vg-imap_lv /mail xfs rw,relatime,attr2,delaylog,nobarrier,inode64,logbsize=256k,sunit=32,swidth=32768,usrquota,prjquota 0 0</div><div style="margin: 0px;" class="">none /proc/sys/fs/binfmt_misc binfmt_misc rw,relatime 0 0</div><div style="margin: 0px;" class="">/dev/mapper/homesavelv-homesavelv /homesave xfs rw,relatime,attr2,delaylog,sunit=32,swidth=32768,noquota 0 0</div><div style="margin: 0px; min-height: 14px;" class=""><br class=""></div><div style="margin: 0px;" class=""> /proc/partitions</div><div style="margin: 0px;" class="">major minor #blocks name</div><div style="margin: 0px; min-height: 14px;" class=""><br class=""></div><div style="margin: 0px;" class=""> 8 0 143338560 sda</div><div style="margin: 0px;" class=""> 8 1 512000 sda1</div><div style="margin: 0px;" class=""> 8 2 142825472 sda2</div><div style="margin: 0px;" class=""> 8 32 17179869184 sdc</div><div style="margin: 0px;" class=""> 8 96 17179869184 sdg</div><div style="margin: 0px;" class=""> 8 128 17179869184 sdi</div><div style="margin: 0px;" class=""> 8 48 17179869184 sdd</div><div style="margin: 0px;" class=""> 8 112 17179869184 sdh</div><div style="margin: 0px;" class=""> 8 64 17179869184 sde</div><div style="margin: 0px;" class=""> 253 0 52428800 dm-0</div><div style="margin: 0px;" class=""> 253 1 14331904 dm-1</div><div style="margin: 0px;" class=""> 8 160 17179869184 sdk</div><div style="margin: 0px;" class=""> 8 176 17179869184 sdl</div><div style="margin: 0px;" class=""> 8 192 17179869184 sdm</div><div style="margin: 0px;" class=""> 8 224 17179869184 sdo</div><div style="margin: 0px;" class=""> 8 240 17179869184 sdp</div><div style="margin: 0px;" class=""> 65 0 17179869184 sdq</div><div style="margin: 0px;" class=""> 253 3 17179869184 dm-3</div><div style="margin: 0px;" class=""> 253 4 17179869184 dm-4</div><div style="margin: 0px;" class=""> 253 5 17179869184 dm-5</div><div style="margin: 0px;" class=""> 253 6 5368709120 dm-6</div><div style="margin: 0px;" class=""> 253 7 42949672960 dm-7</div><div style="margin: 0px;" class=""> 8 16 2147483648 sdb</div><div style="margin: 0px;" class=""> 8 80 2147483648 sdf</div><div style="margin: 0px;" class=""> 253 2 2147483648 dm-2</div><div style="margin: 0px;" class=""> 8 144 2147483648 sdj</div><div style="margin: 0px;" class=""> 8 208 2147483648 sdn</div><div style="margin: 0px;" class=""> 253 8 2147467264 dm-8</div><p style="margin: 0px; min-height: 14px;" class=""> <br class="webkit-block-placeholder"></p><div style="margin: 0px;" class=""> Raid layout</div><div style="margin: 0px;" class=""> 3Par SAN 
3Par SAN, raid 6, 12 x 4 TB SAS disks (more or less; 3Par does some non-classic raid stuff).

mpathd (360002ac000000000000000080000bf12) dm-4 3PARdata,VV
size=16T features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
  |- 1:0:3:12 sdi 8:128 active ready running
  |- 2:0:0:12 sdm 8:192 active ready running
  |- 1:0:2:12 sde 8:64  active ready running
  `- 2:0:5:12 sdq 65:0  active ready running
mpathc (360002ac000000000000000070000bf12) dm-5 3PARdata,VV
size=16T features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
  |- 1:0:2:11 sdd 8:48  active ready running
  |- 2:0:0:11 sdl 8:176 active ready running
  |- 1:0:3:11 sdh 8:112 active ready running
  `- 2:0:5:11 sdp 8:240 active ready running
mpathb (360002ac000000000000000060000bf12) dm-3 3PARdata,VV
size=16T features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
  |- 1:0:2:10 sdc 8:32  active ready running
  |- 2:0:0:10 sdk 8:160 active ready running
  |- 1:0:3:10 sdg 8:96  active ready running
  `- 2:0:5:10 sdo 8:224 active ready running
mpathg (360002ac000000000000000110000bf12) dm-2 3PARdata,VV
size=2.0T features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
  |- 1:0:2:1 sdb 8:16  active ready running
  |- 2:0:5:1 sdj 8:144 active ready running
  |- 1:0:3:1 sdf 8:80  active ready running
  `- 2:0:0:1 sdn 8:208 active ready running

pvscan
  PV /dev/mapper/mpathd   VG irphome_vg   lvm2 [16.00 TiB / 3.00 TiB free]
  PV /dev/mapper/mpathb   VG irphome_vg   lvm2 [16.00 TiB / 0 free]
  PV /dev/mapper/mpathc   VG irphome_vg   lvm2 [16.00 TiB / 0 free]
  PV /dev/mapper/mpathg   VG homesavelv   lvm2 [2.00 TiB / 0 free]
  PV /dev/sda2            VG VolGroup     lvm2 [136.21 GiB / 72.54 GiB free]
  Total: 5 [50.13 TiB] / in use: 5 [50.13 TiB] / in no VG: 0 [0 ]

vgscan
  Reading all physical volumes. This may take a while...
  Found volume group "irphome_vg" using metadata type lvm2
  Found volume group "homesavelv" using metadata type lvm2
  Found volume group "VolGroup" using metadata type lvm2

lvscan
  ACTIVE   '/dev/irphome_vg/imap_lv' [5.00 TiB] inherit
  ACTIVE   '/dev/irphome_vg/home_lv' [40.00 TiB] inherit
  ACTIVE   '/dev/homesavelv/homesavelv' [2.00 TiB] inherit
  ACTIVE   '/dev/VolGroup/lv_root' [50.00 GiB] inherit
  ACTIVE   '/dev/VolGroup/lv_swap' [13.67 GiB] inherit

lvdisplay irphome_vg/home_lv
  --- Logical volume ---
  LV Path                /dev/irphome_vg/home_lv
  LV Name                home_lv
  VG Name                irphome_vg
  LV UUID                8wLM12-e43p-UhIh-YTXn-kMBx-RffN-yNz2V5
  LV Write Access        read/write
  LV Creation host, time nuhome.irp.nia.nih.gov, 2014-12-01 17:53:47 -0500
  LV Status              available
  # open                 1
  LV Size                40.00 TiB
  Current LE             10485760
  Segments               3
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:7

Disks, write cache, etc. are controlled by the 3Par SAN; I just define up to 16 TB volumes and export them to the host over FC or iSCSI. In this case I am using FC.

xfs_info /dev/irphome_vg/home_lv
meta-data=/dev/mapper/irphome_vg-home_lv isize=256    agcount=40, agsize=268435452 blks
         =                       sectsz=512   attr=2, projid32bit=1
data     =                       bsize=4096   blocks=10737418080, imaxpct=5
         =                       sunit=4      swidth=4096 blks
naming   =version 2              bsize=4096   ascii-ci=0
log      =internal               bsize=4096   blocks=521728, version=2
         =                       sectsz=512   sunit=4 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0

xfs_info /dev/irphome_vg/imap_lv
meta-data=/dev/mapper/irphome_vg-imap_lv isize=256    agcount=32, agsize=41943036 blks
         =                       sectsz=512   attr=2, projid32bit=1
data     =                       bsize=4096   blocks=1342177152, imaxpct=5
         =                       sunit=4      swidth=4096 blks
naming   =version 2              bsize=4096   ascii-ci=0
log      =internal               bsize=4096   blocks=521728, version=2
         =                       sectsz=512   sunit=4 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
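I have not captured the output here, but xfs_quota's state command is another way to check what the kernel actually enabled on each filesystem, along these lines:

  # reports whether user/group/project quota accounting and enforcement are on
  xfs_quota -x -c state /home
  xfs_quota -x -c state /mail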
class=""><br class=""></div><div style="margin: 0px; min-height: 14px;" class="">xfs_info /dev/irphome_vg/imap_lv </div><div style="margin: 0px; min-height: 14px;" class="">meta-data=/dev/mapper/irphome_vg-imap_lv isize=256 agcount=32, agsize=41943036 blks</div><div style="margin: 0px; min-height: 14px;" class=""> = sectsz=512 attr=2, projid32bit=1</div><div style="margin: 0px; min-height: 14px;" class="">data = bsize=4096 blocks=1342177152, imaxpct=5</div><div style="margin: 0px; min-height: 14px;" class=""> = sunit=4 swidth=4096 blks</div><div style="margin: 0px; min-height: 14px;" class="">naming =version 2 bsize=4096 ascii-ci=0</div><div style="margin: 0px; min-height: 14px;" class="">log =internal bsize=4096 blocks=521728, version=2</div><div style="margin: 0px; min-height: 14px;" class=""> = sectsz=512 sunit=4 blks, lazy-count=1</div><div style="margin: 0px; min-height: 14px;" class="">realtime =none extsz=4096 blocks=0, rtextents=0</div><div class=""><br class=""></div><div class="">dmesg output</div><div class=""><div class="">SGI XFS with ACLs, security attributes, large block/inode numbers, no debug enabled</div><div class="">SGI XFS Quota Management subsystem</div><div class="">XFS (dm-7): delaylog is the default now, option is deprecated.</div><div class="">XFS (dm-7): Mounting Filesystem</div><div class="">XFS (dm-7): Ending clean mount</div><div class="">XFS (dm-7): Failed to initialize disk quotas.</div><div class="">XFS (dm-6): delaylog is the default now, option is deprecated.</div><div class="">XFS (dm-6): Mounting Filesystem</div><div class="">XFS (dm-6): Ending clean mount</div><div class=""><br class=""></div><div class=""><br class=""></div><div class="">scsi 2:0:0:0: Direct-Access 3PARdata VV 3210 PQ: 0 ANSI: 6</div><div class="">scsi 2:0:0:10: Direct-Access 3PARdata VV 3210 PQ: 0 ANSI: 6</div><div class="">sd 2:0:0:0: [sdj] 4194304 512-byte logical blocks: (2.14 GB/2.00 GiB)</div><div class="">scsi 2:0:0:11: Direct-Access 3PARdata VV 3210 PQ: 0 ANSI: 6</div><div class="">sd 2:0:0:10: [sdk] 34359738368 512-byte logical blocks: (17.5 TB/16.0 TiB)</div><div class="">sd 2:0:0:0: [sdj] Write Protect is off</div><div class="">sd 2:0:0:0: [sdj] Mode Sense: 8b 00 10 08</div><div class="">scsi 2:0:0:12: Direct-Access 3PARdata VV 3210 PQ: 0 ANSI: 6</div><div class="">sd 2:0:0:11: [sdl] 34359738368 512-byte logical blocks: (17.5 TB/16.0 TiB)</div><div class="">sd 2:0:0:0: [sdj] Write cache: disabled, read cache: enabled, supports DPO and FUA</div><div class="">scsi 2:0:0:254: Enclosure 3PARdata SES 3210 PQ: 0 ANSI: 6</div><div class="">sd 2:0:0:12: [sdm] 34359738368 512-byte logical blocks: (17.5 TB/16.0 TiB)</div><div class="">scsi 2:0:1:0: RAID HP HSV400 0005 PQ: 0 ANSI: 5</div><div class="">scsi 2:0:2:0: RAID HP HSV400 0005 PQ: 0 ANSI: 5</div><div class=""> sdj:</div><div class="">sd 2:0:0:10: [sdk] Write Protect is off</div><div class="">sd 2:0:0:10: [sdk] Mode Sense: 8b 00 10 08</div><div class="">sd 2:0:0:11: [sdl] Write Protect is off</div><div class="">sd 2:0:0:11: [sdl] Mode Sense: 8b 00 10 08</div><div class="">scsi 2:0:3:0: RAID HP HSV400 0005 PQ: 0 ANSI: 5</div><div class=""> unknown partition table</div><div class="">sd 2:0:0:10: [sdk] Write cache: disabled, read cache: enabled, supports DPO and FUA</div><div class="">sd 2:0:0:11: [sdl] Write cache: disabled, read cache: enabled, supports DPO and FUA</div><div class="">scsi 2:0:4:0: RAID HP HSV400 0005 PQ: 0 ANSI: 5</div><div class="">sd 2:0:0:12: [sdm] Write Protect is off</div><div class="">sd 2:0:0:12: [sdm] 
sd 2:0:0:12: [sdm] Mode Sense: 8b 00 10 08
sd 2:0:0:12: [sdm] Write cache: disabled, read cache: enabled, supports DPO and FUA
scsi 2:0:5:0: Direct-Access 3PARdata VV 3210 PQ: 0 ANSI: 6
sd 2:0:5:0: [sdn] 4194304 512-byte logical blocks: (2.14 GB/2.00 GiB)
scsi 2:0:5:10: Direct-Access 3PARdata VV 3210 PQ: 0 ANSI: 6
sd 2:0:0:0: [sdj] Attached SCSI disk
scsi 2:0:5:11: Direct-Access 3PARdata VV 3210 PQ: 0 ANSI: 6
sd 2:0:5:0: [sdn] Write Protect is off
sd 2:0:5:0: [sdn] Mode Sense: 8b 00 10 08
SGI XFS with ACLs, security attributes, large block/inode numbers, no debug enabled
SGI XFS Quota Management subsystem
XFS (dm-7): delaylog is the default now, option is deprecated.
XFS (dm-7): Mounting Filesystem
XFS (dm-7): Ending clean mount
XFS (dm-7): Failed to initialize disk quotas.
XFS (dm-6): delaylog is the default now, option is deprecated.
XFS (dm-6): Mounting Filesystem
XFS (dm-6): Ending clean mount
Adding 14331900k swap on /dev/mapper/VolGroup-lv_swap. Priority:-1 extents:1 across:14331900k
device-mapper: table: 253:9: multipath: error getting device
device-mapper: ioctl: error adding target to table
pcc-cpufreq: (v1.00.00) driver loaded with frequency limits: 1600 MHz, 2400 MHz
sd 1:0:2:10: Warning! Received an indication that the LUN assignments on this target have changed. The Linux SCSI layer does not automatically remap LUN assignments.
sd 2:0:0:10: Warning! Received an indication that the LUN assignments on this target have changed. The Linux SCSI layer does not automatically remap LUN assignments.
sd 1:0:3:10: Warning! Received an indication that the LUN assignments on this target have changed. The Linux SCSI layer does not automatically remap LUN assignments.
sd 2:0:5:10: Warning! Received an indication that the LUN assignments on this target have changed. The Linux SCSI layer does not automatically remap LUN assignments.
device-mapper: multipath: Failing path 8:80.
device-mapper: multipath: Failing path 8:208.
device-mapper: multipath: Failing path 8:144.
end_request: I/O error, dev dm-2, sector 4194176
Buffer I/O error on device dm-2, logical block 524272
end_request: I/O error, dev dm-2, sector 4194176
Buffer I/O error on device dm-2, logical block 524272
end_request: I/O error, dev dm-2, sector 4194288
Buffer I/O error on device dm-2, logical block 524286
end_request: I/O error, dev dm-2, sector 4194288
Buffer I/O error on device dm-2, logical block 524286
end_request: I/O error, dev dm-2, sector 0
Buffer I/O error on device dm-2, logical block 0
end_request: I/O error, dev dm-2, sector 0
Buffer I/O error on device dm-2, logical block 0
end_request: I/O error, dev dm-2, sector 8
Buffer I/O error on device dm-2, logical block 1
end_request: I/O error, dev dm-2, sector 4194296
Buffer I/O error on device dm-2, logical block 524287
end_request: I/O error, dev dm-2, sector 4194296
Buffer I/O error on device dm-2, logical block 524287
end_request: I/O error, dev dm-2, sector 4194296
device-mapper: table: 253:2: multipath: error getting device
device-mapper: ioctl: error adding target to table
device-mapper: table: 253:2: multipath: error getting device
device-mapper: ioctl: error adding target to table
sd 1:0:3:10: Warning! Received an indication that the LUN assignments on this target have changed. The Linux SCSI layer does not automatically remap LUN assignments.
sd 1:0:2:10: Warning! Received an indication that the LUN assignments on this target have changed. The Linux SCSI layer does not automatically remap LUN assignments.
sd 2:0:5:10: Warning! Received an indication that the LUN assignments on this target have changed. The Linux SCSI layer does not automatically remap LUN assignments.
sd 2:0:0:10: Warning! Received an indication that the LUN assignments on this target have changed. The Linux SCSI layer does not automatically remap LUN assignments.
scsi 1:0:2:1: Direct-Access 3PARdata VV 3210 PQ: 0 ANSI: 6
sd 1:0:2:1: Attached scsi generic sg4 type 0
sd 1:0:2:1: [sdb] 4294967296 512-byte logical blocks: (2.19 TB/2.00 TiB)
scsi 1:0:3:1: Direct-Access 3PARdata VV 3210 PQ: 0 ANSI: 6
sd 1:0:3:1: Attached scsi generic sg9 type 0
sd 1:0:3:1: [sdf] 4294967296 512-byte logical blocks: (2.19 TB/2.00 TiB)
sd 1:0:2:1: [sdb] Write Protect is off
sd 1:0:2:1: [sdb] Mode Sense: 8b 00 10 08
sd 1:0:2:1: [sdb] Write cache: disabled, read cache: enabled, supports DPO and FUA
sd 1:0:3:1: [sdf] Write Protect is off
sd 1:0:3:1: [sdf] Mode Sense: 8b 00 10 08
sd 1:0:3:1: [sdf] Write cache: disabled, read cache: enabled, supports DPO and FUA
 sdb: unknown partition table
 sdf: unknown partition table
sd 1:0:2:1: [sdb] Attached SCSI disk
sd 1:0:3:1: [sdf] Attached SCSI disk
scsi 2:0:5:1: Direct-Access 3PARdata VV 3210 PQ: 0 ANSI: 6
sd 2:0:5:1: Attached scsi generic sg16 type 0
sd 2:0:5:1: [sdj] 4294967296 512-byte logical blocks: (2.19 TB/2.00 TiB)
scsi 2:0:0:1: Direct-Access 3PARdata VV 3210 PQ: 0 ANSI: 6
sd 2:0:0:1: Attached scsi generic sg25 type 0
sd 2:0:0:1: [sdn] 4294967296 512-byte logical blocks: (2.19 TB/2.00 TiB)
sd 2:0:5:1: [sdj] Write Protect is off
sd 2:0:5:1: [sdj] Mode Sense: 8b 00 10 08
sd 2:0:5:1: [sdj] Write cache: disabled, read cache: enabled, supports DPO and FUA
sd 2:0:0:1: [sdn] Write Protect is off
sd 2:0:0:1: [sdn] Mode Sense: 8b 00 10 08
sd 2:0:0:1: [sdn] Write cache: disabled, read cache: enabled, supports DPO and FUA
 sdj: unknown partition table
 sdn: unknown partition table
sd 2:0:5:1: [sdj] Attached SCSI disk
sd 2:0:0:1: [sdn] Attached SCSI disk
XFS (dm-8): Mounting Filesystem
XFS (dm-8): Ending clean mount
sd 1:0:2:10: Warning! Received an indication that the LUN assignments on this target have changed. The Linux SCSI layer does not automatically remap LUN assignments.
sd 1:0:3:10: Warning! Received an indication that the LUN assignments on this target have changed. The Linux SCSI layer does not automatically remap LUN assignments.
sd 2:0:5:10: Warning! Received an indication that the LUN assignments on this target have changed. The Linux SCSI layer does not automatically remap LUN assignments.
sd 2:0:0:10: Warning! Received an indication that the LUN assignments on this target have changed. The Linux SCSI layer does not automatically remap LUN assignments.
 rport-2:0-17: blocked FC remote port time out: removing rport
 rport-2:0-2: blocked FC remote port time out: removing rport


On Dec 22, 2014, at 3:48 PM, Dave Chinner <david@fromorbit.com> wrote:

> On Fri, Dec 19, 2014 at 09:26:12PM +0000, Weber, Charles (NIH/NIA/IRP) [E] wrote:
>> HI everyone, long time xfs/quota user with new server and problem
>> hardware is HP BL460 G7 blade, qlogic fiber channel and 3Par 7200 storage
>> 3 16TB vols exported from 3Par to server via FC. These are thin volumes, but plenty of available backing storage.
>>
>> Server runs current patched CentOS 6.6
>> kernel 2.6.32-504.3.3.el6.x86_64
>> xfsprogs 2.1.1-16.el6
>> Default mkfs.xfs options for volumes
>>
>> mount options for logical volumes home_lv 39TB imap_lv 4.6TB
>> /dev/mapper/irphome_vg-home_lv on /home type xfs (rw,delaylog,inode64,nobarrier,logbsize=256k,uquota,prjquota)
>> /dev/mapper/irphome_vg-imap_lv on /mail type xfs (rw,delaylog,inode64,nobarrier,logbsize=256k,uquota,prjquota)
>>
>> Users are from large AD via winbind set to not enumerate. I saw
>> the bug with xfs_quota report not listing winbind defined user
>> names. Yes this happens to me.
>
> So just enumerate them by uid. (report -un)
>
>> I can assign project quota to smaller volume. xfs_quota will not
>> report it. I cannot assign a project quota to larger volume. I get
>> this error: xfs_quota: cannot set limits: Function not
>> implemented.
>
> You need to be more specific and document all your quota setup.
>
> http://xfs.org/index.php/XFS_FAQ#Q:_What_information_should_I_include_when_reporting_a_problem.3F
>
>> xfs_quota -x -c 'report -uh' /mail
>> User quota on /mail (/dev/mapper/irphome_vg-imap_lv)
>>                 Blocks
>> User ID    Used  Soft  Hard  Warn/Grace
>> ---------- ---------------------------------
>> root       2.2G     0     0  00 [------]
>>
>> xfs_quota -x -c 'report -uh' /home
>>
>> nothing is returned
>>
>> I can set user and project quotas on /mail but cannot see them. I have not tested them yet.
>> I cannot set user or project quotas on /home.
>> At one time I could definitely set usr quotas on /home. I did so and verified it worked.
>>
>> Any ideas what is messed up on the /home volume?
>
> Not without knowing a bunch more about your project quota setup.
>
> Cheers,
>
> Dave.
> -- 
> Dave Chinner
> david@fromorbit.com