| To: | xfs@xxxxxxxxxxx |
|---|---|
| Subject: | Looking to confirm issue and seek advice on fix for inode btree fragmentation |
| From: | Pippin Wallace <nippip@xxxxxxxxx> |
| Date: | Thu, 13 Aug 2015 17:35:31 -0600 |
I am writing to confirm the issue I think I have and to get advice on the best short term and long term fixes. From reading the following threads and others, I am fairly sure I have inode btree fragmentation, and I have included detailed output below to help confirm this.

Is XFS suitable for 350 million files on 20TB storage?
http://oss.sgi.com/archives/xfs/2014-09/msg00046.html

Bad performance on touch/cp file on XFS system
http://oss.sgi.com/archives/xfs/2014-08/msg00348.html

Extremely slow file creation/deletion after xfs ran full
http://oss.sgi.com/archives/xfs/2015-01/msg00203.html

In addition to a diagnosis, I want to ensure I take the right steps to improve performance in the short term and fix it correctly and thoroughly for the long term. Our workload is heavy write/delete with light reads. Most files are smallish, and we cannot mount XFS with inode64 because our primary application is 32 bit. Our hosts run as VMs on a KVM hypervisor.

Dave made the following suggestion in http://oss.sgi.com/archives/xfs/2014-08/msg00349.html. If this is still the best short term fix, would you please direct me to the "How To" for doing it? (My rough understanding of the procedure is sketched below, just before the diagnostic output.)

"filling in all the holes (by creating a bunch of zero length files in the appropriate AGs) might take some time, but it should make the problem go away until you remove more files and create random free inode holes again..."

Is this also a possible short term fix?
http://osvault.blogspot.com/2011/03/fixing-1tbyte-inode-problem-in-xfs-file.html

The long term fix is to upgrade the kernel to >= 3.16 and xfsprogs to >= 3.2.1, then rebuild the filesystem with the new finobt structure:

# mkfs.xfs -m crc=1,finobt=1 <dev>

Below is the information others have asked for in previous threads, followed by data collected per "What information should I include when reporting a problem?" from your FAQ. If a command line below starts with VM# it was run on the virtual machine (guest OS); if it starts with HV# it was run on the physical hypervisor machine hosting the guest OS.
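In case it helps the discussion along, here is my rough understanding of the "filling in the holes" procedure as a shell sketch. The directory name and file count are made up for illustration, and I would welcome corrections if I have the mechanism wrong; my understanding is that XFS allocates a new file's inode in or near the AG of its parent directory, so the parent has to already live in the AG whose free inode holes need filling.

#!/bin/bash
# Sketch only (untested). PARENT and COUNT are hypothetical values.
# Directory inodes rotor across AGs, so it may take creating several
# directories (checking each with ls -i; the AG is encoded in the high
# bits of the inode number) before one lands in the AG to be filled.
PARENT=/exports/agfill0
COUNT=100000

mkdir -p "$PARENT"
for ((i = 0; i < COUNT; i++)); do
    touch "$PARENT/fill.$i"
done

If I understand the suggestion correctly, the zero length files are then simply left in place so the allocator no longer has to search past the holes.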
VM# perf top
Samples: 378K of event 'cpu-clock', Event count (approx.): 188377
 14.10%  [kernel]  [k] xfs_inobt_get_rec
  9.58%  [kernel]  [k] xfs_btree_increment
  9.08%  [kernel]  [k] xfs_btree_get_rec
  8.83%  [kernel]  [k] _xfs_buf_find
  5.84%  [kernel]  [k] xfs_btree_get_block
  3.91%  [kernel]  [k] xfs_btree_rec_offset
  3.85%  [kernel]  [k] xfs_dialloc_ag
  3.64%  [kernel]  [k] xfs_btree_readahead
  3.15%  [kernel]  [k] _raw_spin_unlock_irqrestore
  2.97%  [kernel]  [k] xfs_btree_rec_addr
  2.31%  [kernel]  [k] xfs_trans_buf_item_match
  1.67%  [kernel]  [k] xfs_btree_setbuf

VM# time xfs_db -r -c "frag" /dev/vdc1
actual 80103457, ideal 79500897, fragmentation factor 0.75%
real    53m19.167s
user    0m35.050s
sys     3m35.390s

VM# xfs_db -r -c "frag -d" /dev/vdc1
actual 8947229, ideal 8406330, fragmentation factor 6.05%

VM# df -i /dev/vdc1
Filesystem        Inodes       IUsed        IFree IUse% Mounted on
/dev/vdc1  6,442,426,368 152,347,297 6,290,079,071    3% /exports

VM# for i in `seq 0 $((16-1))`; do echo "freespace in ag$i"; xfs_db -r -c "freesp -s -a $i" /dev/vdc1 | grep "total free" | sed 's/^/ /g'; done
freespace in ag0
 total free extents 611688
 total free blocks 45590165
freespace in ag1
 total free extents 606085
 total free blocks 45576240
freespace in ag2
 total free extents 606482
 total free blocks 45571903
freespace in ag3
 total free extents 604576
 total free blocks 45572111
freespace in ag4
 total free extents 74367
 total free blocks 15129234
freespace in ag5
 total free extents 172955
 total free blocks 17015698
freespace in ag6
 total free extents 176759
 total free blocks 14716774
freespace in ag7
 total free extents 170481
 total free blocks 15031152
freespace in ag8
 total free extents 173346
 total free blocks 15677354
freespace in ag9
 total free extents 177170
 total free blocks 14914037
freespace in ag10
 total free extents 169536
 total free blocks 16267638
freespace in ag11
 total free extents 173325
 total free blocks 14618827
freespace in ag12
 total free extents 174743
 total free blocks 14634872
freespace in ag13
 total free extents 173378
 total free blocks 16412626
freespace in ag14
 total free extents 171388
 total free blocks 15198251
freespace in ag15
 total free extents 170986
 total free blocks 13790690
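In case it is useful alongside the freesp numbers, I also tried to find a way to read the per-AG free inode counts directly from the AGI headers. A sketch, assuming I have the xfs_db field name (freecount) right:

VM# for i in `seq 0 31`; do echo -n "ag$i: "; xfs_db -r -c "agi $i" -c "print freecount" /dev/vdc1; done

My thinking is that large freecount values concentrated in a few AGs would line up with the long xfs_inobt_get_rec searches that perf shows.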
"What information should I include when reporting a problem?"

VM# uname -a
Linux 3.8.13-55.1.2.el6uek.x86_64 #2 SMP Thu Dec 18 00:15:51 PST 2014 x86_64 x86_64 x86_64 GNU/Linux

HV# uname -a
Linux 2.6.32-431.el6.x86_64 #1 SMP Wed Nov 20 23:56:07 PST 2013 x86_64 x86_64 x86_64 GNU/Linux

VM# xfs_repair -V
xfs_repair version 3.1.11

VM# dmidecode | grep -A3 Product
        Product Name: KVM
        Version: RHEL 6.5.0 PC

HV# dmidecode | egrep -A4 "Product|Processor"
        Product Name: SUN SERVER X4-2L
Processor Information
        Socket Designation: P0
        Family: Xeon
        Signature: Type 0, Family 6, Model 62, Stepping 4
Processor Information
        Socket Designation: P1
        Family: Xeon
        Signature: Type 0, Family 6, Model 62, Stepping 4

VM# cat /proc/cpuinfo   (16 virtual CPUs in total, all with the same settings as cpu 0 below)
processor       : 0
vendor_id       : GenuineIntel
cpu family      : 6
model           : 13
model name      : QEMU Virtual CPU version (cpu64-rhel6)
stepping        : 3
microcode       : 0x1
cpu MHz         : 2693.508
cache size      : 4096 KB
fpu             : yes
fpu_exception   : yes
cpuid level     : 4
wp              : yes
flags           : fpu de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pse36 clflush mmx fxsr sse sse2 syscall nx lm nopl pni cx16 hypervisor lahf_lm
bogomips        : 5387.01
clflush size    : 64
cache_alignment : 64
address sizes   : 46 bits physical, 48 bits virtual
power management:

VM# cat /proc/meminfo
MemTotal:       49463428 kB
MemFree:          931832 kB
Buffers:           81504 kB
Cached:         16499856 kB
SwapCached:            0 kB
Active:         21534272 kB
Inactive:       10518940 kB
Active(anon):   13615056 kB
Inactive(anon):  1859060 kB
Active(file):    7919216 kB
Inactive(file):  8659880 kB
Unevictable:           0 kB
Mlocked:               0 kB
SwapTotal:       2096124 kB
SwapFree:        2096124 kB
Dirty:            117192 kB
Writeback:             0 kB
AnonPages:      15468324 kB
Mapped:            39484 kB
Shmem:              3888 kB
Slab:            7883016 kB
SReclaimable:    6149288 kB
SUnreclaim:      1733728 kB
KernelStack:       11000 kB
PageTables:       180756 kB
NFS_Unstable:          0 kB
Bounce:                0 kB
WritebackTmp:          0 kB
CommitLimit:    26827836 kB
Committed_AS:   21133332 kB
VmallocTotal:   34359738367 kB
VmallocUsed:      112668 kB
VmallocChunk:   34359623568 kB
HardwareCorrupted:     0 kB
HugePages_Total:       0
HugePages_Free:        0
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB
DirectMap4k:       10224 kB
DirectMap2M:    50321408 kB
VM# cat /proc/mounts
rootfs / rootfs rw 0 0
proc /proc proc rw,relatime 0 0
sysfs /sys sysfs rw,relatime 0 0
devtmpfs /dev devtmpfs rw,relatime,size=24719668k,nr_inodes=6179917,mode=755 0 0
devpts /dev/pts devpts rw,relatime,gid=5,mode=620,ptmxmode=000 0 0
tmpfs /dev/shm tmpfs rw,relatime 0 0
/dev/vda3 / ext3 rw,relatime,errors=continue,user_xattr,acl,barrier=1,data="" 0 0
proc /rhel3/proc proc rw,relatime 0 0
/dev/vda5 /tmp ext3 rw,relatime,errors=continue,user_xattr,acl,barrier=1,data="" 0 0
/dev/vda1 /boot ext3 rw,relatime,errors=continue,user_xattr,acl,barrier=1,data="" 0 0
/dev/vdc1 /exports xfs rw,noatime,attr2,inode32,nobarrier,logbsize=256k,sunit=512,swidth=11264,noquota 0 0
/dev/vdb1 /mnt xfs rw,noatime,attr2,inode32,nobarrier,logbsize=256k,sunit=512,swidth=11264,noquota 0 0
/dev/vdc1 /www xfs rw,noatime,attr2,inode32,nobarrier,logbsize=256k,sunit=512,swidth=11264,noquota 0 0
/dev/vdc1 /rhel3/www xfs rw,noatime,attr2,inode32,nobarrier,logbsize=256k,sunit=512,swidth=11264,noquota 0 0
/dev/vdc1 /chroot xfs rw,noatime,attr2,inode32,nobarrier,logbsize=256k,sunit=512,swidth=11264,noquota 0 0
/dev/vdc1 /home xfs rw,noatime,attr2,inode32,nobarrier,logbsize=256k,sunit=512,swidth=11264,noquota 0 0
/dev/vdc1 /nfscommon xfs rw,noatime,attr2,inode32,nobarrier,logbsize=256k,sunit=512,swidth=11264,noquota 0 0
/dev/vdc1 /opt xfs rw,noatime,attr2,inode32,nobarrier,logbsize=256k,sunit=512,swidth=11264,noquota 0 0
/dev/vdc1 /scratch xfs rw,noatime,attr2,inode32,nobarrier,logbsize=256k,sunit=512,swidth=11264,noquota 0 0
none /proc/sys/fs/binfmt_misc binfmt_misc rw,relatime 0 0
sunrpc /var/lib/nfs/rpc_pipefs rpc_pipefs rw,relatime 0 0
nfsd /proc/fs/nfsd nfsd rw,relatime 0 0
10.140.110.106:/export/nfs/qa /nfs/qa nfs rw,noatime,nodiratime,vers=3,rsize=32768,wsize=32768,namlen=255,hard,nocto,noacl,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=10.140.110.106,mountvers=3,mountport=36653,mountproto=udp,local_lock=none,addr=10.140.110.106 0 0
10.140.110.106:/export/nfs/install /nfs/install nfs rw,noatime,nodiratime,vers=3,rsize=32768,wsize=32768,namlen=255,hard,nocto,noacl,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=10.140.110.106,mountvers=3,mountport=36653,mountproto=udp,local_lock=none,addr=10.140.110.106 0 0
10.140.110.105:/export/home /nfs/users nfs rw,noatime,nodiratime,vers=3,rsize=32768,wsize=32768,namlen=255,hard,nocto,noacl,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=10.140.110.105,mountvers=3,mountport=37692,mountproto=udp,local_lock=none,addr=10.140.110.105 0 0
10.140.110.105:/export/nfs/project /nfs/project nfs rw,noatime,nodiratime,vers=3,rsize=32768,wsize=32768,namlen=255,hard,nocto,noacl,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=10.140.110.105,mountvers=3,mountport=37692,mountproto=udp,local_lock=none,addr=10.140.110.105 0 0
10.140.110.105:/export/nfs/data /nfs/data nfs rw,noatime,nodiratime,vers=3,rsize=32768,wsize=32768,namlen=255,hard,nocto,noacl,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=10.140.110.105,mountvers=3,mountport=37692,mountproto=udp,local_lock=none,addr=10.140.110.105 0 0
none /sys/kernel/debug debugfs rw,relatime 0 0

VM# cat /proc/partitions
major minor     #blocks  name
 251    0   104857600   vda
 251    1      204800   vda1
 251    2     2096128   vda2
 251    3    16595968   vda3
 251    4           1   vda4
 251    5    85958656   vda5
 251   16  6442450944   vdb
 251   17  6442434560   vdb1
 251   32  6442450944   vdc
 251   33  6442426368   vdc1

VM# xfs_info /dev/vdc1
meta-data=/dev/vdc1              isize=256    agcount=32, agsize=50331456 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=0
data     =                       bsize=4096   blocks=1610606592, imaxpct=25
         =                       sunit=64     swidth=1408 blks
naming   =version 2              bsize=4096   ascii-ci=0
log      =internal               bsize=4096   blocks=521728, version=2
         =                       sectsz=512   sunit=64 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
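One thing I tried to work out from this geometry, and I would appreciate a sanity check on the arithmetic: with inode32, the 32-bit inode number has to encode the AG index, the block within the AG, and the inode within the block. If I have it right, that confines all inodes to AGs 0-3 here, which would explain why ag0-ag3 carry roughly 600k free extents each in the freesp output above versus far fewer in the rest:

# agsize=50331456 blocks needs 26 bits (2^25 < 50331456 <= 2^26)
# 4096-byte blocks / 256-byte inodes = 16 inodes per block = 4 bits
# bits left for the AG index in a 32-bit inode number:
echo $(( 32 - 26 - 4 ))    # = 2, so only 2^2 = 4 AGs can ever hold inodes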
VM# iostat -x -d -m 5
Linux 3.8.13-55.1.2.el6uek.x86_64 (hmswebdv01.int.dv.lan)  08/13/2015  _x86_64_

Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await svctm %util
vda  0.13 23.96 4.69 9.77 0.08 0.13 29.83 0.17 11.64 2.64 3.81
vda1 0.00 0.01 0.00 0.00 0.00 0.00 86.76 0.00 7.73 4.22 0.00
vda2 0.00 0.00 0.00 0.00 0.00 0.00 14.68 0.00 2.08 1.81 0.00
vda3 0.13 19.58 4.65 8.29 0.08 0.11 29.68 0.15 11.36 2.79 3.60
vda4 0.00 0.00 0.00 0.00 0.00 0.00 7.67 0.00 29.50 29.50 0.00
vda5 0.01 4.37 0.04 1.44 0.00 0.02 32.06 0.02 14.22 2.41 0.35
vdb  0.00 0.00 36.47 0.13 0.28 0.04 17.96 0.11 3.09 2.87 10.50
vdb1 0.00 0.00 36.47 0.13 0.28 0.04 17.96 0.11 3.09 2.87 10.50
vdc  56.10 85.36 447.04 461.48 9.21 10.85 45.20 1.29 1.42 0.99 89.77
vdc1 56.10 85.36 447.04 461.48 9.21 10.85 45.20 1.28 1.42 0.99 89.77

Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await svctm %util
vda  0.00 13.20 1.40 3.00 0.01 0.06 33.45 0.01 2.91 2.32 1.02
vda1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
vda2 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
vda3 0.00 10.00 1.40 2.60 0.01 0.05 29.60 0.01 3.20 2.55 1.02
vda4 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
vda5 0.00 3.20 0.00 0.40 0.00 0.01 72.00 0.00 0.00 0.00 0.00
vdb  0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
vdb1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
vdc  0.00 0.00 1222.80 60.60 45.01 5.81 81.10 7.49 5.79 0.78 100.00
vdc1 0.00 0.00 1222.80 60.60 45.01 5.81 81.10 7.49 5.79 0.78 100.00

Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await svctm %util
vda  0.00 10.20 1.20 17.60 0.04 0.11 16.51 0.06 2.96 0.66 1.24
vda1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
vda2 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
vda3 0.00 9.40 1.20 13.20 0.04 0.09 18.78 0.05 3.57 0.86 1.24
vda4 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
vda5 0.00 0.80 0.00 4.20 0.00 0.02 9.52 0.00 1.00 0.05 0.02
vdb  0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
vdb1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
vdc  0.00 0.40 928.00 432.40 23.05 13.05 54.35 9.65 7.14 0.74 100.00
vdc1 0.00 0.40 928.00 432.40 23.05 13.05 54.35 9.65 7.14 0.74 100.00

Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await svctm %util
vda  0.00 7.60 0.00 2.40 0.00 0.04 32.00 0.10 41.67 16.33 3.92
vda1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
vda2 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
vda3 0.00 7.60 0.00 2.00 0.00 0.04 38.40 0.09 46.60 16.20 3.24
vda4 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
vda5 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
vdb  0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
vdb1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
vdc  0.00 293.20 352.20 1753.00 8.38 43.34 50.32 71.98 33.91 0.48 100.00
vdc1 0.00 293.20 352.20 1753.00 8.38 43.34 50.32 71.98 33.91 0.48 100.00

Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await svctm %util
vda  0.00 15.00 1.20 6.40 0.04 0.08 32.00 0.42 55.29 23.24 17.66
vda1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
vda2 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
vda3 0.00 14.40 1.20 5.80 0.04 0.08 33.60 0.39 55.77 20.97 14.68
vda4 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
vda5 0.00 0.60 0.00 0.40 0.00 0.00 20.00 0.02 60.50 60.50 2.42
vdb  0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
vdb1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
vdc  0.20 676.20 20.40 3131.60 0.20 26.83 17.56 144.45 45.74 0.32 100.00
vdc1 0.20 676.20 20.40 3131.60 0.20 26.83 17.56 144.45 45.74 0.32 100.00

Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await svctm %util
vda  0.00 3.60 2.80 2.60 0.02 0.02 17.78 0.48 89.63 52.41 28.30
vda1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
vda2 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
vda3 0.00 3.60 2.80 2.60 0.02 0.02 17.78 0.48 89.63 52.41 28.30
vda4 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
vda5 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
vdb  0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
vdb1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
vdc  1.40 518.00 51.80 3012.00 0.62 25.65 17.56 149.01 48.58 0.33 100.00
vdc1 1.40 518.00 51.80 3012.00 0.62 25.65 17.56 149.01 48.58 0.33 100.00

Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await svctm %util
vda  0.00 2.60 0.00 2.20 0.00 0.02 16.73 0.05 22.73 19.91 4.38
vda1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
vda2 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
vda3 0.00 2.60 0.00 2.00 0.00 0.02 18.40 0.02 9.60 6.50 1.30
vda4 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
vda5 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
vdb  0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
vdb1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
vdc  3.40 62.00 65.40 1936.20 1.84 9.35 11.45 153.92 76.42 0.50 100.00
vdc1 3.40 62.00 65.40 1936.20 1.84 9.35 11.45 153.92 76.42 0.50 100.00

Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await svctm %util
vda  0.00 3.80 0.20 2.80 0.00 0.02 17.07 0.17 55.47 30.87 9.26
vda1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
vda2 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
vda3 0.00 3.80 0.20 2.40 0.00 0.02 19.69 0.15 58.69 30.31 7.88
vda4 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
vda5 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
vdb  0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
vdb1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
vdc  1.00 224.20 379.20 1752.60 5.32 14.94 19.46 100.70 47.71 0.47 100.00
vdc1 1.00 224.20 379.20 1752.60 5.32 14.94 19.46 100.70 47.71 0.47 100.00

Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await svctm %util
vda  0.00 15.00 3.20 7.00 0.04 0.09 25.25 0.04 4.29 2.49 2.54
vda1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
vda2 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
vda3 0.00 15.00 3.20 6.80 0.04 0.09 25.76 0.04 4.38 2.54 2.54
vda4 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
vda5 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
vdb  0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
vdb1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
vdc  0.40 45.00 390.60 1060.60 8.16 8.61 23.66 80.59 56.24 0.68 99.20
vdc1 0.40 45.00 390.60 1060.60 8.16 8.61 23.66 80.59 56.24 0.68 99.20

Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await svctm %util
vda  0.00 91.40 3.00 48.20 0.03 0.55 23.00 0.11 2.20 0.33 1.70
vda1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
vda2 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
vda3 0.00 66.00 3.00 38.40 0.03 0.41 21.64 0.10 2.34 0.41 1.68
vda4 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
vda5 0.00 25.40 0.00 9.80 0.00 0.14 28.73 0.02 1.61 0.08 0.08
vdb  0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
vdb1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
vdc  0.00 0.20 863.60 191.60 30.72 13.77 86.35 6.17 5.85 0.95 99.86
vdc1 0.00 0.20 863.60 191.60 30.72 13.77 86.35 6.17 5.85 0.95 99.86

Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await svctm %util
vda  0.00 32.20 0.00 13.80 0.00 0.18 26.55 1.20 86.84 6.93 9.56
vda1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
vda2 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
vda3 0.00 32.20 0.00 13.60 0.00 0.18 26.94 1.20 88.10 7.01 9.54
vda4 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
vda5 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
vdb  0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
vdb1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
vdc  0.00 0.60 791.00 359.00 16.53 36.44 94.34 25.17 21.87 0.87 100.00
vdc1 0.00 0.60 791.00 359.00 16.53 36.44 94.34 25.17 21.87 0.87 100.00

Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await svctm %util
vda  0.00 16.80 0.80 23.00 0.00 0.15 13.58 0.03 1.08 0.24 0.58
vda1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
vda2 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
vda3 0.00 11.00 0.80 17.20 0.00 0.11 12.98 0.02 1.26 0.31 0.56
vda4 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
vda5 0.00 5.80 0.00 5.40 0.00 0.04 16.59 0.00 0.59 0.07 0.04
vdb  0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
vdb1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
vdc  0.00 0.00 1143.40 12.40 40.78 3.05 77.67 6.23 5.39 0.87 100.02
vdc1 0.00 0.00 1143.40 12.40 40.78 3.05 77.67 6.23 5.39 0.87 100.02
VM# vmstat 5 12
procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu-----
 r  b swpd    free   buff    cache si so    bi    bo    in    cs us sy id wa st
 6  3    0 1022148  77808 16266636  0  0   618   712     0     0  4 10 80  6  0
 4  1    0 1086272  77384 16232348  0  0 24294 39514 19557 15659  2 24 54 20  0
 0  4    0 1079320  77324 16239100  0  0  4667 25202  8772 11317  1  5 74 20  0
 3  3    0 1094136  77308 16234064  0  0  1102 27323  8101 12963  3  4 79 14  0
 5 10    0 1178908  77296 16156332  0  0  1370 18625  9776 11009  1  8 64 27  0
 4  5    0 1387024  77020 15991676  0  0 10094  3260 19037 14185  1 23 61 14  0
 4  3    0 1300224  77028 15920712  0  0  3219 20062 17265 14902  7 18 63 12  0
 1 11    0 1522772  77064 15796712  0  0  1095 23720 13740 15666  7 11 50 32  0
 6  1    0 1557288  77172 15810492  0  0  4626 31199 16900 14040  6 18 53 23  0
 5  2    0 1529284  77204 15850112  0  0  9396  6547 17183 13949  2 21 57 20  0
 3  1    0 1480028  77212 15881516  0  0 10033  3002 20339 14766  1 26 64  9  0
 6  2    0 1447844  77228 15915300  0  0 10855  4226 19170 14180  3 23 65  9  0

HV# /opt/MegaRAID/MegaCli/MegaCli64 -LdPdInfo -a0
Adapter #0

Number of Virtual Disks: 2
Virtual Drive: 0 (Target Id: 0)
Name:
RAID Level: Primary-6, Secondary-0, RAID Level Qualifier-3
Size: 45.998 GB
Parity Size: 4.181 GB
State: Optimal
Strip Size: 256 KB
Number Of Drives: 24
Span Depth: 1
Default Cache Policy: WriteBack, ReadAhead, Direct, No Write Cache if Bad BBU
Current Cache Policy: WriteBack, ReadAhead, Direct, No Write Cache if Bad BBU
Default Access Policy: Read/Write
Current Access Policy: Read/Write
Disk Cache Policy: Disabled
Encryption Type: None
Is VD Cached: No
Number of Spans: 1
Span: 0 - Number of PDs: 24

PD: 0 Information
Enclosure Device ID: 34
Slot Number: 0
Drive's postion: DiskGroup: 0, Span: 0, Arm: 0
Enclosure position: 0
Device Id: 35
WWN: 5000CCA01D350CC3
Sequence Number: 2
Media Error Count: 0
Other Error Count: 0
Predictive Failure Count: 0
Last Predictive Failure Event Seq Number: 0
PD Type: SAS
Raw Size: 1.090 TB [0x8bba0cb0 Sectors]
Non Coerced Size: 1.090 TB [0x8baa0cb0 Sectors]
Coerced Size: 1.089 TB [0x8b94f800 Sectors]
Firmware state: Online, Spun Up
Is Commissioned Spare: NO
Device Firmware Level: A600
Shield Counter: 0
Successful diagnostics completion on: N/A
SAS Address(0): 0x5000cca01d350cc1
SAS Address(1): 0x0
Connected Port Number: 0(path0)
Inquiry Data: HGST H101212SESUN1.2TA6001404DY5EPE
FDE Enable: Disable
Secured: Unsecured
Locked: Unlocked
Needs EKM Attention: No
Foreign State: None
Device Speed: 6.0Gb/s
Link Speed: 6.0Gb/s
Media Type: Hard Disk Device
Drive Temperature: 26C (78.80 F)
PI Eligibility: No
Drive is formatted for PI information: No
PI: No PI
Drive's write cache: Disabled
Port-0:
Port status: Active
Port's Linkspeed: 6.0Gb/s
Port-1:
Port status: Active
Port's Linkspeed: Unknown
Drive has flagged a S.M.A.R.T alert: No

KERNEL OPTIONS: Which of these kernels should I choose for the best XFS support for this issue?

kernel-3.16.7-3.16.y.20141114.ol6.x86_64.rpm   14-Nov-2014 17:55   27.7 M
kernel-3.17.3-3.17.y.20141114.ol6.x86_64.rpm   14-Nov-2014 23:55   26.6 M
kernel-3.17.3-3.17.y.20141118.ol6.x86_64.rpm   18-Nov-2014 09:13   26.6 M
kernel-3.17.8-3.17.y.20150113.ol6.x86_64.rpm   14-Jan-2015 06:50   27.9 M
kernel-3.18.6-3.18.y.20150210.ol6.x86_64.rpm   10-Feb-2015 09:52   30.9 M
kernel-3.18.7-3.18.y.20150217.ol6.x86_64.rpm   17-Feb-2015 09:53   30.9 M
kernel-3.18.8-3.18.y.20150303.ol6.x86_64.rpm   03-Mar-2015 09:55   30.9 M
kernel-3.19.5-3.19.y.20150421.ol6.x86_64.rpm   22-Apr-2015 05:42   31.1 M
kernel-3.19.6-3.19.y.20150505.ol6.x86_64.rpm   06-May-2015 05:41   29.8 M
kernel-3.19.8-3.19.y.20150512.ol6.x86_64.rpm   13-May-2015 05:42   29.8 M
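On the long term fix, my plan would be to verify that the rebuilt filesystem really has the free inode btree before reloading data. A sketch, assuming the newer xfsprogs reports a finobt flag in xfs_info, and using /dev/vdc1 purely as the example device:

# after upgrading to kernel >= 3.16 and xfsprogs >= 3.2.1, and after backing up /exports:
mkfs.xfs -f -m crc=1,finobt=1 /dev/vdc1
mount /dev/vdc1 /exports
xfs_info /exports | grep finobt    # expecting finobt=1 in the output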
Thank you for your time and comments.

Regards,
Pippin Wallace
Bozeman, Montana