<div dir="ltr"><div class="markdown-here-wrapper" style=""><p style="margin:0px 0px 1.2em!important">Hi Brian, </p>
<p style="margin:0px 0px 1.2em!important">There you go. </p>
<p style="margin:0px 0px 1.2em!important"></p><div class="markdown-here-exclude"><p></p><blockquote style="margin:0 0 0 40px;border:none;padding:0px"><div><div><a href="https://cloud.swiftstack.com/v1/AUTH_hugo/public/vmlinux">https://cloud.swiftstack.com/v1/AUTH_hugo/public/vmlinux</a></div></div><div><div><a href="https://cloud.swiftstack.com/v1/AUTH_hugo/public/System.map-2.6.32-504.23.4.el6.x86_64">https://cloud.swiftstack.com/v1/AUTH_hugo/public/System.map-2.6.32-504.23.4.el6.x86_64</a></div></div></blockquote><p></p></div><p style="margin:0px 0px 1.2em!important"></p>
<pre style="font-size:0.85em;font-family:Consolas,Inconsolata,Courier,monospace;font-size:1em;line-height:1.2em;margin:1.2em 0px"><code style="font-size:0.85em;font-family:Consolas,Inconsolata,Courier,monospace;margin:0px 0.15em;padding:0px 0.3em;white-space:pre-wrap;border:1px solid rgb(234,234,234);border-radius:3px;display:inline;background-color:rgb(248,248,248);white-space:pre;overflow:auto;border-radius:3px;border:1px solid rgb(204,204,204);padding:0.5em 0.7em;display:block!important">$ md5sum vmlinux
82aaa694a174c0a29e78c05e73adf5d8 vmlinux
</code></pre><p style="margin:0px 0px 1.2em!important">Yes, I can read it with this vmlinux image. Put all the files (vmcore, vmlinux, System.map) in one folder and run <code style="font-size:0.85em;font-family:Consolas,Inconsolata,Courier,monospace;margin:0px 0.15em;padding:0px 0.3em;white-space:pre-wrap;border:1px solid rgb(234,234,234);border-radius:3px;display:inline;background-color:rgb(248,248,248)">$ crash vmlinux vmcore</code></p>
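In shell form, a minimal sketch of that flow. The vmlinux below is a stand-in file so the checksum step can be run anywhere; with the real download, compare against the sum quoted above instead:

```shell
# Sketch of the verify-then-crash flow. The vmlinux here is a stand-in
# so the checksum step is demonstrable anywhere; with the real download,
# the expected sum is 82aaa694a174c0a29e78c05e73adf5d8.
workdir=$(mktemp -d)
cd "$workdir"
printf 'stand-in for the real vmlinux' > vmlinux
# Record the checksum once, then verify the local copy before using it:
expected=$(md5sum vmlinux | awk '{print $1}')
echo "$expected  vmlinux" | md5sum -c -
# With vmcore, vmlinux and System.map all in this folder, the real
# debugging session would then start with:
#   crash vmlinux vmcore
```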
<p style="margin:0px 0px 1.2em!important">Hugo </p>
</div></div><div class="gmail_extra"><br><div class="gmail_quote">2015-07-09 23:18 GMT+08:00 Brian Foster <span dir="ltr"><<a href="mailto:bfoster@redhat.com" target="_blank">bfoster@redhat.com</a>></span>:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">On Thu, Jul 09, 2015 at 09:20:00PM +0800, Kuo Hugo wrote:<br>
> Hi Brian,<br>
><br>
> *Operating System Version:*<br>
> Linux-2.6.32-504.23.4.el6.x86_64-x86_64-with-centos-6.6-Final<br>
><br>
> *NODE 1*<br>
><br>
> <a href="https://cloud.swiftstack.com/v1/AUTH_burton/brtnswift/vmcore" rel="noreferrer" target="_blank">https://cloud.swiftstack.com/v1/AUTH_burton/brtnswift/vmcore</a><br>
> <a href="https://cloud.swiftstack.com/v1/AUTH_burton/brtnswift/vmcore-dmesg.txt" rel="noreferrer" target="_blank">https://cloud.swiftstack.com/v1/AUTH_burton/brtnswift/vmcore-dmesg.txt</a><br>
><br>
><br>
> *NODE 2*<br>
<span class="">><br>
> <a href="https://cloud.swiftstack.com/v1/AUTH_burton/brtnswift/vmcore_r2obj02" rel="noreferrer" target="_blank">https://cloud.swiftstack.com/v1/AUTH_burton/brtnswift/vmcore_r2obj02</a><br>
> <a href="https://cloud.swiftstack.com/v1/AUTH_burton/brtnswift/vmcore-dmesg_r2obj02.txt" rel="noreferrer" target="_blank">https://cloud.swiftstack.com/v1/AUTH_burton/brtnswift/vmcore-dmesg_r2obj02.txt</a><br>
><br>
><br>
> Any thoughts would be appreciated<br>
><br>
<br>
</span>I'm not able to fire up crash with these core files and the kernel debug<br>
info from the following centos kernel debuginfo package:<br>
<br>
kernel-debuginfo-2.6.32-504.23.4.el6.centos.plus.x86_64.rpm<br>
<br>
It complains about a version mismatch between the vmlinux and core file.<br>
I'm no crash expert... are you sure the cores above correspond to this<br>
kernel? Does crash load up for you on said box if you run something like<br>
the following?<br>
<br>
crash /usr/lib/debug/lib/modules/.../vmlinux vmcore<br>
<br>
Note that you might need to install the above kernel-debuginfo package<br>
to get the debug (vmlinux) file. If so, could you also upload that<br>
debuginfo rpm somewhere?<br>
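As a sketch, assuming the standard RHEL/CentOS debuginfo layout and the kernel version quoted earlier in this thread, the elided path above typically expands like so:

```shell
# Sketch, assuming the standard RHEL/CentOS layout: the kernel-debuginfo
# rpm installs the debug vmlinux under /usr/lib/debug.
# Kernel version taken from this thread; adjust to `uname -r` on the box.
kver=2.6.32-504.23.4.el6.x86_64
vmlinux=/usr/lib/debug/lib/modules/$kver/vmlinux
# The suggested crash invocation then becomes:
echo "crash $vmlinux vmcore"
```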
<div class="HOEnZb"><div class="h5"><br>
Brian<br>
<br>
> Thanks // Hugo<br>
><br>
><br>
> 2015-07-09 20:51 GMT+08:00 Brian Foster <<a href="mailto:bfoster@redhat.com">bfoster@redhat.com</a>>:<br>
><br>
> > On Thu, Jul 09, 2015 at 06:57:55PM +0800, Kuo Hugo wrote:<br>
> > > Hi Folks,<br>
> > ><br>
> > > Running xfs_repair -n on all 32 disks turned up no errors.<br>
> > > We are now trying CentOS 6.6 for testing (the previous kernel<br>
> > > panic came from Ubuntu).<br>
> > > The CentOS nodes hit a kernel panic with the same daemon, but the<br>
> > > problem may differ a bit.<br>
> > ><br>
> > > - It was broken on xfs_dir2_sf_get_parent_ino+0xa/0x20 in Ubuntu.<br>
> > > - Here’s the log in CentOS. It’s broken on<br>
> > > xfs_dir2_sf_getdents+0x2a0/0x3a0<br>
> > ><br>
> ><br>
> > I'd venture to guess it's the same behavior here. The previous kernel<br>
> > had a callback for the parent inode number that was called via<br>
> > xfs_dir2_sf_getdents(). Taking a look at a 6.6 kernel, it has a static<br>
> > inline here instead.<br>
> ><br>
> > > <1>BUG: unable to handle kernel NULL pointer dereference at<br>
> > 0000000000000001<br>
> > > <1>IP: [<ffffffffa0362d60>] xfs_dir2_sf_getdents+0x2a0/0x3a0 [xfs]<br>
> > > <4>PGD 1072327067 PUD 1072328067 PMD 0<br>
> > > <4>Oops: 0000 [#1] SMP<br>
> > > <4>last sysfs file:<br>
> > ><br>
> > /sys/devices/pci0000:80/0000:80:03.2/0000:83:00.0/host10/port-10:1/expander-10:1/port-10:1:16/end_device-10:1:16/target10:0:25/10:0:25:0/block/sdz/queue/rotational<br>
> > > <4>CPU 17<br>
> > > <4>Modules linked in: xt_conntrack tun xfs exportfs iptable_filter<br>
> > > ipt_REDIRECT iptable_nat nf_nat nf_conntrack_ipv4 nf_conntrack<br>
> > > nf_defrag_ipv4 ip_tables ip_vs ipv6 libcrc32c iTCO_wdt<br>
> > > iTCO_vendor_support ses enclosure igb i2c_algo_bit sb_edac edac_core<br>
> > > i2c_i801 i2c_core sg shpchp lpc_ich mfd_core ixgbe dca ptp pps_core<br>
> > > mdio power_meter acpi_ipmi ipmi_si ipmi_msghandler ext4 jbd2 mbcache<br>
> > > sd_mod crc_t10dif mpt3sas scsi_transport_sas raid_class xhci_hcd ahci<br>
> > > wmi dm_mirror dm_region_hash dm_log dm_mod [last unloaded:<br>
> > > scsi_wait_scan]<br>
> > > <4><br>
> > > <4>Pid: 4454, comm: swift-object-se Not tainted<br>
> > > 2.6.32-504.23.4.el6.x86_64 #1 Silicon Mechanics Storform<br>
> > > R518.v5P/X10DRi-T4+<br>
> > > <4>RIP: 0010:[<ffffffffa0362d60>] [<ffffffffa0362d60>]<br>
> > > xfs_dir2_sf_getdents+0x2a0/0x3a0 [xfs]<br>
> > > <4>RSP: 0018:ffff880871f6de18 EFLAGS: 00010202<br>
> > > <4>RAX: 0000000000000000 RBX: 0000000000000004 RCX: 0000000000000000<br>
> > > <4>RDX: 0000000000000001 RSI: 0000000000000000 RDI: 00007faa74006203<br>
> > > <4>RBP: ffff880871f6de68 R08: 000000032eb04bc9 R09: 0000000000000004<br>
> > > <4>R10: 0000000000008030 R11: 0000000000000246 R12: 0000000000000000<br>
> > > <4>R13: 0000000000000002 R14: ffff88106eff7000 R15: ffff8808715b4580<br>
> > > <4>FS: 00007faa85425700(0000) GS:ffff880028360000(0000)<br>
> > knlGS:0000000000000000<br>
> > > <4>CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033<br>
> > > <4>CR2: 0000000000000001 CR3: 0000001072325000 CR4: 00000000001407e0<br>
> > > <4>DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000<br>
> > > <4>DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400<br>
> > > <4>Process swift-object-se (pid: 4454, threadinfo ffff880871f6c000,<br>
> > > task ffff880860f18ab0)<br>
> > > <4>Stack:<br>
> > > <4> ffff880871f6de28 ffffffff811a4bb0 ffff880871f6df38 ffff880874749cc0<br>
> > > <4><d> 0000000100000103 ffff8802381f8c00 ffff880871f6df38<br>
> > ffff8808715b4580<br>
> > > <4><d> 0000000000000082 ffff8802381f8d88 ffff880871f6dec8<br>
> > ffffffffa035ab31<br>
> > > <4>Call Trace:<br>
> > > <4> [<ffffffff811a4bb0>] ? filldir+0x0/0xe0<br>
> > > <4> [<ffffffffa035ab31>] xfs_readdir+0xe1/0x130 [xfs]<br>
> > > <4> [<ffffffff811a4bb0>] ? filldir+0x0/0xe0<br>
> > > <4> [<ffffffffa038fe29>] xfs_file_readdir+0x39/0x50 [xfs]<br>
> > > <4> [<ffffffff811a4e30>] vfs_readdir+0xc0/0xe0<br>
> > > <4> [<ffffffff8119bd86>] ? final_putname+0x26/0x50<br>
> > > <4> [<ffffffff811a4fb9>] sys_getdents+0x89/0xf0<br>
> > > <4> [<ffffffff8100b0f2>] system_call_fastpath+0x16/0x1b<br>
> > > <4>Code: 01 00 00 00 48 c7 c6 38 6b 3a a0 48 8b 7d c0 ff 55 b8 85 c0<br>
> > > 0f 85 af 00 00 00 49 8b 37 e9 ec fd ff ff 66 0f 1f 84 00 00 00 00 00<br>
> > > <41> 80 7c 24 01 00 0f 84 9c 00 00 00 45 0f b6 44 24 03 41 0f b6<br>
> > > <1>RIP [<ffffffffa0362d60>] xfs_dir2_sf_getdents+0x2a0/0x3a0 [xfs]<br>
> > > <4> RSP <ffff880871f6de18><br>
> > > <4>CR2: 0000000000000001<br>
> > ><br>
> > ...<br>
> > ><br>
> > > I’ve got the vmcore dump from the operator. Does a vmcore help for<br>
> > > troubleshooting this kind of issue?<br>
> > ><br>
> ><br>
> > Hmm, well it couldn't hurt. Is the vmcore based on this 6.6 kernel? Can<br>
> > you provide the exact kernel version and post the vmcore somewhere?<br>
> ><br>
> > Brian<br>
> ><br>
> > > Thanks // Hugo<br>
> > > <br>
> > ><br>
> > > 2015-06-18 22:59 GMT+08:00 Eric Sandeen <<a href="mailto:sandeen@sandeen.net">sandeen@sandeen.net</a>>:<br>
> > ><br>
> > > > On 6/18/15 9:29 AM, Kuo Hugo wrote:<br>
> > > > >>- Have you tried an 'xfs_repair -n' of the affected filesystem? Note<br>
> > > > that -n will report problems only and prevent any modification by<br>
> > repair.<br>
> > > > ><br>
> > > > > *We might try xfs_repair if we can identify which disk causes the<br>
> > > > > issue.*<br>
> > > ><br>
> > > > If you do, please save the output, and if it finds anything, please<br>
> > > > provide the output in this thread.<br>
> > > ><br>
> > > > Thanks,<br>
> > > > -Eric<br>
> > > ><br>
> ><br>
> > > _______________________________________________<br>
> > > xfs mailing list<br>
> > > <a href="mailto:xfs@oss.sgi.com">xfs@oss.sgi.com</a><br>
> > > <a href="http://oss.sgi.com/mailman/listinfo/xfs" rel="noreferrer" target="_blank">http://oss.sgi.com/mailman/listinfo/xfs</a><br>
> ><br>
> ><br>
</div></div></blockquote></div><br></div>