
Re: BUG: soft lockup detected on CPU#1!

To: Justin Piszcz <jpiszcz@xxxxxxxxxxxxxxx>
Subject: Re: BUG: soft lockup detected on CPU#1!
From: <raksac@xxxxxxxxx>
Date: Wed, 11 Feb 2009 15:34:09 -0800 (PST)
Cc: xfs@xxxxxxxxxxx
In-reply-to: <alpine.DEB.1.10.0902110420520.13264@xxxxxxxxxxxxxxxx>
With debug enabled it fails with this:

BUG: unable to handle kernel NULL pointer dereference at virtual address 00000000
 printing eip:
f8bd02c2
*pde = d658e067
Assertion failed: atomic_read(&ip->i_pincount) > 0, file: fs/xfs/xfs_inode.c, line: 2703
------------[ cut here ]------------
Kernel BUG at [verbose debug info unavailable]
invalid opcode: 0000 [#1]
PREEMPT SMP 
last sysfs file: /devices/pci0000:00/0000:00:1f.3/i2c-0/0-002e/temp2_input
Modules linked in: sg xfs sunrpc m24c02 pca9554 pca9555 mcp23016 lm85 hwmon_vid i2c_i801 i2c_core midplane uhci_hcd sk98lin tg3 e1000 mv_sata sd_mod ahci libata
CPU:    0
EIP:    0060:[<f8bfde19>]    Not tainted VLI
EFLAGS: 00010296   (2.6.18.rhel5 #2) 
EIP is at assfail+0xd/0x13 [xfs]
eax: 0000005c   ebx: e1fb4980   ecx: e4f46000   edx: 00000000
esi: 00000007   edi: e1fb0a58   ebp: 00000007   esp: e4f47eb8
ds: 007b   es: 007b   ss: 0068
Process xfslogd/0 (pid: 4370, ti=e4f46000 task=75118aa0 task.ti=e4f46000)
Stack: f8c10ccb f8c090ae f8c08d97 00000a8f f8bd0dca 00000526 f8be8e28 00000526
       00000007 e9db1d80 e861c008 e9db1da0 e861c000 00000003 e9db1d80 e9db1ca4
       e9db1ca0 00000000 f8be8f56 00000000 00000000 e9db1ca4 ea449880 ea449800
Call Trace:
 [<f8bd0dca>] xfs_iunpin+0x21/0x49 [xfs]
 [<f8be8e28>] xfs_trans_chunk_committed+0xc3/0xe6 [xfs]
 [<f8be8f56>] xfs_trans_committed+0x38/0xd1 [xfs]
 [<f8bdba5c>] xlog_state_do_callback+0x1b7/0x329 [xfs]
 [<f8bf66b7>] xfs_buf_iodone_work+0x41/0x63 [xfs]
 [<401294c5>] run_workqueue+0x71/0xae
 [<f8bf6676>] xfs_buf_iodone_work+0x0/0x63 [xfs]
 [<40129666>] worker_thread+0xd9/0x10a
 [<40116ca2>] default_wake_function+0x0/0xc
 [<4012958d>] worker_thread+0x0/0x10a
 [<4012be6d>] kthread+0xc1/0xec
 [<4012bdac>] kthread+0x0/0xec
 [<401038b3>] kernel_thread_helper+0x7/0x10
 =======================
Code: d2 df 51 47 89 ea b8 94 59 c2 f8 e8 a9 37 6a 47 83 c4 0c 85 ff 75 02 0f 0b 5b 5e 5f 5d c3 51 52 50 68 cb 0c c1 f8 e8 ab df 51 47 <0f> 0b 83 c4 10 c3 55 57 56 53 83 ec 14 89 44 24 04 89 d7 89 cd
EIP: [<f8bfde19>] assfail+0xd/0x13 [xfs] SS:ESP 0068:e4f47eb8
 <0>Kernel panic - not syncing: Fatal exception

--- Justin Piszcz <jpiszcz@xxxxxxxxxxxxxxx> wrote:

> 
> On Tue, 10 Feb 2009, raksac@xxxxxxxxx wrote:
> 
> >
> > Hello,
> >
> > I am running the 2.6.28-based xfs kernel driver on
> > a custom kernel with the following kernel config
> > options enabled.
> >
> > CONFIG_PREEMPT
> > CONFIG_DETECT_SOFTLOCKUP
> >
> > Running the following xfsqa test causes a soft
> > lockup. The configuration is an x86 with
> > Hyperthreading, 4 GB RAM, and an AHCI-connected
> > JBOD. It's 100% reproducible.
> >
> > Any suggestions/inputs on where to start debugging
> > the problem would be much appreciated.
> >
> > #! /bin/sh
> > # FS QA Test No. 008
> > #
> > # randholes test
> > #
> >
> > BUG: soft lockup detected on CPU#1!
> > [<4013d525>] softlockup_tick+0x9c/0xaf
> > [<40123246>] update_process_times+0x3d/0x60
> > [<401100ab>] smp_apic_timer_interrupt+0x52/0x58
> > [<40103633>] apic_timer_interrupt+0x1f/0x24
> > [<402a1557>] _spin_lock_irqsave+0x48/0x61
> > [<f8b8fe30>] xfs_iflush_cluster+0x16d/0x31c [xfs]
> > [<f8b9018b>] xfs_iflush+0x1ac/0x271 [xfs]
> > [<f8ba49a1>] xfs_inode_flush+0xd6/0xfa [xfs]
> > [<f8bb13c8>] xfs_fs_write_inode+0x27/0x40 [xfs]
> > [<401789d9>] __writeback_single_inode+0x1b0/0x2ff
> > [<40101ad5>] __switch_to+0x23/0x1f9
> > [<40178f87>] sync_sb_inodes+0x196/0x261
> > [<4017920a>] writeback_inodes+0x67/0xb1
> > [<401465df>] wb_kupdate+0x7b/0xe0
> > [<40146bc3>] pdflush+0x0/0x1b5
> > [<40146ce1>] pdflush+0x11e/0x1b5
> > [<40146564>] wb_kupdate+0x0/0xe0
> > [<4012be6d>] kthread+0xc1/0xec
> > [<4012bdac>] kthread+0x0/0xec
> > [<401038b3>] kernel_thread_helper+0x7/0x10
> > =======================
> >
> > Thanks,
> > Rakesh
> >
> >
> >
> >
> > _______________________________________________
> > xfs mailing list
> > xfs@xxxxxxxxxxx
> > http://oss.sgi.com/mailman/listinfo/xfs
> >
> 
> There were some pretty nasty bugs in 2.6.28 for XFS;
> can you reproduce it on 2.6.28.4?
> 



      
