
[xfs-masters] [Bug 3118] another crash on my nfs-server

To: xfs-masters@xxxxxxxxxxx
Subject: [xfs-masters] [Bug 3118] another crash on my nfs-server
From: bugme-daemon@xxxxxxxx
Date: Sun, 1 Aug 2004 11:43:34 -0700
Reply-to: xfs-masters@xxxxxxxxxxx
Sender: xfs-masters-bounce@xxxxxxxxxxx
http://bugme.osdl.org/show_bug.cgi?id=3118

------- Additional Comments From janfrode@xxxxxxxxxxxxxxx  2004-08-01 11:43 -------

Darn... down again (before the 8k-stack kernel was ready):

Unable to handle kernel paging request at virtual address 00020004
 printing eip:
c013e562
*pde = 00000000
Oops: 0002 [#1]
SMP
Modules linked in: ipv6 lp ipt_REJECT ipt_state ip_conntrack iptable_filter
ip_tables ohci_hcd dm_mod
CPU:    1
EIP:    0060:[<c013e562>]    Not tainted
EFLAGS: 00010016   (2.6.8-rc2)
EIP is at free_block+0x52/0xe0
eax: 00020000   ebx: da46d000   ecx: da46d040   edx: d88540a0
esi: f7ca0380   edi: 00000000   ebp: f7ca0398   esp: f7c52e8c
ds: 007b   es: 007b   ss: 0068
Process kswapd0 (pid: 43, threadinfo=f7c52000 task=f7c7ebd0)
Stack: f7ca03a8 0000001b f7cdf070 f7cdf070 00000296 f2428520 f7cd1000 c013e697
       0000001b f7cdf060 f7ca0380 f7cdf060 00000296 f2428520 00000080 c013ea09
       f2428540 f7c52efc 0000000f c016b968 f2428540 c016bc63 f144b988 f144b980
Call Trace:
 [<c013e697>] cache_flusharray+0xa7/0xb0
 [<c013ea09>] kmem_cache_free+0x49/0x50
 [<c016b968>] destroy_inode+0x28/0x40
 [<c016bc63>] dispose_list+0x43/0x80
 [<c016bf7a>] prune_icache+0xca/0x200
 [<c016c0e5>] shrink_icache_memory+0x35/0x40
 [<c01409a6>] shrink_slab+0x126/0x190
 [<c0141f2a>] balance_pgdat+0x1ea/0x270
 [<c0142085>] kswapd+0xd5/0xe0
 [<c011b3d0>] autoremove_wake_function+0x0/0x50
 [<c011b3d0>] autoremove_wake_function+0x0/0x50
 [<c0141fb0>] kswapd+0x0/0xe0
 [<c0103fdd>] kernel_thread_helper+0x5/0x18
Code: 89 50 04 89 02 8b 43 0c c7 03 00 01 10 00 31 d2 c7 43 04 00
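
(For context, a minimal sketch of what the "8k-stack kernel" mentioned above would look like in the build configuration, assuming it means turning off the i386 4KB-stack option available in 2.6.x kernels; the actual .config used on this machine is not part of this report, so treat the fragment below as an assumption:)

    # .config fragment for 8KB kernel stacks on i386, 2.6.8-rc2 era:
    # CONFIG_4KSTACKS is not set
    #
    # The crashing kernel above was presumably still built with:
    # CONFIG_4KSTACKS=y
    #
    # A suspected stack overflow (e.g. a deep XFS-over-NFS call chain) can
    # overwrite adjacent memory and later surface as slab corruption, which
    # would be consistent with the oops in free_block() during kswapd reclaim.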


------- You are receiving this mail because: -------
You are the assignee for the bug, or are watching the assignee.
