
[PATCH][2.5.73] stack corruption in devfs_lookup

To: linux-kernel@xxxxxxxxxxxxxxx
Subject: [PATCH][2.5.73] stack corruption in devfs_lookup
From: Andrey Borzenkov <arvidjaar@xxxxxxx>
Date: Sun, 6 Jul 2003 21:06:53 +0400
Cc: devfs@xxxxxxxxxxx
In-reply-to: <Pine.LNX.4.55.0305050005230.1278@marabou.research.att.com>
References: <E198K0q-000Am8-00.arvidjaar-mail-ru@f23.mail.ru> <Pine.LNX.4.55.0304231157560.1309@marabou.research.att.com> <Pine.LNX.4.55.0305050005230.1278@marabou.research.att.com>
Sender: devfs-bounce@xxxxxxxxxxx
User-agent: KMail/1.5
Doing concurrent lookups for the same name in devfs, with devfsd and modules 
enabled, may result in stack corruption.

When devfs_lookup needs to call devfsd it arranges for other lookups for the 
same name to wait. It uses a local variable as the wait queue head. After 
devfsd returns, devfs_lookup wakes up all waiters and returns. Unfortunately 
there is no guarantee that all waiters will actually get a chance to run and 
clean up before devfs_lookup returns, so some of them may attempt to access 
already-freed storage on the stack.
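The problematic pattern looks roughly like this (simplified pseudocode, not 
the exact 2.5.73 source):

```
devfs_lookup(...)
{
    struct devfs_lookup_struct lookup_info;    /* on the stack! */
    init_waitqueue_head(&lookup_info.wait_queue);
    ...
    /* call out to devfsd; concurrent lookups for the same name sleep
       on lookup_info.wait_queue via devfs_d_revalidate_wait() */
    ...
    wake_up(&lookup_info.wait_queue);
    return ...;    /* stack frame is gone, but a woken waiter may still
                      be about to call remove_wait_queue() on it */
}
```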

It is trivial to trigger with an SMP kernel (I have a single-CPU system, if 
it matters) by doing

while true
do
  ls /dev/foo &
done

With spinlock debugging enabled this results in a large number of oopses with 
stack traces similar to

------------[ cut here ]------------
kernel BUG at include/asm/spinlock.h:120!
invalid operand: 0000 [#1]
CPU:    0
EIP:    0060:[<c012004c>]    Tainted: G S
EFLAGS: 00010082
EIP is at remove_wait_queue+0xac/0xc0
eax: 0000000e   ebx: f6617e4c   ecx: 00000000   edx: 00000001
esi: f6747dd0   edi: f6616000   ebp: f6617e10   esp: f6617df0
ds: 007b   es: 007b   ss: 0068
Process ls (pid: 1517, threadinfo=f6616000 task=f6619900)
Stack: c03eb9d5 c011ffa0 00000286 f6617e24 c0443880 f6747dd0 f6616000 f6617e4c 
       f6617e78 c01cb3e6 c04470c0 f6616000 00000246 f6747dcc c1a6f1dc 00000000 
       f6619900 c011d4e0 00000000 00000000 f7d4b73c f663d005 f6759828 00000000
Call Trace:
 [<c011ffa0>] remove_wait_queue+0x0/0xc0
 [<c01cb3e6>] devfs_d_revalidate_wait+0x1d6/0x1f0
 [<c011d4e0>] default_wake_function+0x0/0x30
 [<c011d4e0>] default_wake_function+0x0/0x30
 [<c017201a>] do_lookup+0x5a/0xa0
 [<c017261e>] link_path_walk+0x5be/0xb20
 [<c0148ceb>] kmem_cache_alloc+0x14b/0x190
 [<c01730fe>] __user_walk+0x3e/0x60
 [<c016d13e>] vfs_stat+0x1e/0x60
 [<c0154c5b>] do_brk+0x12b/0x200
 [<c016d7bb>] sys_stat64+0x1b/0x40
 [<c01532e2>] sys_brk+0xf2/0x120
 [<c011a820>] do_page_fault+0x0/0x4c5
 [<c0109919>] sysenter_past_esp+0x52/0x71

Code: 0f 0b 78 00 6c b0 3e c0 e9 72 ff ff ff 8d b4 26 00 00 00 00
 <6>note: ls[1517] exited with preempt_count 1
eip: c011ffa0

Without spinlock debugging the system usually hangs dead, with the reset 
button as the only way out.

I was not able to reproduce it on 2.4 on a single-CPU system; in 2.4, 
devfs_d_revalidate_wait does not attempt to remove itself from the wait 
queue, so it appears to be safe.

The attached patch is against 2.5.73 but applies to 2.5.74 as well. It makes 
the lookup struct be allocated from the heap and adds a reference counter to 
free it when it is no longer needed.

regards

-andrey


Attachment: 2.5.73-devfs_stack_corruption.patch
Description: Text Data
