Hi,
while thinking about how to fully replace the BKL-based socket locking
inside the AF_X25 stack with lock_sock()-based locking, I encountered some questions:
The X.25 stack currently uses the network core's support function
sock_alloc_send_skb() to allocate outgoing buffer space. It seems that
this is not designed for use with lock_sock()-based socket locking.
All other connection-oriented protocols which use lock_sock()
(currently only tcp and decnet) seem to implement their own private equivalent
of sock_alloc_send_skb() which releases the lock while waiting for write
buffer space. Should I do the same for X.25, or would it make sense to
rework sock_alloc_send_skb() such that the socket lock is released while
waiting for write buffer space?
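To make the question concrete, here is a minimal userspace sketch (pthreads, not kernel code) of the pattern I mean: the waiter atomically drops the per-socket lock while sleeping for write buffer space and retakes it before returning. All names (fake_sock, wmem_free, wait_for_wmem) are made up for illustration and are not the actual tcp helper:

```c
#include <assert.h>
#include <pthread.h>
#include <stdbool.h>

/*
 * Userspace sketch of "release the socket lock while waiting for
 * write buffer space".  The mutex stands in for lock_sock()/
 * release_sock(); the condvar stands in for the write-space wakeup.
 */
struct fake_sock {
	pthread_mutex_t lock;      /* stands in for the socket lock */
	pthread_cond_t  wmem_cond; /* signalled when write space appears */
	int             wmem_free; /* bytes of free write buffer space */
};

/*
 * Called with sk->lock held.  While there is not enough write space,
 * sleep; pthread_cond_wait() atomically drops the lock, so other
 * contexts (e.g. backlog processing) can run, and reacquires it
 * before returning.  On success the space is reserved.
 */
static bool wait_for_wmem(struct fake_sock *sk, int size)
{
	while (sk->wmem_free < size)
		pthread_cond_wait(&sk->wmem_cond, &sk->lock);
	sk->wmem_free -= size;
	return true;
}

/* Called from the "rx"/completion side: return space and wake waiters. */
static void free_wmem(struct fake_sock *sk, int size)
{
	pthread_mutex_lock(&sk->lock);
	sk->wmem_free += size;
	pthread_cond_broadcast(&sk->wmem_cond);
	pthread_mutex_unlock(&sk->lock);
}
```

The point of the sketch is only the lock discipline: the allocation path never sleeps with the lock held.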
With the current BKL-based locking, the kernel lock is released whenever
the process sleeps waiting for GFP_KERNEL memory. But with lock_sock(),
the socket remains locked in such situations. Couldn't that cause
problems during low-memory conditions? The process could be blocked
-- with the lock held -- for a rather long time.
While the socket is locked, all incoming packets are queued on
sk->backlog without any precautions (neither size limits nor
flow control affect sk->backlog queuing). It seems that feeding
lots of incoming packets while the socket is locked could eat up
lots of kernel memory and might even be exploited for a denial-of-service attack.
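One mitigation I could imagine (again a userspace sketch, not kernel code, with made-up names) is to account the bytes queued on the backlog and drop packets once a limit -- e.g. derived from the receive buffer size -- is exceeded:

```c
#include <assert.h>
#include <stdbool.h>

/*
 * Sketch of a bounded backlog: while the socket is owned, packets are
 * only queued as long as the accounted byte total stays under a limit;
 * beyond that they are dropped instead of consuming unbounded memory.
 */
struct fake_backlog {
	int len;   /* bytes currently queued while the socket is locked */
	int limit; /* bound, e.g. tied to the socket receive buffer size */
};

/* Returns true if the packet was queued, false if it must be dropped. */
static bool backlog_try_queue(struct fake_backlog *bl, int pkt_len)
{
	if (bl->len + pkt_len > bl->limit)
		return false; /* would exceed the bound: drop the packet */
	bl->len += pkt_len;
	return true;
}
```

Whether dropping is acceptable for X.25 (which expects link-layer reliability) is exactly the kind of question I'm unsure about.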
Henner