
Re: route cache DoS testing and softirqs

To: Robert Olsson <Robert.Olsson@xxxxxxxxxxx>
Subject: Re: route cache DoS testing and softirqs
From: Dipankar Sarma <dipankar@xxxxxxxxxx>
Date: Thu, 1 Apr 2004 02:07:50 +0530
Cc: Andrea Arcangeli <andrea@xxxxxxx>, "David S. Miller" <davem@xxxxxxxxxx>, kuznet@xxxxxxxxxxxxx, linux-kernel@xxxxxxxxxxxxxxx, netdev@xxxxxxxxxxx, paulmck@xxxxxxxxxx, akpm@xxxxxxxx
In-reply-to: <16491.4593.718724.277551@xxxxxxxxxxxx>
References: <20040329222926.GF3808@xxxxxxxxxxxxxxxxx> <200403302005.AAA00466@xxxxxxxxxxxxxxx> <20040330211450.GI3808@xxxxxxxxxxxxxxxxx> <20040330133000.098761e2.davem@xxxxxxxxxx> <20040330213742.GL3808@xxxxxxxxxxxxxxxxx> <20040331171023.GA4543@xxxxxxxxxx> <16491.4593.718724.277551@xxxxxxxxxxxx>
Reply-to: dipankar@xxxxxxxxxx
Sender: netdev-bounce@xxxxxxxxxxx
User-agent: Mutt/1.4.1i
On Wed, Mar 31, 2004 at 08:46:09PM +0200, Robert Olsson wrote:
> Before run
> total    dropped  tsquz    throttl  bh_enbl  ksoftird irqexit  other   
> 00000000 00000000 00000000 00000000 000000e8 0000017e 00030411 00000000
> 00000000 00000000 00000000 00000000 000000ae 00000277 00030349 00000000
> After DoS (See description from previous mail)
> total    dropped  tsquz    throttl  bh_enbl  ksoftird irqexit  other    
> 00164c55 00000000 000021de 00000000 000000fc 0000229f 0003443c 00000000
> 001695e7 00000000 0000224d 00000000 00000162 0000236f 000342f7 00000000
> So the major part of softirq's are run from irqexit and therefore out of 
> scheduler control. This even with RX polling (eth0, eth2). We still have 
> some TX interrupts plus timer interrupts, now at 1000Hz, which probably 
> reduces the number of softirq's that ksoftirqd runs.

So, NAPI or not, we get userland stalls due to packet flooding.

Looking at some of the old patches we discussed privately, it seems
this is what was done earlier -

1. Use rcu-softirq.patch which provides call_rcu_bh() for softirqs

2. Limit non-ksoftirqd softirqs - measure the userland stall (using
   an API, rcu_grace_period(cpu)), and if it is too long, expire
   the timeslice of the current process and start sending everything to
   ksoftirqd.

By reducing the softirq time at the back of a hardirq or local_bh_enable(),
we should be able to get a bit more fairness. I am working on the
patches and will test and publish them later.

