Re: [Make-wifi-fast] fq_codel_drop vs a udp flood

dpreed at
Sun May 1 07:47:23 PDT 2016

Maybe I missed something, but why is it important to optimize for a UDP flood?

A general observation of control theory is that there is almost always an adversarial strategy that will destroy any control regime. Sometimes one has to invoke an "oracle" that knows the state of the control system at all times to get there.

So a handwave is that *there is always a DDoS that will work* no matter how clever you are.

And the corollary is illustrated by the TSA: if you can't anticipate every possible attack, it is not clearly better to congest the whole system at all times with controls that can't possibly cover them all - i.e. Security Theater. We don't want "anti-DDoS theater", I don't think.

There is an alternative mechanism that has been effective at dealing with DDoS in general: track the disruption back to the source and kill it. (This is what the end-to-end argument would say: don't try to solve a fundamentally end-to-end problem like DDoS solely in the network [switches], since you have to solve it at the edges anyway. Instead, include in the network things that will help you solve it at the edges - traceback tools that work fast, and targeted shutdown of sources.)

I don't happen to know of a "normal" application that benefits from UDP flooding - not even "gossip protocols" do that!

In context, then, let's not focus on UDP flood performance (or any other "extreme case" that just seems fun to work on in a research paper because it is easy to state compared to the real world) too much.

I know that the reaction to this post will be to read it and pretty much carry on as usual, focusing on UDP floods. But I have to try. There are so many more important issues - understanding how to better use congestion signalling in gossip protocols, gaming, or live AV conferencing, to name some related examples - which are end-to-end problems for which queue management and congestion signalling are truly crucial.

On Sunday, May 1, 2016 1:23am, "Dave Taht" <dave.taht at> said:

> On Sat, Apr 30, 2016 at 10:08 PM, Ben Greear <greearb at> wrote:
>> On 04/30/2016 08:41 PM, Dave Taht wrote:
>>> There were a few things on this thread that went by, and I wasn't on
>>> the ath10k list
>>> (
>>> first up, udp flood...
>>>>>> From: ath10k <ath10k-boun... at> on behalf of Roman
>>>>>> Yeryomin < at>
>>>>>> Sent: Friday, April 8, 2016 8:14 PM
>>>>>> To: ath10k at
>>>>>> Subject: ath10k performance, master branch from 20160407
>>>>>> Hello!
>>>>>> I've seen that performance patches were committed so I've decided to give it
>>>>>> a try (using 4.1 kernel and backports).
>>>>>> The results are quite disappointing: TCP download (client pov) dropped
>>>>>> from 750Mbps to ~550 and UDP shows completely weird behaviour - if
>>>>>> generating 900Mbps it gives 30Mbps max, if generating 300Mbps it gives
>>>>>> 250Mbps, before (latest official backports release from January) I was
>>>>>> able to get 900Mbps.
>>>>>> Hardware is basically ap152 + qca988x 3x3.
>>>>>> When running perf top I see that fq_codel_drop eats a lot of cpu.
>>>>>> Here is the output when running iperf3 UDP test:
>>>>>>      45.78%  [kernel]       [k] fq_codel_drop
>>>>>>       3.05%  [kernel]       [k] ag71xx_poll
>>>>>>       2.18%  [kernel]       [k] skb_release_data
>>>>>>       2.01%  [kernel]       [k] r4k_dma_cache_inv
>>> The udp flood behavior is not "weird". The test is wrong: it fills
>>> the local queue with an offered load that dramatically exceeds the
>>> bandwidth of the link draining it.
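To put rough numbers on Dave's point (an illustration from the figures quoted above, not a measurement): if the sender offers 900Mbps into a link that drains roughly 250Mbps, a finite queue with no backpressure has no choice but to discard the difference, so the drop routine ends up handling the majority of datagrams. A minimal sketch:

```python
def flood_drop_fraction(offered_mbps: float, link_mbps: float) -> float:
    """Steady-state fraction of offered traffic a finite queue must
    drop when the sender ignores backpressure.  Illustrative model:
    whatever the link cannot carry has to be discarded locally."""
    if offered_mbps <= link_mbps:
        return 0.0
    return 1.0 - link_mbps / offered_mbps

# With the numbers from the report: ~72% of datagrams are dropped
# before ever reaching the air, which is consistent with
# fq_codel_drop dominating the perf profile.
print(flood_drop_fraction(900, 250))
```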
>> It would be nice if you could provide backpressure so that you could
>> simply select on the udp socket and use that to know when you can send
>> more frames??
> The qdisc version returns  NET_XMIT_CN to the upper layers of the
> stack in the case
> where the dropped packet's flow = the ingress packet's flow, but that
> is after the
> exhaustive search...
> I don't know what effect (if any) that had on udp sockets. Hmm... will
> look. Eric would "just know".
> That might provide more backpressure in the local scenario. SO_SNDBUF
> should interact with this stuff in some sane way...
> ... but over the wire from a test driver box elsewhere, tho, aside
> from ethernet flow control itself, where enabled, no.
> ... but in that case you have a much lower inbound/outbound
> performance disparity in the general case to start with... which can
> still be quite high...
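Something close to what Ben asks for can already be approximated from userspace: a non-blocking UDP socket with a modest SO_SNDBUF will fail sends with EWOULDBLOCK once the local send buffer fills, and select() can then wait for it to drain. A hedged sketch (the destination address and buffer size are arbitrary; how much qdisc-level backpressure actually propagates into the socket buffer varies by kernel version and driver, as the thread notes):

```python
import select
import socket

def send_with_backpressure(payloads, addr=("127.0.0.1", 9), timeout=1.0):
    """Send UDP datagrams, waiting for socket writability instead of
    blindly flooding.  Relies on SO_SNDBUF accounting: when the local
    send buffer is full, sendto() raises BlockingIOError and we block
    in select() until the kernel drains some of it."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, 32 * 1024)
    sock.setblocking(False)
    sent = 0
    try:
        for payload in payloads:
            while True:
                try:
                    sock.sendto(payload, addr)
                    sent += 1
                    break
                except BlockingIOError:
                    # Local queue full: wait for the socket to become
                    # writable rather than spinning and forcing drops.
                    _, writable, _ = select.select([], [sock], [], timeout)
                    if not writable:
                        return sent  # give up after the timeout
    finally:
        sock.close()
    return sent
```

This only governs the locally generated load, of course; as Dave says, traffic arriving over the wire from another box sees no such backpressure short of Ethernet flow control.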
>> Any idea how that works with codel?
> Beautifully.
> For responsive TCP flows. It immediately reduces the window without an RTT.
>> Thanks,
>> Ben
>> --
>> Ben Greear <greearb at>
>> Candela Technologies Inc
> --
> Dave Täht
> Let's go make home routers and wifi faster! With better software!
> _______________________________________________
> Make-wifi-fast mailing list
> Make-wifi-fast at
