[Make-wifi-fast] fq_codel_drop vs a udp flood

Roman Yeryomin leroi.lists at gmail.com
Thu May 5 06:55:45 PDT 2016


On 2 May 2016 at 21:40, Dave Taht <dave.taht at gmail.com> wrote:
> On Mon, May 2, 2016 at 7:03 AM, Roman Yeryomin <leroi.lists at gmail.com> wrote:
>> On 1 May 2016 at 17:47,  <dpreed at reed.com> wrote:
>>> Maybe I missed something, but why is it important to optimize for a UDP flood?
>>
>> We don't need to optimize for UDP, but UDP is used a lot in general,
>> e.g. by torrents to achieve higher throughput.
>
> Torrents use uTP congestion control and won't hit this function at
> all. And eric just made fq_codel_drop more efficient for tests that
> do.
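>
> (a from-memory sketch of the shape of Eric's change, not the actual
> patch - in the context of fq_codel_drop(), the idea is to stop
> rescanning every flow for each packet dropped, and instead scan once
> and drop a batch:)
>
>     unsigned int maxbacklog = 0, idx = 0, i, count = 0, len = 0;
>
>     /* one O(flows) scan to find the fattest flow... */
>     for (i = 0; i < q->flows_cnt; i++) {
>             if (q->backlogs[i] > maxbacklog) {
>                     maxbacklog = q->backlogs[i];
>                     idx = i;
>             }
>     }
>
>     /* ...then drop up to half its backlog in one batch, instead of
>      * re-running the whole scan for every single packet dropped */
>     flow = &q->flows[idx];
>     threshold = maxbacklog >> 1;
>     do {
>             skb = dequeue_head(flow);
>             len += qdisc_pkt_len(skb);
>             kfree_skb(skb);
>     } while (++count < max_packets && len < threshold);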
>
> There are potentially zillions of other issues with ampdus, txop
> usage, aggregate "packing", etc. that can also affect this and other
> protocols.
>
>> And, again, in this case TCP is broken too (750Mbps down to 550), so
>> it's not just, as Dave says, that the UDP test is broken; fq_codel is
>> simply too hungry for CPU.
>
> "fq_codel_drop" was too hungry for cpu. fixed. thx eric. :)
>
> I've never seen ath10k tcp throughput in the real world (e.g not wired
> up, over the air) even close to 750 under test on the ath10k (I've
> seen 300, and I'm getting some better gear up this week)... and
> everybody tests wifi differently.

Perhaps you didn't have a 3x3 client and AP?

> (for the record, what was your iperf tcp test line?). More people
> testing differently = good.

iperf3 -c <server_ip> -t600

> Did fq_codel_drop show up in the perf trace for the tcp test?

Yes, but it was less hungry, somewhere around 15-20% if I remember correctly.

> (More likely you would have seen timestamping rise significantly for
> the tcp test, as well as enqueue time)
>
> That said, more people testing the same ways, good too.
>
> I'd love it if you could re-run your test via flent, rather than
> iperf, and look at the tcp sawtooth or lack thereof, and the overall
> curve of the throughput, before and after this set of commits.
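>
> (something like this, assuming netperf is running on the server side
> - a sketch, check the flent docs for details:)
>
>     flent tcp_download -H <server_ip> -l 600 -o tcp_download.png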

I guess I should try flent, but the performance drop was too evident
even with iperf.

> Flent can be made to run on OS X via MacPorts or brew (it's much
> easier to get running on Linux). And try to tag along on
> observing/fixing low wifi rate behavior?
>
> This was the more recent dql vs wifi test:
>
> http://blog.cerowrt.org/post/dql_on_wifi_2/
>
> and series.
>
>>> A general observation of control theory is that there is almost always an adversarial strategy that will destroy any control regime. Sometimes one has to invoke an "oracle" that knows the state of the control system at all times to get there.
>>>
>>> So a handwave is that *there is always a DDoS that will work* no matter how clever you are.
>>>
>>> And the corollary is illustrated by the TSA. If you can't anticipate all possible attacks, it is not clearly better to just congest the whole system at all times with controls that can't possibly solve all possible attacks - i.e. Security Theater. We don't want "anti-DDoS theater" I don't think.
>>>
>>> There is an alternative mechanism that has been effective at dealing with DDoS in general - track the disruption back to the source and kill it.  (this is what the end-to-end argument would be: don't try to solve a fundamentally end-to-end problem, DDoS, solely in the network [switches], since you have to solve it at the edges anyway. Just include in the network things that will help you solve it at the edges - traceback tools that work fast and targeted shutdown of sources).
>>>
>>> I don't happen to know of a "normal" application that benefits from UDP flooding - not even "gossip protocols" do that!
>>>
>>> In context, then, let's not focus on UDP flood performance (or any other "extreme case" that just seems fun to work on in a research paper because it is easy to state compared to the real world) too much.
>>>
>>> I know that the reaction to this post will be to read it and pretty much go on as usual focusing on UDP floods. But I have to try. There are so many more important issues (like understanding how to use congestion signalling in gossip protocols, gaming, or live AV conferencing better, as some related examples, which are end-to-end problems for which queue management and congestion signalling are truly crucial).
>>>
>>>
>>>
>>> On Sunday, May 1, 2016 1:23am, "Dave Taht" <dave.taht at gmail.com> said:
>>>
>>>> On Sat, Apr 30, 2016 at 10:08 PM, Ben Greear <greearb at candelatech.com> wrote:
>>>>>
>>>>>
>>>>> On 04/30/2016 08:41 PM, Dave Taht wrote:
>>>>>>
>>>>>> There were a few things on this thread that went by, and I wasn't on
>>>>>> the ath10k list
>>>>>>
>>>>>> (https://www.mail-archive.com/ath10k@lists.infradead.org/msg04461.html)
>>>>>>
>>>>>> first up, udp flood...
>>>>>>
>>>>>>>>> From: ath10k <ath10k-boun... at lists.infradead.org> on behalf of Roman
>>>>>>>>> Yeryomin <leroi.li... at gmail.com>
>>>>>>>>> Sent: Friday, April 8, 2016 8:14 PM
>>>>>>>>> To: ath10k at lists.infradead.org
>>>>>>>>> Subject: ath10k performance, master branch from 20160407
>>>>>>>>>
>>>>>>>>> Hello!
>>>>>>>>>
>>>>>>>>> I've seen that performance patches were committed, so I've decided
>>>>>>>>> to give them a try (using the 4.1 kernel and backports).
>>>>>>>>> The results are quite disappointing: TCP download (client pov)
>>>>>>>>> dropped from 750Mbps to ~550, and UDP shows completely weird
>>>>>>>>> behaviour - if generating 900Mbps it gives 30Mbps max, if generating
>>>>>>>>> 300Mbps it gives 250Mbps; before (the latest official backports
>>>>>>>>> release, from January) I was able to get 900Mbps.
>>>>>>>>> Hardware is basically ap152 + qca988x 3x3.
>>>>>>>>> When running perf top I see that fq_codel_drop eats a lot of CPU.
>>>>>>>>> Here is the output when running an iperf3 UDP test:
>>>>>>>>>
>>>>>>>>>      45.78%  [kernel]       [k] fq_codel_drop
>>>>>>>>>       3.05%  [kernel]       [k] ag71xx_poll
>>>>>>>>>       2.18%  [kernel]       [k] skb_release_data
>>>>>>>>>       2.01%  [kernel]       [k] r4k_dma_cache_inv
>>>>>>
>>>>>>
>>>>>> The UDP flood behavior is not "weird". The test is wrong: it fills
>>>>>> the local queue at a rate that dramatically exceeds the bandwidth of
>>>>>> the link.
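>>>>>>
>>>>>> (a sane version of that test paces the offered load at or below
>>>>>> what the link can actually carry, e.g. something like:)
>>>>>>
>>>>>>     iperf3 -c <server_ip> -u -b 300M -t 600
>>>>>>
>>>>>> (rather than blasting 900Mbps at a link that can deliver far less.)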
>>>>>
>>>>>
>>>>> It would be nice if you could provide backpressure so that you could
>>>>> simply select on the UDP socket and use that to know when you can
>>>>> send more frames?
>>>>
>>>> The qdisc version returns NET_XMIT_CN to the upper layers of the
>>>> stack in the case where the dropped packet's flow = the ingress
>>>> packet's flow, but that is after the exhaustive search...
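>>>>
>>>> (from memory, the overlimit path in net/sched/sch_fq_codel.c looks
>>>> roughly like this - a sketch, not checked against the current tree:)
>>>>
>>>>     if (++sch->q.qlen <= sch->limit)
>>>>             return NET_XMIT_SUCCESS;
>>>>
>>>>     q->drop_overlimit++;
>>>>     /* fq_codel_drop() walks every flow to find the fattest one */
>>>>     if (fq_codel_drop(sch) == idx)
>>>>             /* we dropped from the arriving packet's own flow:
>>>>              * signal congestion to the local stack */
>>>>             return NET_XMIT_CN;
>>>>
>>>>     qdisc_tree_decrease_qlen(sch, 1);
>>>>     return NET_XMIT_SUCCESS;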
>>>>
>>>> I don't know what effect (if any) that had on udp sockets. Hmm... will
>>>> look. Eric would "just know".
>>>>
>>>> That might provide more backpressure in the local scenario. SO_SNDBUF
>>>> should interact with this stuff in some sane way...
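>>>>
>>>> (the kind of loop Ben describes might look like this in userspace -
>>>> a sketch, with buf/len/dst assumed set up elsewhere; whether the
>>>> select() wakeups actually line up with qdisc drops is the question:)
>>>>
>>>>     int fd = socket(AF_INET, SOCK_DGRAM, 0);
>>>>     int sndbuf = 16384;  /* small send buffer => earlier backpressure */
>>>>
>>>>     setsockopt(fd, SOL_SOCKET, SO_SNDBUF, &sndbuf, sizeof(sndbuf));
>>>>     for (;;) {
>>>>             fd_set wfds;
>>>>
>>>>             FD_ZERO(&wfds);
>>>>             FD_SET(fd, &wfds);
>>>>             /* block until the socket will take another datagram */
>>>>             select(fd + 1, NULL, &wfds, NULL, NULL);
>>>>             sendto(fd, buf, len, 0,
>>>>                    (struct sockaddr *)&dst, sizeof(dst));
>>>>     }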
>>>>
>>>> ... but over the wire from a test driver box elsewhere, tho, aside
>>>> from ethernet flow control itself, where enabled, no.
>>>>
>>>> ... but in that case you have a much lower inbound/outbound
>>>> performance disparity in the general case to start with... which can
>>>> still be quite high...
>>>>
>>>>>
>>>>> Any idea how that works with codel?
>>>>
>>>> Beautifully.
>>>>
>>>> For responsive TCP flows, it immediately reduces the window without
>>>> waiting a full RTT.
>>>>
>>>>> Thanks,
>>>>> Ben
>>>>>
>>>>> --
>>>>> Ben Greear <greearb at candelatech.com>
>>>>> Candela Technologies Inc  http://www.candelatech.com
>>>>
>>>>
>>>>
>>>> --
>>>> Dave Täht
>>>> Let's go make home routers and wifi faster! With better software!
>>>> http://blog.cerowrt.org
>>>> _______________________________________________
>>>> Make-wifi-fast mailing list
>>>> Make-wifi-fast at lists.bufferbloat.net
>>>> https://lists.bufferbloat.net/listinfo/make-wifi-fast
>>>>
>>>
>>>
>>> _______________________________________________
>>> Make-wifi-fast mailing list
>>> Make-wifi-fast at lists.bufferbloat.net
>>> https://lists.bufferbloat.net/listinfo/make-wifi-fast
>
>
>
> --
> Dave Täht
> Let's go make home routers and wifi faster! With better software!
> http://blog.cerowrt.org


