[Codel] fq_codel_drop vs a udp flood

moeller0 moeller0 at gmx.de
Fri May 6 01:41:53 PDT 2016


Hi All,

> On May 5, 2016, at 21:41 , Dave Taht <dave.taht at gmail.com> wrote:
> 
> On Thu, May 5, 2016 at 12:23 PM, Eric Dumazet <eric.dumazet at gmail.com> wrote:
>> On Thu, 2016-05-05 at 19:25 +0300, Roman Yeryomin wrote:
>>> On 5 May 2016 at 19:12, Eric Dumazet <eric.dumazet at gmail.com> wrote:
>>>> […]
>> 
>> fq_codel has a default of 10240 packets and 1024 buckets.
>> 
>> http://lxr.free-electrons.com/source/net/sched/sch_fq_codel.c#L413
>> 
>> If someone changed that in the linux variant you use, he probably should
>> explain the rationale.
> 
> I guess that would be me.

	IIRC, I was making a lot of noise back then as well.
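
	For reference, the defaults Eric quotes are set in fq_codel_init() in the linked sch_fq_codel.c. Paraphrased from memory (so double-check against the tree you actually run), the relevant part boils down to roughly:

	/* Approximate paraphrase of the mainline fq_codel_init() defaults;
	 * not a verbatim copy of sch_fq_codel.c, check your own tree. */
	static int fq_codel_init(struct Qdisc *sch, struct nlattr *opt)
	{
		struct fq_codel_sched_data *q = qdisc_priv(sch);

		sch->limit   = 10 * 1024;	/* 10240 packets shared by all flows */
		q->flows_cnt = 1024;		/* 1024 hash buckets ("flows") */
		q->quantum   = psched_mtu(qdisc_dev(sch));
		/* ... codel parameters, flow table allocation, etc. ... */
		return 0;
	}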

> 
> Openwrt has long shipped with the fq_codel default outer queue limit
> being lower than the default (e.g. 1024). Think: itty bitty 32MB
> routers. 10240 packets can = boom, particularly while there were 4
> fq_codel instances per wifi interface (and people in the habit of
> creating 2 or more wifi interfaces).

	In my case I could force an OOM reboot of my 64MB router with a “simple” unidirectional UDP flood using randomized port numbers; at the 10240-packet limit the backlog was eating approximately 20MB of the device’s 64MB of RAM, which made it go “boom”. I tried to convince people that bad queueing is not the most important concern under those conditions; staying up is rather more important…
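
	To put rough numbers on that, a back-of-the-envelope sketch (the ~2KB per-packet figure is an assumed skb truesize for small UDP packets, not a measurement; the real value depends on driver and platform):

	#include <stdio.h>

	/* Worst-case backlog memory at the default fq_codel limit, assuming
	 * roughly 2 KB of true skb memory (payload + struct sk_buff +
	 * allocator overhead) per queued small UDP packet. */
	int main(void)
	{
		const unsigned long limit_pkts = 10240; /* default fq_codel limit      */
		const unsigned long bytes_est  = 2048;  /* assumed truesize per packet */
		const unsigned long ram_mb     = 64;

		unsigned long worst_mb = (limit_pkts * bytes_est) >> 20;
		printf("worst-case backlog: ~%lu MB of %lu MB RAM\n", worst_mb, ram_mb);
		return 0;
	}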


> 
> back then: I viewed the probability of flooding all 1024 queues as low
> and thus the queue depth would be sufficient for any given set of
> flows to do well. (and long ago we gave codel a probability of working
> on all queues). And did not do enough udp flood testing. :(

	I would argue that the main goal for behaviour under attack should be (IMHO) “staying alive” rather than suffering unscheduled OOM reboots/crashes. Keeping the lights on, so to speak, should be the first priority, followed by trying to still maintain fairness guarantees.
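
	To illustrate why a flood with randomized ports does end up filling essentially all 1024 queues (a toy model with uniformly random flow hashes; the real hash and its perturbation differ):

	#include <stdio.h>
	#include <math.h>

	/* Expected number of the 1024 fq_codel buckets touched by k flood
	 * packets whose flow hashes are uniformly random.  Compile with -lm. */
	int main(void)
	{
		const double buckets = 1024.0;

		for (unsigned long k = 1024; k <= 16384; k *= 2) {
			double hit = buckets * (1.0 - pow(1.0 - 1.0 / buckets, (double)k));
			printf("%6lu flood packets -> ~%.0f of 1024 buckets occupied\n", k, hit);
		}
		return 0;
	}

	Already at a few thousand packets essentially every bucket holds attack traffic, so the shared 10240-packet limit, not per-flow isolation, is what ends up bounding memory.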
> 
> Totally not the right answer, I know. And the problem is even worse
> now, with 128MB arm boxes like the armada 385 (linksys 1200ac, turris
> omnia) using software GRO to be bulking up 64k packets at gigE and
> trying to ship them to an isp at 5mbit, or over wifi at some rate
> lower than that.
> 
> cake switched to byte, rather than packet, accounting, for these
> reasons, and we're still trying various methods to peel apart
> superpackets at some load level efficiently.

	Speaking out of total ignorance, I ask: why not charge GRO/GSO super-packets against the packet limit by their number of segments? Counting a 64KB aggregate as equivalent to a 64B packet probably is the right thing if one tries to account for the work the OS needs to perform to figure out what to do with the packet, but for limiting memory consumption it introduces an impressive level of uncertainty (two orders of magnitude).
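
	A toy userspace sketch of that accounting idea (the 1500-byte segment size and all names here are illustrative assumptions, not kernel code): charge a super-packet once per MTU-sized segment rather than once per skb, so a 64KB aggregate and a 64B datagram no longer cost the same against the limit.

	#include <stdio.h>

	struct pkt {
		unsigned int len;	/* wire bytes carried by this (super)packet */
	};

	/* Cost of a packet against a packet-count limit, charged per segment. */
	static unsigned int limit_cost(const struct pkt *p, unsigned int seg_size)
	{
		unsigned int segs = (p->len + seg_size - 1) / seg_size;	/* round up */
		return segs ? segs : 1;
	}

	int main(void)
	{
		struct pkt tiny  = { .len = 64 };
		struct pkt jumbo = { .len = 65536 };

		printf("64 B datagram costs %u slot(s)\n", limit_cost(&tiny, 1500));
		printf("64 KB GRO aggregate costs %u slots\n", limit_cost(&jumbo, 1500));
		return 0;
	}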


Best Regards
	Sebastian

> 
> And routers are tending to ship with a lot more memory these days,
> overall. We are discussing changing the sqm system to dynamically size
> the packet limit by overall memory limits here, for example:
> https://github.com/tohojo/sqm-scripts/issues/42
> 
> AND: As sorta now implemented in the mac80211 fq_codel code, it's per
> radio, rather than per interface (or was, when I last thought about
> it), which is *vastly saner* than four fq_codel instances for each
> SSID.
> 
>> 
>> 
>> 
> 
> 
> 
> -- 
> Dave Täht
> Let's go make home routers and wifi faster! With better software!
> http://blog.cerowrt.org
> _______________________________________________
> Codel mailing list
> Codel at lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/codel



