i.MX8MM Ethernet TX Bandwidth Fluctuations

Frieder Schrempf frieder.schrempf at kontron.de
Wed May 19 01:10:14 PDT 2021


Hi Joakim,

On 19.05.21 09:49, Joakim Zhang wrote:
> 
> Hi Frieder,
> 
>> -----Original Message-----
>> From: Frieder Schrempf <frieder.schrempf at kontron.de>
>> Sent: May 18, 2021 20:55
>> To: Joakim Zhang <qiangqing.zhang at nxp.com>; Dave Taht
>> <dave.taht at gmail.com>
>> Cc: dl-linux-imx <linux-imx at nxp.com>; netdev at vger.kernel.org;
>> linux-arm-kernel at lists.infradead.org
>> Subject: Re: i.MX8MM Ethernet TX Bandwidth Fluctuations
>>
>>
>>
>> On 18.05.21 14:35, Joakim Zhang wrote:
>>>
>>> Hi Dave,
>>>
>>>> -----Original Message-----
>>>> From: Dave Taht <dave.taht at gmail.com>
>>>> Sent: May 17, 2021 20:48
>>>> To: Joakim Zhang <qiangqing.zhang at nxp.com>
>>>> Cc: Frieder Schrempf <frieder.schrempf at kontron.de>; dl-linux-imx
>>>> <linux-imx at nxp.com>; netdev at vger.kernel.org;
>>>> linux-arm-kernel at lists.infradead.org
>>>> Subject: Re: i.MX8MM Ethernet TX Bandwidth Fluctuations
>>>>
>>>> On Mon, May 17, 2021 at 3:25 AM Joakim Zhang
>>>> <qiangqing.zhang at nxp.com>
>>>> wrote:
>>>>>
>>>>>
>>>>> Hi Frieder,
>>>>>
>>>>>> -----Original Message-----
>>>>>> From: Frieder Schrempf <frieder.schrempf at kontron.de>
>>>>>> Sent: May 17, 2021 15:17
>>>>>> To: Joakim Zhang <qiangqing.zhang at nxp.com>; dl-linux-imx
>>>>>> <linux-imx at nxp.com>; netdev at vger.kernel.org;
>>>>>> linux-arm-kernel at lists.infradead.org
>>>>>> Subject: Re: i.MX8MM Ethernet TX Bandwidth Fluctuations
>>>>>>
>>>>>> Hi Joakim,
>>>>>>
>>>>>> On 13.05.21 14:36, Joakim Zhang wrote:
>>>>>>>
>>>>>>> Hi Frieder,
>>>>>>>
>>>>>>> For the NXP release kernel, I tested on i.MX8MQ/MM/MP: I can
>>>>>>> reproduce this on L5.10, but not on L5.4.
>>>>>>> According to your description, you can reproduce this issue on both
>>>>>>> L5.4 and L5.10? So I need to confirm with you.
>>>>>>
>>>>>> Thanks for looking into this. I could reproduce this on 5.4 and
>>>>>> 5.10 but both kernels were official mainline kernels and **not**
>>>>>> from the linux-imx downstream tree.
>>>>> Ok.
>>>>>
>>>>>> Maybe there is some problem in the mainline tree and it got
>>>>>> included in the NXP release kernel starting from L5.10?
>>>>> No, this looks much like a known issue; it should have existed ever
>>>>> since AVB support was added in mainline.
>>>>>
>>>>> The ENET IP does not have _real_ multiple queues per my understanding:
>>>>> queue 0 is for best effort, and queues 1 & 2 are for AVB streams, whose
>>>>> default bandwidth fraction in the driver is 0.5 (i.e. 50 Mbps on a
>>>>> 100 Mbps link and 500 Mbps on a 1 Gbps link). When transmitting
>>>>> packets, the net core selects queues randomly, which causes the TX
>>>>> bandwidth fluctuations. So you can change to a single queue if you
>>>>> care more about TX bandwidth, or you can refer to the NXP internal
>>>>> implementation.
>>>>> e.g.
>>>>> --- a/arch/arm64/boot/dts/freescale/imx8mq.dtsi
>>>>> +++ b/arch/arm64/boot/dts/freescale/imx8mq.dtsi
>>>>> @@ -916,8 +916,8 @@
>>>>>                                          <&clk IMX8MQ_CLK_ENET_PHY_REF>;
>>>>>                                 clock-names = "ipg", "ahb", "ptp",
>>>>>                                               "enet_clk_ref", "enet_out";
>>>>> -                               fsl,num-tx-queues = <3>;
>>>>> -                               fsl,num-rx-queues = <3>;
>>>>> +                               fsl,num-tx-queues = <1>;
>>>>> +                               fsl,num-rx-queues = <1>;
>>>>>                                 status = "disabled";
>>>>>                         };
>>>>>                 };
>>>>>
>>>>> I hope this can help you :)
>>>>
>>>> Patching out the queues is probably not the right thing.
>>>>
>>>> For starters... is there BQL support in this driver? It would be
>>>> helpful to have it on all queues.
>>> There is no BQL support in this driver. BQL may improve throughput
>>> further, but its absence should not be the root cause of the issue
>>> reported here.
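
(Side note in case someone picks this up: wiring up BQL should mostly be
a matter of adding the dynamic queue limit hooks to the TX submit and
completion paths. An untested sketch of where they might go in
fec_main.c follows; the function names match my reading of the driver,
but the exact placement and the counters around the reclaim loop are my
assumption:

	/* in fec_enet_txq_submit_skb(), after queuing the frame: */
	struct netdev_queue *nq = netdev_get_tx_queue(ndev, queue);

	netdev_tx_sent_queue(nq, skb->len);

	/* in fec_enet_tx_queue(), count each reclaimed frame ... */
	pkts_compl++;
	bytes_compl += skb->len;

	/* ... report once after the reclaim loop ... */
	netdev_tx_completed_queue(nq, pkts_compl, bytes_compl);

	/* ... and call netdev_tx_reset_queue(nq) wherever the ring is
	 * torn down or restarted, so the limits start from a clean
	 * state. */

That is orthogonal to the queue selection problem discussed below,
though.)
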
>>>
>>>> Also, if there were a way to present it as two interfaces rather than
>>>> one, that would allow a specific AVB device to be presented.
>>>>
>>>> Or:
>>>>
>>>> Is there a standard means of signalling down the stack via the IP layer
>>>> (a dscp? a setsockopt?) that the AVB queue is requested?
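
(For what it's worth, the closest generic knob I'm aware of is
SO_PRIORITY, which sets skb->priority; a driver's queue selection or an
mqprio qdisc could then map that to a hardware queue. A hypothetical
userspace snippet, untested:

	int prio = 3;	/* class the kernel could map to an AVB queue */

	setsockopt(fd, SOL_SOCKET, SO_PRIORITY, &prio, sizeof(prio));

But something on the kernel side still has to define the mapping from
skb->priority to the queues.)
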
>>>>
>>> AFAIK, AVB is in the scope of VLAN, so we can queue AVB packets into
>>> queues 1 & 2 based on the VLAN ID.
>>
>> I had to look up what AVB even means, but from my current understanding
>> it doesn't seem right that for non-AVB packets the driver picks any of
>> the three queues at random while knowing that queues 1 and 2 have a 50%
>> bandwidth limitation. Shouldn't there be some way to prefer queue 0
>> without requiring the user to set it up, or arbitrarily limiting the
>> number of queues as proposed above?
> 
> Yes, I think we can. I looked into the NXP local implementation; there is
> an ndo_select_queue callback:
> https://source.codeaurora.org/external/imx/linux-imx/tree/drivers/net/ethernet/freescale/fec_main.c?h=lf-5.4.y#n3419
> This is the version for the L5.4 kernel.

Yes, this looks like it could solve the issue. Would you mind preparing a patch to upstream the change in [1]? I would be happy to test (at least the non-AVB case) and review.
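
For reference, my rough understanding of what such a callback could do,
as an untested sketch (the PCP-to-queue mapping below is my assumption
and not necessarily what the linked NXP code implements):

	/* assumes linux/if_vlan.h for vlan_get_tag() and VLAN_PRIO_* */
	static u16 fec_enet_select_queue(struct net_device *ndev,
					 struct sk_buff *skb,
					 struct net_device *sb_dev)
	{
		u16 vlan_tci;

		/* Untagged (best effort) traffic always goes to queue 0,
		 * which has no AVB bandwidth limitation. */
		if (vlan_get_tag(skb, &vlan_tci))
			return 0;

		/* Map AVB stream classes, identified by the VLAN PCP
		 * field, to the shaped queues 1 and 2. */
		switch ((vlan_tci & VLAN_PRIO_MASK) >> VLAN_PRIO_SHIFT) {
		case 2:
			return 1;
		case 3:
			return 2;
		default:
			return 0;
		}
	}

That way untagged traffic would never end up on the rate-limited queues,
which should already fix the fluctuations we are seeing.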

Thanks
Frieder

[1] https://source.codeaurora.org/external/imx/linux-imx/commit?id=8a7fe8f38b7e3b2f9a016dcf4b4e38bb941ac6df


