i.MX8MM Ethernet TX Bandwidth Fluctuations

Joakim Zhang qiangqing.zhang at nxp.com
Wed May 19 03:47:18 PDT 2021


> -----Original Message-----
> From: Frieder Schrempf <frieder.schrempf at kontron.de>
> Sent: May 19, 2021 18:12
> To: Joakim Zhang <qiangqing.zhang at nxp.com>; Dave Taht
> <dave.taht at gmail.com>
> Cc: dl-linux-imx <linux-imx at nxp.com>; netdev at vger.kernel.org;
> linux-arm-kernel at lists.infradead.org
> Subject: Re: i.MX8MM Ethernet TX Bandwidth Fluctuations
> 
> On 19.05.21 10:40, Joakim Zhang wrote:
> >
> > Hi Frieder,
> >
> >> -----Original Message-----
> >> From: Frieder Schrempf <frieder.schrempf at kontron.de>
> >> Sent: May 19, 2021 16:10
> >> To: Joakim Zhang <qiangqing.zhang at nxp.com>; Dave Taht
> >> <dave.taht at gmail.com>
> >> Cc: dl-linux-imx <linux-imx at nxp.com>; netdev at vger.kernel.org;
> >> linux-arm-kernel at lists.infradead.org
> >> Subject: Re: i.MX8MM Ethernet TX Bandwidth Fluctuations
> >>
> >> Hi Joakim,
> >>
> >> On 19.05.21 09:49, Joakim Zhang wrote:
> >>>
> >>> Hi Frieder,
> >>>
> >>>> -----Original Message-----
> >>>> From: Frieder Schrempf <frieder.schrempf at kontron.de>
> >>>> Sent: May 18, 2021 20:55
> >>>> To: Joakim Zhang <qiangqing.zhang at nxp.com>; Dave Taht
> >>>> <dave.taht at gmail.com>
> >>>> Cc: dl-linux-imx <linux-imx at nxp.com>; netdev at vger.kernel.org;
> >>>> linux-arm-kernel at lists.infradead.org
> >>>> Subject: Re: i.MX8MM Ethernet TX Bandwidth Fluctuations
> >>>>
> >>>>
> >>>>
> >>>> On 18.05.21 14:35, Joakim Zhang wrote:
> >>>>>
> >>>>> Hi Dave,
> >>>>>
> >>>>>> -----Original Message-----
> >>>>>> From: Dave Taht <dave.taht at gmail.com>
> >>>>>> Sent: May 17, 2021 20:48
> >>>>>> To: Joakim Zhang <qiangqing.zhang at nxp.com>
> >>>>>> Cc: Frieder Schrempf <frieder.schrempf at kontron.de>; dl-linux-imx
> >>>>>> <linux-imx at nxp.com>; netdev at vger.kernel.org;
> >>>>>> linux-arm-kernel at lists.infradead.org
> >>>>>> Subject: Re: i.MX8MM Ethernet TX Bandwidth Fluctuations
> >>>>>>
> >>>>>> On Mon, May 17, 2021 at 3:25 AM Joakim Zhang
> >>>>>> <qiangqing.zhang at nxp.com>
> >>>>>> wrote:
> >>>>>>>
> >>>>>>>
> >>>>>>> Hi Frieder,
> >>>>>>>
> >>>>>>>> -----Original Message-----
> >>>>>>>> From: Frieder Schrempf <frieder.schrempf at kontron.de>
> >>>>>>>> Sent: May 17, 2021 15:17
> >>>>>>>> To: Joakim Zhang <qiangqing.zhang at nxp.com>; dl-linux-imx
> >>>>>>>> <linux-imx at nxp.com>; netdev at vger.kernel.org;
> >>>>>>>> linux-arm-kernel at lists.infradead.org
> >>>>>>>> Subject: Re: i.MX8MM Ethernet TX Bandwidth Fluctuations
> >>>>>>>>
> >>>>>>>> Hi Joakim,
> >>>>>>>>
> >>>>>>>> On 13.05.21 14:36, Joakim Zhang wrote:
> >>>>>>>>>
> >>>>>>>>> Hi Frieder,
> >>>>>>>>>
> >>>>>>>>> For the NXP release kernel, I tested on i.MX8MQ/MM/MP: I can
> >>>>>>>>> reproduce the issue on L5.10, but not on L5.4.
> >>>>>>>>> According to your description, you can reproduce this issue on
> >>>>>>>>> both L5.4 and L5.10? So I need to confirm with you.
> >>>>>>>>
> >>>>>>>> Thanks for looking into this. I could reproduce this on 5.4 and
> >>>>>>>> 5.10 but both kernels were official mainline kernels and
> >>>>>>>> **not** from the linux-imx downstream tree.
> >>>>>>> Ok.
> >>>>>>>
> >>>>>>>> Maybe there is some problem in the mainline tree and it got
> >>>>>>>> included in the NXP release kernel starting from L5.10?
> >>>>>>> No, this looks much like a known issue; it should have existed
> >>>>>>> ever since AVB support was added in mainline.
> >>>>>>>
> >>>>>>> The ENET IP does not implement _real_ multiple queues, per my
> >>>>>>> understanding: queue 0 is for best effort, and queues 1 & 2 are
> >>>>>>> for AVB streams, whose default bandwidth fraction is 0.5 in the
> >>>>>>> driver (i.e. 50 Mbit/s on a 100 Mbit/s link and 500 Mbit/s on a
> >>>>>>> 1 Gbit/s link). When transmitting packets, the net core selects
> >>>>>>> queues randomly, which causes the TX bandwidth fluctuations. So
> >>>>>>> you can switch to a single queue if you care more about TX
> >>>>>>> bandwidth, or you can refer to the NXP internal implementation.
> >>>>>>> e.g.
> >>>>>>> --- a/arch/arm64/boot/dts/freescale/imx8mq.dtsi
> >>>>>>> +++ b/arch/arm64/boot/dts/freescale/imx8mq.dtsi
> >>>>>>> @@ -916,8 +916,8 @@
> >>>>>>>                                          <&clk IMX8MQ_CLK_ENET_PHY_REF>;
> >>>>>>>                                 clock-names = "ipg", "ahb", "ptp",
> >>>>>>>                                               "enet_clk_ref", "enet_out";
> >>>>>>> -                               fsl,num-tx-queues = <3>;
> >>>>>>> -                               fsl,num-rx-queues = <3>;
> >>>>>>> +                               fsl,num-tx-queues = <1>;
> >>>>>>> +                               fsl,num-rx-queues = <1>;
> >>>>>>>                                 status = "disabled";
> >>>>>>>                         };
> >>>>>>>                 };
> >>>>>>>
> >>>>>>> I hope this can help you :)
> >>>>>>
> >>>>>> Patching out the queues is probably not the right thing.
> >>>>>>
> >>>>>> for starters... Is there BQL support in this driver? It would be
> >>>>>> helpful to have on all queues.
> >>>>> There is no BQL support in this driver. BQL may improve
> >>>>> throughput further, but its absence should not be the root cause
> >>>>> of the reported issue.
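> >>>>>
> >>>>> For reference, wiring BQL into a driver like this usually comes
> >>>>> down to three symmetric calls (a hypothetical sketch, not code
> >>>>> that exists in fec_main.c today; ndev, queue_index, pkts_compl
> >>>>> and bytes_compl are placeholder names):
> >>>>>
> >>>>>   /* in ndo_start_xmit, after queuing the skb to the TX ring */
> >>>>>   netdev_tx_sent_queue(netdev_get_tx_queue(ndev, queue_index),
> >>>>>                        skb->len);
> >>>>>
> >>>>>   /* in the TX completion path, after reclaiming descriptors */
> >>>>>   netdev_tx_completed_queue(netdev_get_tx_queue(ndev, queue_index),
> >>>>>                             pkts_compl, bytes_compl);
> >>>>>
> >>>>>   /* on ring teardown/reset, resynchronize the BQL counters */
> >>>>>   netdev_tx_reset_queue(netdev_get_tx_queue(ndev, queue_index));
> >>>>>
> >>>>> The byte counts reported on the send and completion sides must
> >>>>> match, otherwise the queue limit estimator drifts.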
> >>>>>
> >>>>>> Also if there was a way to present it as two interfaces, rather
> >>>>>> than one, that would allow for a specific avb device to be presented.
> >>>>>>
> >>>>>> Or:
> >>>>>>
> >>>>>> Is there a standard means of signalling down the stack via the
> >>>>>> IP layer (a dscp? a setsockopt?) that the AVB queue is requested?
> >>>>>>
> >>>>> AFAIK, AVB is in the scope of VLAN, so we can queue AVB packets
> >>>>> into queues 1 & 2 based on the VLAN ID.
> >>>>
> >>>> I had to look up what AVB even means, but from my current
> >>>> understanding it doesn't seem right that for non-AVB packets the
> >>>> driver picks any of the three queues in a random fashion, while at
> >>>> the same time knowing that queues 1 and 2 have a 50% limitation on
> >>>> the bandwidth. Shouldn't there be some way to prefer queue 0,
> >>>> without needing the user to set it up or even arbitrarily limiting
> >>>> the number of queues as proposed above?
> >>>
> >>> Yes, I think we can. I looked into the NXP local implementation;
> >>> there is an ndo_select_queue callback:
> >>> https://source.codeaurora.org/external/imx/linux-imx/tree/drivers/net/ethernet/freescale/fec_main.c?h=lf-5.4.y#n3419
> >>> This is the version from the L5.4 kernel.
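> >>>
> >>> The core of that callback is roughly the sketch below (an
> >>> illustrative reconstruction of the linked downstream code, not a
> >>> verbatim copy; it assumes the VLAN helpers from <linux/if_vlan.h>,
> >>> and the priority-to-queue map values are as I read them there):
> >>>
> >>> /* VLAN PCP -> queue map: best-effort priorities stay on queue 0,
> >>>  * higher (AVB class) priorities go to queues 1 and 2.
> >>>  */
> >>> static const u8 fec_enet_vlan_pri_to_queue[8] = { 0, 0, 1, 1, 1, 2, 2, 2 };
> >>>
> >>> static u16 fec_enet_select_queue(struct net_device *ndev,
> >>>                                  struct sk_buff *skb,
> >>>                                  struct net_device *sb_dev)
> >>> {
> >>>         u16 vlan_tci;
> >>>
> >>>         /* Untagged traffic is never AVB: pin it to the best-effort
> >>>          * queue 0 so it is not throttled by the AVB shapers.
> >>>          */
> >>>         if (skb_vlan_tag_present(skb))
> >>>                 vlan_tci = skb_vlan_tag_get(skb);
> >>>         else if (skb->protocol == htons(ETH_P_8021Q))
> >>>                 vlan_tci = ntohs(vlan_eth_hdr(skb)->h_vlan_TCI);
> >>>         else
> >>>                 return 0;
> >>>
> >>>         /* Map the 3-bit PCP field (TCI bits 15:13) to a queue. */
> >>>         return fec_enet_vlan_pri_to_queue[vlan_tci >> VLAN_PRIO_SHIFT];
> >>> }
> >>>
> >>> It gets hooked up via .ndo_select_queue in the driver's
> >>> net_device_ops, so untagged traffic always lands on queue 0.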
> >>
> >> Yes, this looks like it could solve the issue. Would you mind
> >> preparing a patch to upstream the change in [1]? I would be happy to
> >> test (at least the non-AVB case) and review.
> >
> > Yes, I can have a try. I saw this patch has been in the downstream
> > tree for many years, and I don't know the history.
> > Anyway, I will try to upstream it first to see if anyone has comments.
> 
> Thanks, that would be great. Please put me on cc if you send the patch.
Sure :-)

Best Regards,
Joakim Zhang
> Just for the record:
> 
> When I set fsl,num-tx-queues = <1>, I see that the bandwidth drops no
> longer occur. When I instead apply the queue selection patch from the
> downstream kernel, I also see that queue 0 is always picked for my
> untagged traffic. In both cases the bandwidth stays as high as expected
> (> 900 Mbit/s).

