i.MX8MM Ethernet TX Bandwidth Fluctuations
Joakim Zhang
qiangqing.zhang at nxp.com
Wed May 19 00:49:45 PDT 2021
Hi Frieder,
> -----Original Message-----
> From: Frieder Schrempf <frieder.schrempf at kontron.de>
> Sent: 18 May 2021 20:55
> To: Joakim Zhang <qiangqing.zhang at nxp.com>; Dave Taht
> <dave.taht at gmail.com>
> Cc: dl-linux-imx <linux-imx at nxp.com>; netdev at vger.kernel.org;
> linux-arm-kernel at lists.infradead.org
> Subject: Re: i.MX8MM Ethernet TX Bandwidth Fluctuations
>
>
>
> On 18.05.21 14:35, Joakim Zhang wrote:
> >
> > Hi Dave,
> >
> >> -----Original Message-----
> >> From: Dave Taht <dave.taht at gmail.com>
> >> Sent: 17 May 2021 20:48
> >> To: Joakim Zhang <qiangqing.zhang at nxp.com>
> >> Cc: Frieder Schrempf <frieder.schrempf at kontron.de>; dl-linux-imx
> >> <linux-imx at nxp.com>; netdev at vger.kernel.org;
> >> linux-arm-kernel at lists.infradead.org
> >> Subject: Re: i.MX8MM Ethernet TX Bandwidth Fluctuations
> >>
> >> On Mon, May 17, 2021 at 3:25 AM Joakim Zhang
> >> <qiangqing.zhang at nxp.com>
> >> wrote:
> >>>
> >>>
> >>> Hi Frieder,
> >>>
> >>>> -----Original Message-----
> >>>> From: Frieder Schrempf <frieder.schrempf at kontron.de>
> >>>> Sent: 17 May 2021 15:17
> >>>> To: Joakim Zhang <qiangqing.zhang at nxp.com>; dl-linux-imx
> >>>> <linux-imx at nxp.com>; netdev at vger.kernel.org;
> >>>> linux-arm-kernel at lists.infradead.org
> >>>> Subject: Re: i.MX8MM Ethernet TX Bandwidth Fluctuations
> >>>>
> >>>> Hi Joakim,
> >>>>
> >>>> On 13.05.21 14:36, Joakim Zhang wrote:
> >>>>>
> >>>>> Hi Frieder,
> >>>>>
> >>>>> For the NXP release kernel, I tested on i.MX8MQ/MM/MP: I can
> >>>>> reproduce the issue on L5.10, but not on L5.4.
> >>>>> According to your description, you can reproduce this issue on both
> >>>>> L5.4 and L5.10? I need to confirm this with you.
> >>>>
> >>>> Thanks for looking into this. I could reproduce this on 5.4 and
> >>>> 5.10 but both kernels were official mainline kernels and **not**
> >>>> from the linux-imx downstream tree.
> >>> Ok.
> >>>
> >>>> Maybe there is some problem in the mainline tree and it got
> >>>> included in the NXP release kernel starting from L5.10?
> >>> No, this looks like a known issue; it has existed ever since AVB
> >>> support was added in mainline.
> >>>
> >>> The ENET IP does not implement _real_ multiple queues, per my
> >>> understanding: queue 0 is for best effort, while queues 1 and 2 are
> >>> for AVB streams, whose default bandwidth fraction is 0.5 in the
> >>> driver (i.e. 50 Mbit/s on a 100 Mbit/s link and 500 Mbit/s on a
> >>> 1 Gbit/s link). When transmitting packets, the net core selects a
> >>> queue randomly, which causes the TX bandwidth fluctuations. So you
> >>> can switch to a single queue if you care more about TX bandwidth, or
> >>> you can refer to the NXP internal implementation.
> >>> e.g.
> >>> --- a/arch/arm64/boot/dts/freescale/imx8mq.dtsi
> >>> +++ b/arch/arm64/boot/dts/freescale/imx8mq.dtsi
> >>> @@ -916,8 +916,8 @@
> >>>                          <&clk IMX8MQ_CLK_ENET_PHY_REF>;
> >>>                  clock-names = "ipg", "ahb", "ptp", "enet_clk_ref", "enet_out";
> >>> -                fsl,num-tx-queues = <3>;
> >>> -                fsl,num-rx-queues = <3>;
> >>> +                fsl,num-tx-queues = <1>;
> >>> +                fsl,num-rx-queues = <1>;
> >>>                  status = "disabled";
> >>>          };
> >>>  };
> >>>
> >>> I hope this can help you :)
> >>
> >> Patching out the queues is probably not the right thing.
> >>
> >> for starters... Is there BQL support in this driver? It would be
> >> helpful to have on all queues.
> > There is no BQL support in this driver. BQL might improve throughput
> > further, but its absence should not be the root cause of the reported issue.
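> >
> > To illustrate what BQL support would involve, here is a rough, untested
> > sketch against fec_main.c (the reclaim bookkeeping is simplified and
> > the exact hook points are my assumption):
> >
> >         /* fec_enet_txq_submit_skb(): account bytes handed to the HW */
> >         netdev_tx_sent_queue(netdev_get_tx_queue(ndev, queue), skb->len);
> >
> >         /* fec_enet_tx_queue(): account completed work while reclaiming */
> >         unsigned int pkts = 0, bytes = 0;
> >         /* ... for each reclaimed skb: pkts++; bytes += skb->len; ... */
> >         netdev_tx_completed_queue(netdev_get_tx_queue(ndev, queue_id),
> >                                   pkts, bytes);
> >
> >         /* fec_restart(): netdev_tx_reset_queue() on every TX queue */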
> >
> >> Also if there was a way to present it as two interfaces, rather than
> >> one, that would allow for a specific avb device to be presented.
> >>
> >> Or:
> >>
> >> Is there a standard means of signalling down the stack via the IP layer
> >> (a dscp? a setsockopt?) that the AVB queue is requested?
> >>
> > AFAIK, AVB is in the scope of VLAN, so we can queue AVB packets into
> > queues 1 and 2 based on the VLAN ID.
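> >
> > As a rough illustration (untested, and the priority value is just the
> > 802.1Q SR class A default, not something the driver mandates): with a
> > VLAN interface and an egress QoS map, an application can mark its
> > traffic via SO_PRIORITY, and the priority ends up as the frame's PCP:
> >
> >         /* assumes: ip link add link eth0 name eth0.100 type vlan \
> >          *                  id 100 egress-qos-map 3:3 */
> >         #include <sys/socket.h>
> >
> >         static int mark_avb_socket(int fd)
> >         {
> >                 int prio = 3;   /* maps to VLAN PCP 3 via the QoS map */
> >
> >                 return setsockopt(fd, SOL_SOCKET, SO_PRIORITY,
> >                                   &prio, sizeof(prio));
> >         }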
>
> I had to look up what AVB even means, but from my current understanding it
> doesn't seem right that for non-AVB packets the driver picks any of the three
> queues in a random fashion while at the same time knowing that queues 1 and 2
> have a 50% bandwidth limitation. Shouldn't there be some way to prefer
> queue 0, without requiring the user to set it up or arbitrarily limiting the
> number of queues as proposed above?
Yes, I think we can. Looking into the NXP local implementation, there is an
ndo_select_queue callback:
https://source.codeaurora.org/external/imx/linux-imx/tree/drivers/net/ethernet/freescale/fec_main.c?h=lf-5.4.y#n3419
This is the version for the L5.4 kernel.
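
For mainline, something along these lines could work. This is a rough,
untested sketch; the PCP-to-queue mapping below follows the 802.1Q SR
class defaults (class B = PCP 2, class A = PCP 3) and is my assumption,
not necessarily what the NXP version does:

        /* needs <linux/if_vlan.h>; hooked up via .ndo_select_queue */
        static u16 fec_enet_select_queue(struct net_device *ndev,
                                         struct sk_buff *skb,
                                         struct net_device *sb_dev)
        {
                u16 vlan_tci;

                /* Untagged traffic stays on the best-effort queue 0. */
                if (vlan_get_tag(skb, &vlan_tci))
                        return 0;

                /* Steer AVB streams by VLAN PCP, everything else to 0. */
                switch ((vlan_tci & VLAN_PRIO_MASK) >> VLAN_PRIO_SHIFT) {
                case 2:
                        return 1;       /* SR class B */
                case 3:
                        return 2;       /* SR class A */
                default:
                        return 0;
                }
        }
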
Best Regards,
Joakim Zhang
> >
> > Best Regards,
> > Joakim Zhang
> >>> Best Regards,
> >>> Joakim Zhang
> >>>> Best regards
> >>>> Frieder
> >>>>
> >>>>>
> >>>>> Best Regards,
> >>>>> Joakim Zhang
> >>>>>
> >>>>>> -----Original Message-----
> >>>>>> From: Joakim Zhang <qiangqing.zhang at nxp.com>
> >>>>>> Sent: 12 May 2021 19:59
> >>>>>> To: Frieder Schrempf <frieder.schrempf at kontron.de>; dl-linux-imx
> >>>>>> <linux-imx at nxp.com>; netdev at vger.kernel.org;
> >>>>>> linux-arm-kernel at lists.infradead.org
> >>>>>> Subject: RE: i.MX8MM Ethernet TX Bandwidth Fluctuations
> >>>>>>
> >>>>>>
> >>>>>> Hi Frieder,
> >>>>>>
> >>>>>> Sorry, I missed this mail before. I can reproduce this issue on
> >>>>>> my side and will try my best to look into it.
> >>>>>>
> >>>>>> Best Regards,
> >>>>>> Joakim Zhang
> >>>>>>
> >>>>>>> -----Original Message-----
> >>>>>>> From: Frieder Schrempf <frieder.schrempf at kontron.de>
> >>>>>>> Sent: 6 May 2021 22:46
> >>>>>>> To: dl-linux-imx <linux-imx at nxp.com>; netdev at vger.kernel.org;
> >>>>>>> linux-arm-kernel at lists.infradead.org
> >>>>>>> Subject: i.MX8MM Ethernet TX Bandwidth Fluctuations
> >>>>>>>
> >>>>>>> Hi,
> >>>>>>>
> >>>>>>> we observed a weird phenomenon with the Ethernet on our
> >>>>>>> i.MX8M-Mini boards. It happens quite often that the measured
> >>>>>>> bandwidth in TX direction drops from its expected/nominal value
> >>>>>>> to something like 50% (for 100M connections) or ~67% (for 1G
> >>>>>>> connections).
> >>>>>>>
> >>>>>>> So far we reproduced this with two different hardware designs
> >>>>>>> using two different PHYs (RGMII VSC8531 and RMII KSZ8081), two
> >>>>>>> different kernel versions (v5.4 and v5.10) and link speeds of
> >>>>>>> 100M and 1G.
> >>>>>>>
> >>>>>>> To measure the throughput we simply run iperf3 on the target
> >>>>>>> (with a short p2p connection to the host PC) like this:
> >>>>>>>
> >>>>>>> iperf3 -c 192.168.1.10 --bidir
> >>>>>>>
> >>>>>>> But even something simpler like this can be used to get the
> >>>>>>> info (with 'nc -l -p 1122 > /dev/null' running on the host):
> >>>>>>>
> >>>>>>> dd if=/dev/zero bs=10M count=1 | nc 192.168.1.10 1122
> >>>>>>>
> >>>>>>> The results fluctuate between test runs and are sometimes 'good'
> >>>>>>> (e.g. ~90 MBit/s for a 100M link) and sometimes 'bad' (e.g.
> >>>>>>> ~45 MBit/s for a 100M link).
> >>>>>>> There is nothing else running on the system in parallel. Some
> >>>>>>> more info is also available in this post: [1].
> >>>>>>>
> >>>>>>> If there's anyone around who has an idea on what might be the
> >>>>>>> reason for this, please let me know!
> >>>>>>> Or maybe someone would be willing to do a quick test on his own
> >>>>>>> hardware.
> >>>>>>> That would also be highly appreciated!
> >>>>>>>
> >>>>>>> Thanks and best regards
> >>>>>>> Frieder
> >>>>>>>
> >>>>>>> [1]:
> >>>>>>> https://community.nxp.com/t5/i-MX-Processors/i-MX8MM-Ethernet-TX-Bandwidth-Fluctuations/m-p/1242467#M170563
> >>
> >>
> >>
> >> --
> >> Latest Podcast:
> >> https://www.linkedin.com/feed/update/urn:li:activity:6791014284936785920/
> >>
> >> Dave Täht CTO, TekLibre, LLC