Optimizing kernel compilation / alignments for network performance

Arnd Bergmann arnd at arndb.de
Fri May 6 01:45:29 PDT 2022


On Fri, May 6, 2022 at 9:44 AM Rafał Miłecki <zajec5 at gmail.com> wrote:
>
> On 5.05.2022 18:04, Andrew Lunn wrote:
> >> you'll see that the most-used functions are:
> >> v7_dma_inv_range
> >> __irqentry_text_end
> >> l2c210_inv_range
> >> v7_dma_clean_range
> >> bcma_host_soc_read32
> >> __netif_receive_skb_core
> >> arch_cpu_idle
> >> l2c210_clean_range
> >> fib_table_lookup
> >
> > There is a lot of cache management functions here.

Indeed, so optimizing the coherency management (see Felix's reply)
is likely to help the most in making the driver faster, but that does
not explain why the alignment of the object code has such a big
impact on performance.

To investigate the alignment further, what I was actually looking for
is a comparison of the profiles of the slow and fast cases. Here I
would expect the slow case to spend more time in one of the functions
that don't deal with cache management (maybe fib_table_lookup or
__netif_receive_skb_core).

A few other thoughts:

- bcma_host_soc_read32() is a fundamentally slow operation, so maybe
  some of the calls can be turned into a relaxed read, like the
  readback in bgmac_chip_intrs_off() or the 'poll again' at the end
  of bgmac_poll(), though obviously not the one in
  bgmac_dma_rx_read(). It may even be possible to avoid some of the
  reads entirely: checking for more data in bgmac_poll() may actually
  be counterproductive depending on the workload. (See the first
  sketch after this list.)

- The higher-end networking SoCs are usually cache-coherent and
  can avoid the cache management entirely. There is a slim chance
  that this chip is designed that way and it just needs to be enabled
  properly. Most low-end chips don't implement the coherent
  interconnect though, and I suppose you have checked this already.

- bgmac_dma_rx_update_index() and bgmac_dma_tx_add() appear to have
  an extraneous dma_wmb(), which should be implied by the non-relaxed
  writel() in bgmac_write(). (See the second sketch after this list.)

- accesses to the DMA descriptors don't show up in the profile here,
  but they look like they can get misoptimized by the compiler. I
  would generally use READ_ONCE() and WRITE_ONCE() for these to
  ensure that you don't end up with extra or out-of-order accesses.
  This also makes it clearer to the reader that something special
  happens here. (See the third sketch after this list.)
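
To make the first point concrete, here is a minimal sketch of a
relaxed read helper. This is not an existing bcma API; the name is
made up, and it assumes the MMIO base is reachable as core->io_addr,
as in bcma_host_soc_read32() in drivers/bcma/host_soc.c:

static u32 bcma_host_soc_read32_relaxed(struct bcma_device *core,
					u16 offset)
{
	/*
	 * readl_relaxed() skips the DMA-ordering barrier that plain
	 * readl() implies. Accesses to the same device stay ordered,
	 * so a readback like the one in bgmac_chip_intrs_off() would
	 * still flush the posted write.
	 */
	return readl_relaxed(core->io_addr + offset);
}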
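
For the dma_wmb() point, the change would look roughly like this,
assuming bgmac_dma_rx_update_index() still matches the version in
drivers/net/ethernet/broadcom/bgmac.c:

static void bgmac_dma_rx_update_index(struct bgmac *bgmac,
				      struct bgmac_dma_ring *ring)
{
	/*
	 * No dma_wmb() needed here: bgmac_write() ends in a
	 * non-relaxed writel(), whose implicit barrier already orders
	 * the descriptor stores before this doorbell write.
	 */
	bgmac_write(bgmac, ring->mmio_base + BGMAC_DMA_RX_INDEX,
		    ring->index_base +
		    ring->end * sizeof(struct bgmac_dma_desc));
}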
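
And for the descriptor accesses, an illustrative helper (the function
is made up; the field names follow struct bgmac_dma_desc) showing the
READ_ONCE()/WRITE_ONCE() pattern:

static void bgmac_dma_desc_set(struct bgmac_dma_desc *desc,
			       dma_addr_t addr, u32 ctl0, u32 ctl1)
{
	/*
	 * WRITE_ONCE() keeps the compiler from tearing, duplicating
	 * or reordering these stores relative to one another; CPU
	 * ordering against the doorbell write is still provided by
	 * the barrier implied by the writel() in bgmac_write().
	 */
	WRITE_ONCE(desc->addr_low, cpu_to_le32(lower_32_bits(addr)));
	WRITE_ONCE(desc->addr_high, cpu_to_le32(upper_32_bits(addr)));
	WRITE_ONCE(desc->ctl1, cpu_to_le32(ctl1));
	WRITE_ONCE(desc->ctl0, cpu_to_le32(ctl0));
}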

> > Might sound odd,
> > but have you tried disabling SMP? These cache functions need to
> > operate across all CPUs, and the communication between CPUs can slow
> > them down. If there is only one CPU, these cache functions get simpler
> > and faster.
> >
> > It just depends on your workload. If you have 1 CPU loaded to 100% and
> > the other 3 idle, you might see an improvement. If you actually need
> > more than one CPU, it will probably be worse.
>
> It seems to lower my NAT speed from ~362 Mb/s to 320 Mb/s, but it
> feels more stable now (less variation). Let me spend some time on
> more testing.
>
>
> FWIW during all my tests I was using:
> echo 2 > /sys/class/net/eth0/queues/rx-0/rps_cpus
> that is what I need to get similar speeds across iperf sessions
>
> With
> echo 0 > /sys/class/net/eth0/queues/rx-0/rps_cpus
> my NAT speeds were jumping between 4 speeds:
> 273 Mbps / 315 Mbps / 353 Mbps / 425 Mbps
> (every time I started iperf, the kernel jumped into one state and
>   kept the same iperf speed until I stopped it and started another
>   session)
>
> With
> echo 1 > /sys/class/net/eth0/queues/rx-0/rps_cpus
> my NAT speeds were jumping between 2 speeds:
> 284 Mbps / 408 Mbps

Can you try using 'numactl -C' to pin the iperf processes to
a particular CPU core? This may be related to the locality of
the user process relative to where the interrupts end up.
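
For example (the CPU number is an arbitrary choice for illustration;
'taskset -c' does the same job if numactl is not available on the
box):

    numactl -C 1 iperf -c <server address>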

        Arnd


