[PATCH v3 0/6] Fix mlx5 write combining support on new ARM64 cores

Jason Gunthorpe <jgg@nvidia.com>
Thu Apr 11 09:46:13 PDT 2024


mlx5 has a built-in self-test at driver startup to evaluate whether the
platform supports write combining to generate a 64 byte PCIe TLP or
not. This has proven necessary because many common scenarios end up
with broken write combining (especially inside virtual machines) and
there is no other way to learn this information.

This self-test has been consistently failing on new ARM64 CPU
designs (specifically with NVIDIA Grace's implementation of Neoverse
V2). The C loop around writeq() generates some pretty terrible ARM64
assembly, but it has historically worked on a lot of existing ARM64
CPUs.

We see it succeed about 1 time in 10,000 on the worst affected
systems. The CPU architects speculate that the load instructions
interspersed with the stores make the test unreliable.
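
For illustration, the failing pattern is roughly the shape below (a
minimal sketch, not the exact mlx5 test code; the function and variable
names are made up). Each iteration reloads from the source buffer, so
the compiler emits loads interleaved with the stores to the WC mapping:

  /* Sketch only: the kind of copy loop that fails the self-test */
  static void copy_64b_to_wc(void __iomem *wc_dest, const u64 *src)
  {
          int i;

          for (i = 0; i != 8; i++)
                  writeq(src[i], wc_dest + i * 8);
  }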

Arrange things so that ARM64 uses a predictable inline assembly block
of eight STR instructions.
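
Simplified, the constant 64-byte case ends up as a single asm block
along these lines (condensed from the arm64 patch in this series; from
is the source buffer, to is the WC mapping):

  /* Eight back-to-back STRs with no loads in between */
  asm volatile("str %x0, [%8, #8 * 0]\n"
               "str %x1, [%8, #8 * 1]\n"
               "str %x2, [%8, #8 * 2]\n"
               "str %x3, [%8, #8 * 3]\n"
               "str %x4, [%8, #8 * 4]\n"
               "str %x5, [%8, #8 * 5]\n"
               "str %x6, [%8, #8 * 6]\n"
               "str %x7, [%8, #8 * 7]\n"
               :
               : "rZ"(from[0]), "rZ"(from[1]), "rZ"(from[2]),
                 "rZ"(from[3]), "rZ"(from[4]), "rZ"(from[5]),
                 "rZ"(from[6]), "rZ"(from[7]), "r"(to));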

Catalin suggested implementing this in terms of the obscure
__iowrite64_copy() interface, which was long ago added to optimize
write combining stores on Pathscale RDMA HW for x86. These copy
routines have the advantage of requiring the caller to supply the
alignment, which allows an optimal assembly implementation.

This is a good suggestion because it turns out that s390 has much the
same problem and already uses __iowrite64_copy() to try to make its WC
operations work.
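
For reference, the interface takes a count in 64-bit units, so a
64-byte write combining store looks like this (variable names are
illustrative):

  /* void __iowrite64_copy(void __iomem *to, const void *from, size_t count);
   * A count of 8 quadwords == one 64 byte TLP worth of data.
   */
  __iowrite64_copy(wc_doorbell, wqe, 8);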

The first several patches modernize and improve the performance of
__iowriteXX_copy() so that an ARM64 implementation can be provided
that relies on __builtin_constant_p to generate fast inlined assembly
code in a few common cases.
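
The dispatch is roughly the following (helper names shown only as a
sketch of the pattern used by the series):

  static inline void __iowrite64_copy(void __iomem *to, const void *from,
                                      size_t count)
  {
          if (__builtin_constant_p(count))
                  __const_iowrite64_copy(to, from, count);
          else
                  __iowrite64_copy_full(to, from, count);
  }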

It looks ack'd enough now, so I plan to take this through the RDMA tree.

v3:
 - Rebase to 6.9-rc3
 - Fix copy&pasteo in __const_memcpy_toio_aligned64() to use __raw_writeq()
v2: https://lore.kernel.org/r/0-v1-38290193eace+5-mlx5_arm_wc_jgg@nvidia.com
 - Rework everything to use __iowrite64_copy().
 - Don't use STP since that is not reliably supported in ARM VMs
 - New patches to tidy up __iowriteXX_copy() on x86 and s390
v1: https://lore.kernel.org/r/cover.1700766072.git.leon@kernel.org

Jason Gunthorpe (6):
  x86: Stop using weak symbols for __iowrite32_copy()
  s390: Implement __iowrite32_copy()
  s390: Stop using weak symbols for __iowrite64_copy()
  arm64/io: Provide a WC friendly __iowriteXX_copy()
  net: hns3: Remove io_stop_wc() calls after __iowrite64_copy()
  IB/mlx5: Use __iowrite64_copy() for write combining stores

 arch/arm64/include/asm/io.h                   | 132 ++++++++++++++++++
 arch/arm64/kernel/io.c                        |  42 ++++++
 arch/s390/include/asm/io.h                    |  15 ++
 arch/s390/pci/pci.c                           |   6 -
 arch/x86/include/asm/io.h                     |  17 +++
 arch/x86/lib/Makefile                         |   1 -
 arch/x86/lib/iomap_copy_64.S                  |  15 --
 drivers/infiniband/hw/mlx5/mem.c              |   8 +-
 .../net/ethernet/hisilicon/hns3/hns3_enet.c   |   4 -
 include/linux/io.h                            |   8 +-
 lib/iomap_copy.c                              |  13 +-
 11 files changed, 222 insertions(+), 39 deletions(-)
 delete mode 100644 arch/x86/lib/iomap_copy_64.S


base-commit: fec50db7033ea478773b159e0e2efb135270e3b7
-- 
2.43.2