[PATCH rfc 0/6] Automatic affinity settings for nvme over rdma
Max Gurtovoy
maxg at mellanox.com
Tue Apr 4 00:51:47 PDT 2017
>
> Any feedback is welcome.
Hi Sagi,

The patchset looks good, and of course we can add support for more
drivers in the future.

Have you run any performance testing with the nvmf initiator?
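For my own understanding I sketched below, after the diffstat, how I read
the new queue-mapping path; please correct me if I got it wrong.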
>
> Sagi Grimberg (6):
> mlx5: convert to generic pci_alloc_irq_vectors
> mlx5: move affinity hints assignments to generic code
> RDMA/core: expose affinity mappings per completion vector
> mlx5: support ->get_vector_affinity
> block: Add rdma affinity based queue mapping helper
> nvme-rdma: use intelligent affinity based queue mappings
>
> block/Kconfig | 5 +
> block/Makefile | 1 +
> block/blk-mq-rdma.c | 56 +++++++++++
> drivers/infiniband/hw/mlx5/main.c | 10 ++
> drivers/net/ethernet/mellanox/mlx5/core/en_main.c | 5 +-
> drivers/net/ethernet/mellanox/mlx5/core/eq.c | 9 +-
> drivers/net/ethernet/mellanox/mlx5/core/eswitch.c | 2 +-
> drivers/net/ethernet/mellanox/mlx5/core/health.c | 2 +-
> drivers/net/ethernet/mellanox/mlx5/core/main.c | 106 +++------------------
> .../net/ethernet/mellanox/mlx5/core/mlx5_core.h | 1 -
> drivers/nvme/host/rdma.c | 13 +++
> include/linux/blk-mq-rdma.h | 10 ++
> include/linux/mlx5/driver.h | 2 -
> include/rdma/ib_verbs.h | 24 +++++
> 14 files changed, 138 insertions(+), 108 deletions(-)
> create mode 100644 block/blk-mq-rdma.c
> create mode 100644 include/linux/blk-mq-rdma.h
>
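Here is the sketch. It is only my reading of the series going by the patch
titles above, so the helper and hook names are guesses on my side and the
exact signatures in your patches may well differ:

#include <linux/blk-mq.h>
#include <rdma/ib_verbs.h>

/*
 * Sketch: map each blk-mq hw queue to the CPUs that its completion
 * vector is bound to, using the new RDMA/core affinity helper, and
 * fall back to the default spread when the device gives no hints.
 */
int blk_mq_rdma_map_queues(struct blk_mq_tag_set *set,
		struct ib_device *dev, int first_vec)
{
	const struct cpumask *mask;
	unsigned int queue, cpu;

	for (queue = 0; queue < set->nr_hw_queues; queue++) {
		/* NULL if the driver has no ->get_vector_affinity */
		mask = ib_get_vector_affinity(dev, first_vec + queue);
		if (!mask)
			goto fallback;

		for_each_cpu(cpu, mask)
			set->mq_map[cpu] = queue;
	}
	return 0;

fallback:
	return blk_mq_map_queues(set);
}

/* and nvme-rdma would simply point ->map_queues at the helper
 * (nvme_rdma_ctrl as defined in drivers/nvme/host/rdma.c): */
static int nvme_rdma_map_queues(struct blk_mq_tag_set *set)
{
	struct nvme_rdma_ctrl *ctrl = set->driver_data;

	return blk_mq_rdma_map_queues(set, ctrl->device->dev, 0);
}

If that reading is right, completions end up on the same CPUs that submit
the I/O, without any driver-specific affinity code in nvme-rdma itself.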