SPDK initiators (VMware 7.x) cannot connect to nvmet-rdma.
Mark Ruijter
mruijter at primelogic.nl
Wed Sep 1 07:51:15 PDT 2021
Hi Sagi,
I am using VMware 7.x as initiator with RDMA.
The target system is running Ubuntu 20.04.3 LTS with kernel 5.11.22+.
The exported device is an LVM volume; however, I also tested with a file-backed loop device.
Connecting with SPDK seems to trigger the problem, and as reported on the SPDK mailing list, the SPDK perf tool can be used to reproduce the issue when VMware is not available:
./perf -q 64 -P 1 -s 4096 -w read -t 300 -c 0x1 -o 4096 -r 'trtype:RDMA adrfam:IPv4 traddr:169.254.85.8 trsvcid:4420'
This nvme-cli command seems to produce a similar result, presumably because SPDK requests an I/O queue size of 1024 by default:
nvme connect --transport=rdma --queue-size=1024 --nqn=testnqn_1 --traddr=169.254.85.8 --trsvcid=4420
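For what it's worth, lowering --queue-size below the adapter's limit should avoid the failure; a sketch (untested here, assuming the fabrics default of 128 fits the device):

nvme connect --transport=rdma --queue-size=128 --nqn=testnqn_1 --traddr=169.254.85.8 --trsvcid=4420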
I hope this helps,
--Mark
On 01/09/2021, 14:52, "Sagi Grimberg" <sagi at grimberg.me> wrote:
> When I connect an SPDK initiator, it tries to connect with a queue size of 1024.
> The Linux target is unable to handle this situation and returns an error.
>
> Aug 28 14:22:56 crashme kernel: [169366.627010] infiniband mlx5_0: create_qp:2789:(pid 33755): Create QP type 2 failed
> Aug 28 14:22:56 crashme kernel: [169366.627913] nvmet_rdma: failed to create_qp ret= -12
> Aug 28 14:22:56 crashme kernel: [169366.628498] nvmet_rdma: nvmet_rdma_alloc_queue: creating RDMA queue failed (-12).
It seems that the target is trying to open a queue pair that is larger than
the device supports. Which device are you using?
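For what it's worth, -12 is ENOMEM, which the mlx5 driver can return when the requested WQE depth exceeds the device's max_qp_wr. Assuming rdma-core is installed, something like this should show the limit (mlx5_0 taken from the log above):

ibv_devinfo -v -d mlx5_0 | grep max_qp_wr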