[PATCH 4/5] nvmet-rdma: add a NVMe over Fabrics RDMA target driver
Steve Wise
swise at opengridcomputing.com
Thu Jun 9 16:03:51 PDT 2016
<snip>
> > +
> > +static int nvmet_rdma_cm_handler(struct rdma_cm_id *cm_id,
> > + struct rdma_cm_event *event)
> > +{
> > + struct nvmet_rdma_queue *queue = NULL;
> > + int ret = 0;
> > +
> > + if (cm_id->qp)
> > + queue = cm_id->qp->qp_context;
> > +
> > + pr_debug("%s (%d): status %d id %p\n",
> > + rdma_event_msg(event->event), event->event,
> > + event->status, cm_id);
> > +
> > + switch (event->event) {
> > + case RDMA_CM_EVENT_CONNECT_REQUEST:
> > + ret = nvmet_rdma_queue_connect(cm_id, event);
The above nvmet cm event handler, nvmet_rdma_cm_handler(), calls
nvmet_rdma_queue_connect() for CONNECT_REQUEST events, which calls
nvmet_rdma_alloc_queue(), which, if it encounters a failure (like
creating the qp), calls nvmet_rdma_cm_reject(), which calls
rdma_reject().  The non-zero error, however, still gets returned back
up to this handler, which returns it to the RDMA_CM, and the RDMA_CM
will then reject the connection again and destroy the cm_id.  So I
think there are two rejects happening.  Either nvmet should do the
reject and destroy the cm_id itself (and return 0 here), or it should
do neither and return non-zero so the RDMA_CM does the reject/destroy.
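
Roughly, if nvmet owned the whole reject path, the CONNECT_REQUEST
case above would have to consume the error rather than pass it up.
An untested sketch of that first option (it assumes the nvmet side
also arranges for the rejected cm_id to be released later, which is
not something the patch does today):

	case RDMA_CM_EVENT_CONNECT_REQUEST:
		ret = nvmet_rdma_queue_connect(cm_id, event);
		if (ret) {
			/*
			 * On this failure path nvmet_rdma_alloc_queue()
			 * has already called nvmet_rdma_cm_reject(), so
			 * an rdma_reject() already went out.  Consume
			 * the error so the RDMA_CM core doesn't reject
			 * and destroy the cm_id a second time.  (Assumes
			 * nvmet takes care of releasing the cm_id.)
			 */
			ret = 0;
		}
		break;

The other option is the opposite: drop the rdma_reject() call from the
nvmet error path and keep returning non-zero here, letting the RDMA_CM
do the single reject and destroy the cm_id.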
Steve.