[PATCH] nvme-rdma: Fix early queue flags settings
Steve Wise
swise at opengridcomputing.com
Wed Sep 21 07:18:04 PDT 2016
> >
> > I can modify the change log, Christoph do you still want a
> > comment in the code?
>
> Honestly, the more I look into this the less happy I am with the patch.
> queue->flags is an atomic, and as the patch shows we can get
> nvme_rdma_init_queue called on a queue that still has visibility in
> other threads. So I think we really should not do that simple
> queue->flags = 0 assignment at all. We'll need to use clear_bit to
> atomically clear anything that might be set, and we need to be careful
> about where we do that. This whole situation, where an *_init_*
> function can be called on something that is already live and visible
> to other threads, needs to be well documented at least, because it's
> just waiting for suckers like me who don't expect that.
Sagi, you originally proposed this in a patch for debugging the crash where
a request accesses a queue whose RDMA resources have been freed:
@@ -542,11 +542,12 @@ static int nvme_rdma_create_queue_ib(struct nvme_rdma_queue *queue,
 		goto out_destroy_qp;
 	}
 	set_bit(NVME_RDMA_IB_QUEUE_ALLOCATED, &queue->flags);
+	clear_bit(NVME_RDMA_Q_DELETING, &queue->flags);
 	return 0;
Perhaps this is how we should proceed?
More information about the Linux-nvme mailing list