[PATCH] nvme-fabrics: allow to queue requests for live queues
James Smart
james.smart at broadcom.com
Tue Jul 28 13:11:49 EDT 2020
On 7/27/2020 11:49 PM, Sagi Grimberg wrote:
>
>>> diff --git a/drivers/nvme/host/fabrics.c b/drivers/nvme/host/fabrics.c
>>> index 4ec4829d6233..2e7838f42e36 100644
>>> --- a/drivers/nvme/host/fabrics.c
>>> +++ b/drivers/nvme/host/fabrics.c
>>> @@ -564,21 +564,13 @@ bool __nvmf_check_ready(struct nvme_ctrl *ctrl, struct request *rq,
>>> {
>>> struct nvme_request *req = nvme_req(rq);
>>> - /*
>>> - * If we are in some state of setup or teardown only allow
>>> - * internally generated commands.
>>> - */
>>> - if (!blk_rq_is_passthrough(rq) || (req->flags & NVME_REQ_USERCMD))
>> "if (!blk_rq_is_passthrough(rq))" should not delete. Because if we
>> delete,
>> the normal io will be send to target, the target can not treat the io
>> if the queue is not NVME_CTRL_LIVE.
>
> Sure it does. The only reason for us to deny this I/O is if the queue
> is not live. The controller state should only _advise_ us on whether we
> need to look at the queue state.
I disagree strongly with removing the check on NVME_REQ_USERCMD. We've
seen cli ioctls going to the admin queue while we're in the middle of
controller initialization, and it has hosed the controller state in some
cases. We've seen commands issued before the controller is in the proper
state. The admin queue may be live - but we don't necessarily want other
io sneaking in.
As for the blk_rq_is_passthrough check - I guess I can see it being
based on the queue state, and the check looks ok (we should never see
!blk_rq_is_passthrough on the admin q).
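For reference, the gating being argued about looks roughly like the sketch
below. This is a simplified illustration modeled on the pre-patch
__nvmf_check_ready() - the nvmf_request_may_proceed() name is made up for
the sketch - and it assumes, as in the inline nvmf_check_ready() wrapper,
that the NVME_CTRL_LIVE fast path has already been taken:

#include <linux/blkdev.h>	/* blk_rq_is_passthrough() */
#include "nvme.h"		/* nvme_req(), NVME_REQ_USERCMD, struct nvme_ctrl */

/* Sketch only - not the verbatim upstream code. */
static bool nvmf_request_may_proceed(struct nvme_ctrl *ctrl,
				     struct request *rq, bool queue_live)
{
	struct nvme_request *req = nvme_req(rq);

	/*
	 * While the controller is being set up or torn down, only allow
	 * internally generated commands: filesystem I/O and user ioctls
	 * (NVME_REQ_USERCMD) are held back even if the queue is live.
	 */
	if (!blk_rq_is_passthrough(rq) || (req->flags & NVME_REQ_USERCMD))
		return false;

	/*
	 * While connecting, only the fabrics connect command may pass,
	 * since it is what brings the queue live in the first place.
	 */
	if (ctrl->state == NVME_CTRL_CONNECTING)
		return nvme_is_fabrics(req->cmd) &&
		       req->cmd->fabrics.fctype == nvme_fabrics_type_connect;

	/* Otherwise, defer to the queue state. */
	return queue_live;
}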
But...
- I don't know why it was that important to change it. On the connecting
path, all you're doing is letting io start flowing before all the queues
have been created. Did you really need to start that much sooner?
- But on the resetting path, or in the deleting cases, you've added a
condition where the controller state has changed but there is a delay
before the transport marks the queue as no longer live. It's common
practice in the transports to change the controller state and then
schedule a work element to perform the actual queue state change (a rough
sketch of that pattern follows below). Why would you want io to continue
to flow during that window? This may bring out other problems we've
avoided in the past.
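To make that window concrete, here is a rough sketch of the "change state,
then schedule a work element" pattern I'm referring to, loosely modeled on
nvme_reset_ctrl() and a transport's reset work. The nvmf_sketch_* names are
made up for illustration and this is not any one transport's exact code:

#include <linux/errno.h>
#include <linux/workqueue.h>
#include "nvme.h"	/* nvme_change_ctrl_state(), nvme_reset_wq, nvme_stop_queues() */

static int nvmf_sketch_reset_ctrl(struct nvme_ctrl *ctrl)
{
	/* The controller state flips to RESETTING immediately ... */
	if (!nvme_change_ctrl_state(ctrl, NVME_CTRL_RESETTING))
		return -EBUSY;

	/* ... but the actual teardown only happens from the work element. */
	if (!queue_work(nvme_reset_wq, &ctrl->reset_work))
		return -EBUSY;

	return 0;
}

static void nvmf_sketch_reset_work(struct work_struct *work)
{
	struct nvme_ctrl *ctrl =
		container_of(work, struct nvme_ctrl, reset_work);

	/*
	 * Only here does the transport quiesce the queues and clear its
	 * "queue live" marking; until this runs, queue_live still reads
	 * true while ctrl->state is already RESETTING.
	 */
	nvme_stop_queues(ctrl);
	/* ... transport-specific teardown, clear the per-queue LIVE flags ... */
}

Until that work runs, a check that looks only at the queue state keeps
letting io through, which is exactly the window I'm worried about.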
-- james