[PATCH] nvme-fabrics: allow to queue requests for live queues

Sagi Grimberg sagi at grimberg.me
Tue Jul 28 13:50:56 EDT 2020


>>>> diff --git a/drivers/nvme/host/fabrics.c b/drivers/nvme/host/fabrics.c
>>>> index 4ec4829d6233..2e7838f42e36 100644
>>>> --- a/drivers/nvme/host/fabrics.c
>>>> +++ b/drivers/nvme/host/fabrics.c
>>>> @@ -564,21 +564,13 @@ bool __nvmf_check_ready(struct nvme_ctrl *ctrl, struct request *rq,
>>>>   {
>>>>       struct nvme_request *req = nvme_req(rq);
>>>> -    /*
>>>> -     * If we are in some state of setup or teardown only allow
>>>> -     * internally generated commands.
>>>> -     */
>>>> -    if (!blk_rq_is_passthrough(rq) || (req->flags & NVME_REQ_USERCMD))
>>> "if (!blk_rq_is_passthrough(rq))" should not delete. Because if we 
>>> delete,
>>> the normal io will be send to target, the target can not treat the io
>>> if the queue is not NVME_CTRL_LIVE.
>>
>> Sure it does, the only reason for us to deny this I/O, is if the queue
>> is not live. The controller state should only _advise_ us if we need to
>> look at the queue state.
> 
> I disagree strongly with removing the check on NVME_REQ_USERCMD. We've
> seen CLI ioctls going to the admin queue while we're in the middle of
> doing controller initialization, and it has hosed the controller state
> in some cases. We've seen commands issued before the controller is in
> the proper state. The admin queue may be live - but we don't
> necessarily want other I/O sneaking in.

Can you please give an example? NVME_REQ_USERCMD should not be any
different from any other type of I/O.

Also, do note that pci allows queueing any type of command based on the
queue state alone. fabrics should be slightly different because we have
the CONNECTING state, where we want to allow only the connect command to
be issued.
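
To make this concrete, here is roughly the shape I have in mind. Treat it
as a sketch only, not the exact patch; it reuses the existing fabrics
helpers (nvme_is_fabrics(), nvme_fabrics_type_connect) and lets queue_live
make the final call:

bool __nvmf_check_ready(struct nvme_ctrl *ctrl, struct request *rq,
		bool queue_live)
{
	struct nvme_request *req = nvme_req(rq);

	switch (ctrl->state) {
	case NVME_CTRL_CONNECTING:
		/*
		 * Only the fabrics connect command may go out on a queue
		 * that is not live yet, because it is what makes the
		 * queue live in the first place.
		 */
		if (blk_rq_is_passthrough(rq) && nvme_is_fabrics(req->cmd) &&
		    req->cmd->fabrics.fctype == nvme_fabrics_type_connect)
			return true;
		break;
	case NVME_CTRL_DEAD:
		return false;
	default:
		break;
	}

	/* for everything else, the queue state decides */
	return queue_live;
}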


> As for the blk_rq_is_passthrough check - I guess I can see it being
> based on the queue state, and the check looks ok (we should never see
> !blk_rq_is_passthrough on the admin q).
> But...
> - I don't know why it was that important to change it. On the connecting
> path, all you're doing is letting I/O start flowing before all the queues
> have been created. Did you really need to start that much sooner?

The issue is that a controller in RESETTING state will have requests that
are still being issued, and if we don't let them pass through, they will
hang around forever being requeued, preventing the queue freeze from
completing.

See the bug report from Krishnamraju Eraparaju.
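
For reference, the dispatch-side gate in every fabrics transport looks
more or less like the sketch below (the per-queue type and flag are made
up for illustration; nvmf_check_ready() and nvmf_fail_nonready_command()
are the existing fabrics helpers):

/* hypothetical per-queue state, for illustration only */
enum { EXAMPLE_Q_LIVE = 0 };

struct nvme_example_queue {
	struct nvme_ctrl	*ctrl;
	unsigned long		flags;
};

static blk_status_t example_queue_rq(struct blk_mq_hw_ctx *hctx,
		const struct blk_mq_queue_data *bd)
{
	struct nvme_example_queue *queue = hctx->driver_data;
	struct request *rq = bd->rq;
	bool queue_ready = test_bit(EXAMPLE_Q_LIVE, &queue->flags);

	/*
	 * If this keeps saying "not ready" for a request that already
	 * entered the queue before the freeze started, the request is
	 * bounced back to the requeue list forever, its q_usage_counter
	 * reference is never dropped, and nvme_wait_freeze() in the
	 * reset path never returns.
	 */
	if (!nvmf_check_ready(queue->ctrl, rq, queue_ready))
		return nvmf_fail_nonready_command(queue->ctrl, rq);

	/* ...map data and post the command to the wire... */
	return BLK_STS_OK;
}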

> 
> - But on the resetting or deleting paths, you've added a window where
> the controller state has changed but there is a delay before the
> transport updates the queue's live state. It's common practice in the
> transports to change the controller state and then schedule a work
> element to perform the actual state change. Why would you want I/O to
> continue to flow during that window? This may bring out other problems
> we've avoided in the past.

What are you referring to? The change here? The controller reset must
allow requests that came in before we started the queue freeze to pass;
otherwise the freeze will never complete.
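
To spell out the ordering, here is a rough sketch of the freeze lifecycle
across a reset (not any specific transport's code; the nvme_* helpers are
the existing core ones):

static void example_reset_io_queues(struct nvme_ctrl *ctrl)
{
	/* teardown */
	nvme_start_freeze(ctrl);	/* new submitters block in blk_queue_enter() */
	nvme_stop_queues(ctrl);		/* quiesce dispatch to the old queues */
	/* ...terminate the transport queues, cancel in-flight requests... */
	nvme_start_queues(ctrl);	/* requests that already entered must be
					 * able to dispatch again */

	/* after the association is re-established: */
	nvme_wait_freeze(ctrl);		/* waits for every pre-freeze request to finish */
	/* ...blk_mq_update_nr_hw_queues() and friends would go here... */
	nvme_unfreeze(ctrl);		/* reopen the gate for new I/O */
}

If the readiness check keeps bouncing those pre-freeze requests,
nvme_wait_freeze() above is exactly where the reset gets stuck.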


