[PATCH v2] nvme-fabrics: allow to queue requests for live queues
James Smart
james.smart at broadcom.com
Wed Jul 29 15:34:25 EDT 2020
On 7/29/2020 12:58 AM, Sagi Grimberg wrote:
> Right now we are failing requests based on the controller
> state (which is checked inline in nvmf_check_ready) however
> we should definitely accept requests if the queue is live.
>
> When entering controller reset, we transition the controller
> into NVME_CTRL_RESETTING, and then return BLK_STS_RESOURCE for
> non-mpath requests (which have blk_noretry_request set).
>
> This is also the case for NVME_REQ_USERCMD, for the wrong reason.
> There shouldn't be any reason for us to reject this I/O in a
> controller reset. We do want to prevent passthru commands on
> the admin queue because we need the controller to fully initialize
> first before we let user passthru admin commands be issued.
>
> In a non-mpath setup, this means that the requests will simply
> be requeued over and over forever, never allowing the q_usage_counter
> to drop its final reference, causing controller reset to hang
> if running concurrently with heavy I/O.
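For reference, the non-ready path described above is roughly the
following (my paraphrase, not a verbatim quote of fabrics.c; the
example_ name is mine):

/*
 * Requests that are still allowed to be retried get BLK_STS_RESOURCE,
 * so blk-mq requeues them and each one keeps holding its
 * q_usage_counter reference; everything else is completed right away
 * with a path error.
 */
static blk_status_t example_fail_nonready(struct nvme_ctrl *ctrl,
                                          struct request *rq)
{
        if (ctrl->state != NVME_CTRL_DELETING &&
            ctrl->state != NVME_CTRL_DEAD &&
            !blk_noretry_request(rq) &&
            !(nvme_req(rq)->flags & NVME_REQ_USERCMD))
                return BLK_STS_RESOURCE; /* requeued, reference still held */

        nvme_req(rq)->status = NVME_SC_HOST_PATH_ERROR;
        blk_mq_start_request(rq);
        nvme_complete_rq(rq);
        return BLK_STS_OK;
}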
I've been looking at this trying to understand the real issues.
First issue: even if the check_ready checks pass, there's nothing that
says the transport won't return BLK_STS_RESOURCE for its own reasons.
Maybe TCP/RDMA don't, but FC does today for cases of lost connectivity -
and a loss of connectivity will cause a Resetting state transition.
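To make that concrete, a fabrics ->queue_rq() looks roughly like the
sketch below (signature simplified and the example_ names are
hypothetical; nvmf_check_ready() and nvmf_fail_nonready_command() are
the real helpers):

/*
 * Even after the generic readiness check passes, the transport may
 * still bounce the request with BLK_STS_RESOURCE for its own reasons,
 * e.g. FC noticing lost connectivity to the remote port.
 */
static blk_status_t example_queue_rq(struct nvme_ctrl *ctrl,
                                     struct request *rq, bool queue_live)
{
        /* generic controller-state / queue-liveness gate */
        if (!nvmf_check_ready(ctrl, rq, queue_live))
                return nvmf_fail_nonready_command(ctrl, rq);

        /* transport-private gate, checked independently of the above */
        if (!example_transport_has_connectivity(ctrl)) /* hypothetical helper */
                return BLK_STS_RESOURCE;

        /* ... normal command setup and submission ... */
        return BLK_STS_OK;
}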
I agree with the
+ if (rq->q == ctrl->admin_q && (req->flags & NVME_REQ_USERCMD))
change, at least until we have an accepted way to serialize/prioritize
Connect and initialization commands. We need to do this.
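In other words, the gate I'm agreeing with sits inside the readiness
check roughly like this (simplified sketch, not the full hunk; the
example_ name is mine, and the real check also special-cases the
fabrics Connect command while Connecting):

/*
 * Hold user passthru admin commands back until the controller is fully
 * initialized, so they cannot race ahead of Connect and the rest of
 * the initialization sequence.
 */
static bool example_check_ready(struct nvme_ctrl *ctrl, struct request *rq,
                                bool queue_live)
{
        struct nvme_request *req = nvme_req(rq);

        if (rq->q == ctrl->admin_q && (req->flags & NVME_REQ_USERCMD))
                return false;

        /* otherwise a live queue may accept the request */
        return queue_live;
}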
Which then leaves:
The patch isn't changing any behavior when !q->live - which occurs in
the latter steps of Resetting as well as most of the Connecting state.
So any I/O received while !q->live is still getting queued/retried. So
what was the problem? It has to be limited in scope to the start of
the Resetting state, while q->live had yet to transition to !live.
But I don't see any freezing in this path.
Looking at the original problem, the lockups are:
1) the sysfs write to do a reset - which is stalled waiting on a flush
of reset_work
2) reset_work is stalled on a call to blk_mq_update_nr_hw_queues(),
which then does a blk_mq_freeze_queue_wait (sketched below).
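For clarity, the two stalled contexts as I understand them
(illustrative wrappers only - the kernel symbols named in them are
real, the example_ functions are not):

/* (1) the sysfs write path blocks until reset_work has run */
static int example_sysfs_reset_path(struct nvme_ctrl *ctrl)
{
        /* nvme_reset_ctrl_sync() does flush_work(&ctrl->reset_work) */
        return nvme_reset_ctrl_sync(ctrl);
}

/* (2) the tail of reset_work, via the transport setup path */
static void example_reset_work_tail(struct blk_mq_tag_set *set,
                                    int nr_io_queues)
{
        /*
         * blk_mq_update_nr_hw_queues() freezes every namespace queue
         * and waits in blk_mq_freeze_queue_wait() for each
         * q_usage_counter to drain.  Requests that keep getting
         * BLK_STS_RESOURCE and requeued never drop their reference, so
         * this wait never finishes, which in turn keeps (1) stuck on
         * the flush.
         */
        blk_mq_update_nr_hw_queues(set, nr_io_queues);
}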
The 2nd one is odd in my mind, as the update_nr_hw_queues is
something done while connecting, and I wouldn't think the reset flush
should have to wait for a reconnect as well. Although not the fix, I'd
recommend not calling nvme_tcp_setup_ctrl() from reset_work and instead
having it schedule nvme_tcp_reconnect_ctrl_work().
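Something like the following shape is what I have in mind (rough,
untested sketch; names follow nvme-tcp as I recall them, so treat them
as approximate):

static void example_tcp_reset_ctrl_work(struct work_struct *work)
{
        struct nvme_ctrl *ctrl =
                container_of(work, struct nvme_ctrl, reset_work);

        nvme_stop_ctrl(ctrl);
        nvme_tcp_teardown_ctrl(ctrl, false);

        if (!nvme_change_ctrl_state(ctrl, NVME_CTRL_CONNECTING)) {
                /* state change failure means delete/teardown is in progress */
                return;
        }

        /*
         * Instead of calling nvme_tcp_setup_ctrl() here (and making the
         * reset flush wait out a full reconnect), hand reconnection off
         * to the existing reconnect worker.
         */
        queue_delayed_work(nvme_wq, &to_tcp_ctrl(ctrl)->connect_work, 0);
}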
As TCP did change state to Connecting before making the
update_nr_hw_queues call, yet we didn't change the behavior of
rescheduling if Connecting and !q->live, what did the patch actually do
to help? The patch only changed the queuing behavior for Connecting
and q->live. So whether or not we hit it is dependent on the timing of
when a new I/O is received vs the call to update_nr_hw_queues. The
patch didn't help this.
-- james