[Bug Report] PCIe errinject and hot-unplug causes nvme driver hang
Keith Busch
kbusch at kernel.org
Wed Apr 24 10:36:02 PDT 2024
On Tue, Apr 23, 2024 at 03:22:46PM +0530, Nilay Shroff wrote:
> >
> I tested the above patch, however, it doesn't help to solve the issue.
> I tested it for two cases listed below:
>
> 1. Platform which doesn't support pci-error-recovery:
> -----------------------------------------------------
> On this platform, when nvme_timeout() is invoked, it falls through
> nvme_should_reset()
> -> nvme_warn_reset()
> -> goto disable
>
> When nvme_timeout() jumps to the disable label, it tries to set the
> controller state to RESETTING, but that fails because the (logical)
> hot-unplug/nvme_remove() of the disk has already started on another
> thread, so the controller state has already changed to
> DELETING/DELETING_NOIO. As nvme_timeout() couldn't set the controller
> state to RESETTING, it returns BLK_EH_DONE. In summary, because
> nvme_timeout() couldn't cancel the pending IO, hot-unplug/nvme_remove()
> can't make forward progress and keeps waiting for the request queue to
> be frozen.
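>
> For reference, here is a minimal paraphrase of the relevant transition
> rules in nvme_change_ctrl_state() (drivers/nvme/host/core.c; the exact
> code may differ across kernel versions), showing why the move to
> RESETTING fails once DELETING/DELETING_NOIO has been entered:
>
> 	case NVME_CTRL_RESETTING:
> 		switch (old_state) {
> 		case NVME_CTRL_NEW:
> 		case NVME_CTRL_LIVE:
> 			changed = true;
> 			fallthrough;
> 		default:	/* DELETING/DELETING_NOIO: no change */
> 			break;
> 		}
> 		break;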
>
> 2. Platform supporting pci-error-recovery:
> ------------------------------------------
> Similarly, on this platform, as explained for the above case, when
> nvme_timeout() is invoked it falls through nvme_should_reset()
> -> nvme_warn_reset() -> goto disable. In this case as well,
> nvme_timeout() returns BLK_EH_DONE. Please note that though this
> platform supports pci-error-recovery, we never get to
> nvme_error_detected() because the pci-error-recovery thread is blocked
> acquiring the mutex "pci_lock_rescan_remove". That mutex is taken by the
> hot-unplug thread before it invokes nvme_remove(), and nvme_remove()
> is currently waiting for the request queue to be frozen. For reference,
> I have already captured the task hang traces in a previous email of this
> thread, where we can observe these hangs (for both the pci-error-recovery
> thread and hot-unplug/nvme_remove()).
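>
> To summarize the circular dependency described above (a sketch of the
> hang, not code):
>
>   hot-unplug thread                     pci-error-recovery thread
>   -----------------                     -------------------------
>   takes pci_lock_rescan_remove          blocks on pci_lock_rescan_remove,
>   nvme_remove()                         so nvme_error_detected() never
>     waits for request queue freeze,     runs and can't recover the IO
>     i.e. for in-flight IO that
>     nvme_timeout() won't cancel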
>
> I understand that we don't want to cancel pending IO from nvme_remove()
> unconditionally: if the disk is not physically hot-unplugged, then we
> still want to wait for the in-flight IO to finish. Also, looking through
> the above cases, I think that nvme_timeout() might be the code path
> from which we want to cancel in-flight/pending IO if the controller is
> in a terminal state (i.e. DELETING or DELETING_NOIO). Keeping this idea
> in mind, I have worked out the patch below:
>
> diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
> index 8e0bb9692685..e45a54d84649 100644
> --- a/drivers/nvme/host/pci.c
> +++ b/drivers/nvme/host/pci.c
> @@ -1286,6 +1286,9 @@ static enum blk_eh_timer_return nvme_timeout(struct request *req)
>  	u32 csts = readl(dev->bar + NVME_REG_CSTS);
>  	u8 opcode;
>
> +	if (nvme_state_terminal(&dev->ctrl))
> +		goto disable;
> +
>  	/* If PCI error recovery process is happening, we cannot reset or
>  	 * the recovery mechanism will surely fail.
>  	 */
> @@ -1390,8 +1393,13 @@ static enum blk_eh_timer_return nvme_timeout(struct request *req)
>  		return BLK_EH_RESET_TIMER;
>
>  disable:
> -	if (!nvme_change_ctrl_state(&dev->ctrl, NVME_CTRL_RESETTING))
> +	if (!nvme_change_ctrl_state(&dev->ctrl, NVME_CTRL_RESETTING)) {
> +		if (nvme_state_terminal(&dev->ctrl)) {
> +			nvme_dev_disable(dev, false);
> +			nvme_sync_queues(&dev->ctrl);
> +		}
>  		return BLK_EH_DONE;
> +	}
>
>  	nvme_dev_disable(dev, false);
>  	if (nvme_try_sched_reset(&dev->ctrl))
> I have tested the above patch against all possible cases. Please let me know
> if this looks good or if there are any further comments.
This looks okay to me. Just a couple things:
Set nvme_dev_disable's "shutdown" parameter to "true" since we're
restarting the queues again from this state.
Remove "nvme_sync_queues()". I think that would deadlock: sync_queues
waits for the timeout work to complete, but you're calling it from within
the timeout work, so it would end up waiting for itself.
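Putting those two together, I'd expect the disable path to end up
looking something like this (untested sketch):

disable:
	if (!nvme_change_ctrl_state(&dev->ctrl, NVME_CTRL_RESETTING)) {
		if (nvme_state_terminal(&dev->ctrl))
			nvme_dev_disable(dev, true);
		return BLK_EH_DONE;
	}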