[PATCH v4 08/15] nvme: Implement cross-controller reset recovery
Hannes Reinecke
hare at suse.de
Mon Mar 30 03:50:24 PDT 2026
On 3/28/26 01:43, Mohamed Khalfella wrote:
> A host that has more than one path to an NVMe subsystem typically has
> an NVMe controller associated with every path. This mostly applies to
> NVMe-oF. If one path goes down, inflight IOs on that path must not be
> retried immediately on another path, because doing so could lead to
> data corruption as described in TP4129. TP8028 defines a
> cross-controller reset (CCR) mechanism that the host can use to
> terminate IOs on the failed path via one of the remaining healthy
> paths. Only after the IOs are terminated, or after enough time has
> passed as defined by TP4129, should inflight IOs be retried on
> another path. Implement the core cross-controller reset logic shared
> by the transports.
>
> Signed-off-by: Mohamed Khalfella <mkhalfella at purestorage.com>
> ---
> drivers/nvme/host/constants.c | 1 +
> drivers/nvme/host/core.c | 145 ++++++++++++++++++++++++++++++++++
> drivers/nvme/host/nvme.h | 9 +++
> 3 files changed, 155 insertions(+)
>
> diff --git a/drivers/nvme/host/constants.c b/drivers/nvme/host/constants.c
> index dc90df9e13a2..f679efd5110e 100644
> --- a/drivers/nvme/host/constants.c
> +++ b/drivers/nvme/host/constants.c
> @@ -46,6 +46,7 @@ static const char * const nvme_admin_ops[] = {
> [nvme_admin_virtual_mgmt] = "Virtual Management",
> [nvme_admin_nvme_mi_send] = "NVMe Send MI",
> [nvme_admin_nvme_mi_recv] = "NVMe Receive MI",
> + [nvme_admin_cross_ctrl_reset] = "Cross Controller Reset",
> [nvme_admin_dbbuf] = "Doorbell Buffer Config",
> [nvme_admin_format_nvm] = "Format NVM",
> [nvme_admin_security_send] = "Security Send",
> diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
> index 824a1193bec8..5603ae36444f 100644
> --- a/drivers/nvme/host/core.c
> +++ b/drivers/nvme/host/core.c
> @@ -554,6 +554,150 @@ void nvme_cancel_admin_tagset(struct nvme_ctrl *ctrl)
> }
> EXPORT_SYMBOL_GPL(nvme_cancel_admin_tagset);
>
> +static struct nvme_ctrl *nvme_find_ctrl_ccr(struct nvme_ctrl *ictrl,
> + u32 min_cntlid)
> +{
> + struct nvme_subsystem *subsys = ictrl->subsys;
> + struct nvme_ctrl *ctrl, *sctrl = NULL;
> + unsigned long flags;
> +
> + mutex_lock(&nvme_subsystems_lock);
> + list_for_each_entry(ctrl, &subsys->ctrls, subsys_entry) {
> + if (ctrl->cntlid < min_cntlid)
> + continue;
> +
> + if (atomic_dec_if_positive(&ctrl->ccr_limit) < 0)
> + continue;
> +
> + spin_lock_irqsave(&ctrl->lock, flags);
> + if (ctrl->state != NVME_CTRL_LIVE) {
> + spin_unlock_irqrestore(&ctrl->lock, flags);
> + atomic_inc(&ctrl->ccr_limit);
> + continue;
> + }
> +
> + /*
> + * We got a good candidate source controller that is locked and
> + * LIVE. However, no guarantee ctrl will not be deleted after
> + * ctrl->lock is released. Get a ref of both ctrl and admin_q
> + * so they do not disappear until we are done with them.
> + */
> + WARN_ON_ONCE(!blk_get_queue(ctrl->admin_q));
> + nvme_get_ctrl(ctrl);
> + spin_unlock_irqrestore(&ctrl->lock, flags);
> + sctrl = ctrl;
> + break;
> + }
> + mutex_unlock(&nvme_subsystems_lock);
> + return sctrl;
> +}
> +
> +static void nvme_put_ctrl_ccr(struct nvme_ctrl *sctrl)
> +{
> + atomic_inc(&sctrl->ccr_limit);
> + blk_put_queue(sctrl->admin_q);
> + nvme_put_ctrl(sctrl);
> +}
> +
> +static int nvme_issue_wait_ccr(struct nvme_ctrl *sctrl, struct nvme_ctrl *ictrl,
> + unsigned long deadline)
> +{
> + struct nvme_ccr_entry ccr = { };
> + union nvme_result res = { 0 };
> + struct nvme_command c = { };
> + unsigned long flags, now, tmo = 0;
> + bool completed = false;
> + int ret = 0;
> + u32 result;
> +
> + init_completion(&ccr.complete);
> + ccr.ictrl = ictrl;
> +
> + spin_lock_irqsave(&sctrl->lock, flags);
> + list_add_tail(&ccr.list, &sctrl->ccr_list);
> + spin_unlock_irqrestore(&sctrl->lock, flags);
> +
> + c.ccr.opcode = nvme_admin_cross_ctrl_reset;
> + c.ccr.ciu = ictrl->ciu;
> + c.ccr.icid = cpu_to_le16(ictrl->cntlid);
> + c.ccr.cirn = cpu_to_le64(ictrl->cirn);
> + ret = __nvme_submit_sync_cmd(sctrl->admin_q, &c, &res,
> + NULL, 0, NVME_QID_ANY, 0);
> + if (ret) {
> + ret = -EIO;
> + goto out;
> + }
> +
> + result = le32_to_cpu(res.u32);
> + if (result & 0x01) /* Immediate Reset Successful */
> + goto out;
> +
> + now = jiffies;
> + if (time_before(now, deadline))
> + tmo = min_t(unsigned long,
> + secs_to_jiffies(ictrl->kato), deadline - now);
> +
> + if (!wait_for_completion_timeout(&ccr.complete, tmo)) {
> + ret = -ETIMEDOUT;
> + goto out;
> + }
> +
> + completed = true;
> +
> +out:
> + spin_lock_irqsave(&sctrl->lock, flags);
> + list_del(&ccr.list);
> + spin_unlock_irqrestore(&sctrl->lock, flags);
> + if (completed) {
> + if (ccr.ccrs == NVME_CCR_STATUS_SUCCESS)
> + return 0;
> + return -EREMOTEIO;
> + }
> + return ret;
> +}
> +
> +int nvme_fence_ctrl(struct nvme_ctrl *ictrl)
> +{
> + unsigned long deadline, timeout;
> + struct nvme_ctrl *sctrl;
> + u32 min_cntlid = 0;
> + int ret;
> +
> + timeout = nvme_fence_timeout_ms(ictrl);
> + dev_info(ictrl->device, "attempting CCR, timeout %lums\n", timeout);
> +
> + deadline = jiffies + msecs_to_jiffies(timeout);
> + while (time_is_after_jiffies(deadline)) {
> + sctrl = nvme_find_ctrl_ccr(ictrl, min_cntlid);
> + if (!sctrl) {
> + dev_dbg(ictrl->device,
> + "failed to find source controller\n");
> + return -EIO;
> + }
> +
> + ret = nvme_issue_wait_ccr(sctrl, ictrl, deadline);
> + if (!ret) {
> + dev_info(ictrl->device, "CCR succeeded using %s\n",
> + dev_name(sctrl->device));
> + nvme_put_ctrl_ccr(sctrl);
> + return 0;
> + }
> +
> + min_cntlid = sctrl->cntlid + 1;
> + nvme_put_ctrl_ccr(sctrl);
> +
> + if (ret == -EIO) /* CCR command failed */
> + continue;
> +
> + /* CCR operation failed or timed out */
> + return ret;
> + }
> +
> + dev_info(ictrl->device, "CCR operation timeout\n");
> + return -ETIMEDOUT;
> +}
Please restructure the loop.
Having a comment 'CCR operation failed or timed out',
returning a status, and then having a comment
'CCR operation timeout' _after_ the return is confusing.
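One way to restructure it might be to make the retry condition explicit
in the loop condition itself, so the timeout handling follows the loop
instead of hiding behind a return inside it. A sketch (untested, and
assuming the -EIO-means-retry-next-controller semantics should stay the
same):

```c
	do {
		sctrl = nvme_find_ctrl_ccr(ictrl, min_cntlid);
		if (!sctrl) {
			dev_dbg(ictrl->device,
				"failed to find source controller\n");
			return -EIO;
		}

		ret = nvme_issue_wait_ccr(sctrl, ictrl, deadline);
		if (!ret) {
			dev_info(ictrl->device, "CCR succeeded using %s\n",
				 dev_name(sctrl->device));
			nvme_put_ctrl_ccr(sctrl);
			return 0;
		}

		min_cntlid = sctrl->cntlid + 1;
		nvme_put_ctrl_ccr(sctrl);

		/*
		 * -EIO means the CCR command itself failed; try the next
		 * candidate controller until the deadline expires. Any
		 * other error ends the operation.
		 */
	} while (ret == -EIO && time_is_after_jiffies(deadline));

	if (ret != -EIO)
		return ret;

	dev_info(ictrl->device, "CCR operation timeout\n");
	return -ETIMEDOUT;
```

That way each exit path sits next to the comment that describes it.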
Cheers,
Hannes
--
Dr. Hannes Reinecke Kernel Storage Architect
hare at suse.de +49 911 74053 688
SUSE Software Solutions GmbH, Frankenstr. 146, 90461 Nürnberg
HRB 36809 (AG Nürnberg), GF: I. Totev, A. McDonald, W. Knoblich