[PATCH v7] nvme-fabrics: reject I/O to offline device
James Smart
james.smart at broadcom.com
Thu Jul 9 16:34:23 EDT 2020
On 7/8/2020 8:07 AM, Victor Gladkov wrote:
> Commands get stuck while the host NVMe controller (TCP or RDMA) is in
> the reconnect state. The NVMe controller enters the reconnect state
> when it loses the connection with the target. It tries to reconnect
> every 10 seconds (by default) until reconnection succeeds or the
> reconnect timeout is reached. The default reconnect timeout is 10 minutes.
>
> To avoid this long delay caused by the default timeout, we introduce a
> new session parameter, "fast_io_fail_tmo". The timeout is measured in
> seconds from the start of the controller reconnect; any command beyond
> that timeout is rejected. The new parameter value may be passed during
> 'connect'.
> The default value of -1 means no timeout (preserving current behavior).
>
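
For context, the fabrics.c side of this is elided below, but wiring in
such a connect option would look roughly like the following sketch. The
token name NVMF_OPT_FAIL_FAST_TMO and the warning text are assumptions
for illustration, not quotes from the patch body:

	/* opt_tokens[] entry for the new connect parameter */
	{ NVMF_OPT_FAIL_FAST_TMO,	"fast_io_fail_tmo=%d" },

	/* in nvmf_parse_options(), with -1 meaning "no timeout" */
	case NVMF_OPT_FAIL_FAST_TMO:
		if (match_int(args, &token)) {
			ret = -EINVAL;
			goto out;
		}
		if (token >= 0)
			pr_warn("I/O will fail on reconnect after %d sec\n",
					token);
		opts->fast_io_fail_tmo = token;
		break;

With something like that in place, userspace can pass e.g.
"fast_io_fail_tmo=30" in the options string written to /dev/nvme-fabrics.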
> We add a new controller flag, NVME_CTRL_FAILFAST_EXPIRED, and a
> corresponding delayed work item that sets that flag.
>
> When the controller enters the CONNECTING state, we schedule the
> delayed work based on the failfast timeout value. If the transition is
> out of CONNECTING, we cancel the delayed work item and make sure the
> flag is cleared. If the delayed work item expires, we set the
> "NVME_CTRL_FAILFAST_EXPIRED" flag.
>
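
A minimal sketch of how those transitions might hook into
nvme_change_ctrl_state() follows; that hunk is not shown in this
excerpt, so the placement details are assumed:

	/* after a successful state change, with old_state saved */
	if (ctrl->state == NVME_CTRL_LIVE) {
		if (old_state == NVME_CTRL_CONNECTING)
			nvme_stop_failfast_work(ctrl);
		nvme_kick_requeue_lists(ctrl);
	} else if (ctrl->state == NVME_CTRL_CONNECTING &&
	    old_state == NVME_CTRL_RESETTING) {
		nvme_start_failfast_work(ctrl);
	}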
> We also update the nvmf_fail_nonready_command() and
> nvme_available_path() functions to check the
> "NVME_CTRL_FAILFAST_EXPIRED" controller flag.
>
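
The net effect in the non-ready path is that a request which would
otherwise sit on the requeue list is failed back immediately once the
flag is set. Schematically (abbreviated, not the literal hunk):

	/* in nvmf_fail_nonready_command(), sketch */
	if (ctrl->state != NVME_CTRL_DELETING &&
	    ctrl->state != NVME_CTRL_DEAD &&
	    !test_bit(NVME_CTRL_FAILFAST_EXPIRED, &ctrl->flags) &&
	    !blk_noretry_request(rq) && !(rq->cmd_flags & REQ_NVME_MPATH))
		return BLK_STS_RESOURCE;	/* keep requeueing */

	/* otherwise complete the request with a host path error */
	nvme_req(rq)->status = NVME_SC_HOST_PATH_ERROR;
	blk_mq_start_request(rq);
	nvme_complete_rq(rq);
	return BLK_STS_OK;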
> Signed-off-by: Victor Gladkov <victor.gladkov at kioxia.com>
> Signed-off-by: Chaitanya Kulkarni <chaitanya.kulkarni at wdc.com>
> Reviewed-by: Hannes Reinecke <hare at suse.de>
>
> ...
> ---
> drivers/nvme/host/core.c | 49 ++++++++++++++++++++++++++++++++++++++++++-
> drivers/nvme/host/fabrics.c | 25 +++++++++++++++++++---
> drivers/nvme/host/fabrics.h | 5 +++++
> drivers/nvme/host/multipath.c | 5 ++++-
> drivers/nvme/host/nvme.h | 3 +++
> 5 files changed, 82 insertions(+), 5 deletions(-)
>
> diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
> index f3c037f..ca990bb 100644
> --- a/drivers/nvme/host/core.c
> +++ b/drivers/nvme/host/core.c
> @@ -137,6 +137,37 @@ int nvme_try_sched_reset(struct nvme_ctrl *ctrl)
> }
> EXPORT_SYMBOL_GPL(nvme_try_sched_reset);
>
> +static void nvme_failfast_work(struct work_struct *work)
> +{
> +	struct nvme_ctrl *ctrl = container_of(to_delayed_work(work),
> +			struct nvme_ctrl, failfast_work);
> +
> +	if (ctrl->state != NVME_CTRL_CONNECTING)
> +		return;
> +
> +	set_bit(NVME_CTRL_FAILFAST_EXPIRED, &ctrl->flags);
> +	dev_info(ctrl->device, "failfast expired\n");
> +	nvme_kick_requeue_lists(ctrl);
> +}
> +
> +static inline void nvme_start_failfast_work(struct nvme_ctrl *ctrl)
> +{
> +	if (!ctrl->opts || ctrl->opts->fast_io_fail_tmo == -1)
> +		return;
> +
> +	schedule_delayed_work(&ctrl->failfast_work,
> +			ctrl->opts->fast_io_fail_tmo * HZ);
> +}
> +
> +static inline void nvme_stop_failfast_work(struct nvme_ctrl *ctrl)
> +{
> +	if (!ctrl->opts)
> +		return;
> +
> +	cancel_delayed_work_sync(&ctrl->failfast_work);
> +	clear_bit(NVME_CTRL_FAILFAST_EXPIRED, &ctrl->flags);
> +}
> +
> int nvme_reset_ctrl(struct nvme_ctrl *ctrl)
> {
>
Looks good to me. The only nit I see is: why are we checking ctrl->opts
in the xxx_failfast_work() routines? If that's actually NULL, that's
very bad news. It should only be NULL if the controller is in the
process of being deleted, which should have terminated all these
routine sequences before then.
Otherwise...
Reviewed-by: James Smart <james.smart at broadcom.com>
-- james