[PATCH 1/3] nvme: double KA polling frequency to avoid KATO with TBKAS on

Sagi Grimberg sagi at grimberg.me
Tue Apr 18 09:03:21 PDT 2023


> With TBKAS (Traffic Based Keep Alive Support) on, the completion of
> one command can defer sending a keep alive for up to twice the delay
> between successive runs of nvme_keep_alive_work. The current delay of
> KATO / 2 thus makes it possible for a single completion to defer the
> next keep alive for up to a full KATO (Keep Alive Timeout), which can
> cause the controller to declare a keep alive timeout. The following
> trace demonstrates the issue, taking KATO = 8 for simplicity:
> 
> 1. t = 0: run nvme_keep_alive_work, no keep-alive sent
> 2. t = ε: I/O completion seen, set comp_seen = true
> 3. t = 4: run nvme_keep_alive_work, see comp_seen == true,
>            skip sending keep-alive, set comp_seen = false
> 4. t = 8: run nvme_keep_alive_work, see comp_seen == false,
>            send a keep-alive command.
> 
> Here, there is a gap of 8 - ε between the controller completing the
> I/O command and receiving the next keep alive. With ε small, that gap
> approaches KATO, so the controller is likely to declare a keep alive
> timeout.
> 
> Fix this by running nvme_keep_alive_work with a delay of KATO / 4
> whenever TBKAS is on. Going through the above trace now gives us a
> worst-case delay of 4 - ε, which is in line with the recommendation of
> sending a command every KATO / 2 in the NVMe specification.
> 
> Reported-by: Costa Sapuntzakis <costa at purestorage.com>
> Reported-by: Randy Jennings <randyj at purestorage.com>
> Signed-off-by: Uday Shankar <ushankar at purestorage.com>
> Reviewed-by: Hannes Reinecke <hare at suse.de>
> ---
>   drivers/nvme/host/core.c | 8 +++++++-
>   1 file changed, 7 insertions(+), 1 deletion(-)
> 
> diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
> index 6c1e7d6709e0..1298c7b9bffb 100644
> --- a/drivers/nvme/host/core.c
> +++ b/drivers/nvme/host/core.c
> @@ -1150,10 +1150,16 @@ EXPORT_SYMBOL_NS_GPL(nvme_passthru_end, NVME_TARGET_PASSTHRU);
>    *
>    *   The host should send Keep Alive commands at half of the Keep Alive Timeout
>    *   accounting for transport roundtrip times [..].
> + *
> + * When TBKAS is on, we need to run nvme_keep_alive_work at twice this
> + * frequency, as one command completion can postpone sending a keep alive
> + * command by up to twice the delay between runs.
>    */
>   static void nvme_queue_keep_alive_work(struct nvme_ctrl *ctrl)
>   {
> -	queue_delayed_work(nvme_wq, &ctrl->ka_work, ctrl->kato * HZ / 2);
> +	unsigned long delay = (ctrl->ctratt & NVME_CTRL_ATTR_TBKAS) ?
> +		ctrl->kato * HZ / 4 : ctrl->kato * HZ / 2;
> +	queue_delayed_work(nvme_wq, &ctrl->ka_work, delay);
>   }
>   
>   static enum rq_end_io_ret nvme_keep_alive_end_io(struct request *rq,

This looks fine to me; the only thing that is a bit concerning is that
we may now send keep-alives too frequently (the default kato is 10
seconds, so dividing by 4 gives a keep alive every 2.5 seconds).
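
To make the arithmetic concrete, here is a minimal user-space sketch
(illustrative only, not kernel code; kato, delay and eps are stand-ins
for ctrl->kato, the ka_work delay and the trace's ε):

#include <stdio.h>

/*
 * Model of the TBKAS worst case from the trace above: an I/O completes
 * at t = eps, the worker run at t = delay sees comp_seen == true and
 * skips the keep alive, so the first keep alive after the completion
 * goes out at t = 2 * delay.  The controller thus sees a command gap
 * of 2 * delay - eps.
 */
static double worst_case_gap(double delay, double eps)
{
	return 2 * delay - eps;
}

int main(void)
{
	double kato = 8.0;	/* seconds, as in the trace above */

	/* As eps -> 0 the gap approaches 2 * delay. */
	printf("delay = kato/2: gap -> %.1fs (== KATO, times out)\n",
	       worst_case_gap(kato / 2, 0));
	printf("delay = kato/4: gap -> %.1fs (== KATO/2, safe)\n",
	       worst_case_gap(kato / 4, 0));

	/* The frequency concern: with a kato of 10s, an idle TBKAS
	 * controller now gets a keep alive every 10 / 4 = 2.5s. */
	printf("kato = 10, TBKAS on: keep alive every %.1fs\n", 10.0 / 4);
	return 0;
}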


