[PATCH 1/3] nvme: double KA polling frequency to avoid KATO with TBKAS on
Hannes Reinecke
hare at suse.de
Tue Apr 18 09:48:19 PDT 2023
On 4/18/23 18:03, Sagi Grimberg wrote:
>
>> With TBKAS on, the completion of one command can defer sending a
>> keep alive for up to twice the delay between successive runs of
>> nvme_keep_alive_work. The current delay of KATO / 2 thus makes it
>> possible for one command to defer sending a keep alive for up to
>> KATO, which can result in the controller detecting a keep alive
>> timeout. The following trace demonstrates the issue, taking KATO = 8
>> for simplicity:
>>
>> 1. t = 0: run nvme_keep_alive_work, no keep-alive sent
>> 2. t = ε: I/O completion seen, set comp_seen = true
>> 3. t = 4: run nvme_keep_alive_work, see comp_seen == true,
>> skip sending keep-alive, set comp_seen = false
>> 4. t = 8: run nvme_keep_alive_work, see comp_seen == false,
>> send a keep-alive command.
>>
>> Here, there is a delay of 8 - ε between receiving a command completion
>> and sending the next command. With ε small, the controller is likely to
>> detect a keep alive timeout.
>>
>> Fix this by running nvme_keep_alive_work with a delay of KATO / 4
>> whenever TBKAS is on. Going through the above trace now gives us a
>> worst-case delay of 4 - ε, which is in line with the recommendation of
>> sending a command every KATO / 2 in the NVMe specification.
>>
>> Reported-by: Costa Sapuntzakis <costa at purestorage.com>
>> Reported-by: Randy Jennings <randyj at purestorage.com>
>> Signed-off-by: Uday Shankar <ushankar at purestorage.com>
>> Reviewed-by: Hannes Reinecke <hare at suse.de>
>> ---
>> drivers/nvme/host/core.c | 8 +++++++-
>> 1 file changed, 7 insertions(+), 1 deletion(-)
>>
>> diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
>> index 6c1e7d6709e0..1298c7b9bffb 100644
>> --- a/drivers/nvme/host/core.c
>> +++ b/drivers/nvme/host/core.c
>> @@ -1150,10 +1150,16 @@ EXPORT_SYMBOL_NS_GPL(nvme_passthru_end, NVME_TARGET_PASSTHRU);
>> *
>>  * The host should send Keep Alive commands at half of the Keep Alive Timeout
>>  * accounting for transport roundtrip times [..].
>> + *
>> + * When TBKAS is on, we need to run nvme_keep_alive_work at twice this
>> + * frequency, as one command completion can postpone sending a keep alive
>> + * command by up to twice the delay between runs.
>> */
>> static void nvme_queue_keep_alive_work(struct nvme_ctrl *ctrl)
>> {
>> - queue_delayed_work(nvme_wq, &ctrl->ka_work, ctrl->kato * HZ / 2);
>> + unsigned long delay = (ctrl->ctratt & NVME_CTRL_ATTR_TBKAS) ?
>> + ctrl->kato * HZ / 4 : ctrl->kato * HZ / 2;
>> + queue_delayed_work(nvme_wq, &ctrl->ka_work, delay);
>> }
>> static enum rq_end_io_ret nvme_keep_alive_end_io(struct request *rq,
>
> This looks fine to me, the only thing that is a bit concerning is that
> we may excessively send keep-alives too frequently (default kato is 10,
> divided by 4 that gives one every 2.5 seconds).
Well, this is with TBKAS on, so we're sending keep-alives only if there
is no other traffic on the wire. And then sending a keep-alive every
two and a half seconds is hardly excessive.
Cheers,
Hannes