[PATCHv2 0/3] nvme: start keep-alive after admin queue setup
Mark O'Donovan
shiftee at posteo.net
Thu Oct 26 09:00:08 PDT 2023
Hi Hannes,
Testing with v1 of this patchset appears to have solved the problem.
With a 5s kato the connection below would previously have timed out:
the log shows roughly 2.7s elapsing between admin queue connect and
the old keep-alive start time, almost all of it spent on per-queue
authentication.
The column to the left of ctrl/chap is milliseconds
since admin queue connect:
[ 229.079675][ T640] nvme nvme0: qid 0: authenticated
[ 229.087243][ T640] 00000169: ctrl:000000001221a9b6 <NEW KEEP-ALIVE START TIME>
[ 229.091241][ T640] nvme nvme0: creating 16 I/O queues.
[ 229.257482][ T112] 00000340: chap:00000000db978430 q01 nvme_queue_auth_work auth successful
[ 229.412266][ T112] 00000494: chap:0000000062bd76c8 q02 nvme_queue_auth_work auth successful
...
[ 231.425002][ T112] 00002507: chap:0000000051ca4f9a q15 nvme_queue_auth_work auth successful
[ 231.579549][ T112] 00002662: chap:0000000008e8cfab q16 nvme_queue_auth_work auth successful
[ 231.582339][ T640] <OLD KEEP-ALIVE START TIME>
[ 231.832703][ T112] 00002915: ctrl:000000001221a9b6 nvme_keep_alive_work sending keep-alive
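For anyone skimming the thread, a rough sketch of how I read the
series follows. It is illustrative only, not the literal diff,
although nvme_init_ctrl_finish() and nvme_start_keep_alive() are the
real helpers in drivers/nvme/host/core.c:

/*
 * Illustrative sketch, not the actual patch.  The idea as I
 * understand it: arm the keep-alive timer as soon as the admin
 * queue is up and authenticated, instead of waiting until every
 * I/O queue has been created and authenticated, so the per-queue
 * DH-HMAC-CHAP time in the log above no longer eats into KATO.
 */
int nvme_init_ctrl_finish(struct nvme_ctrl *ctrl, bool was_suspended)
{
        int ret;

        ret = nvme_init_identify(ctrl);  /* admin queue is live */
        if (ret)
                return ret;

        nvme_start_keep_alive(ctrl);     /* <NEW KEEP-ALIVE START TIME> */

        /*
         * ... remainder of controller setup, then I/O queue creation
         * and per-queue authentication, now run with keep-alive armed.
         */
        return 0;
}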
I have encountered another issue on my qemu vm: the first call to
nvme_auth_process_dhchap_challenge() after power-up takes almost 3s,
and the delay comes from crypto_alloc_tfm_node(). This patchset does
not solve that particular problem. I have submitted the patch below
for it; in my opinion both are needed:
http://lists.infradead.org/pipermail/linux-nvme/2023-October/042862.html
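For context on that ~3s delay, here is a hypothetical warm-up module
showing the shape of the problem; it is a sketch only, not the
submitted patch, and "hmac(sha256)" is just one of the hashes that
DH-HMAC-CHAP can negotiate:

#include <linux/module.h>
#include <linux/err.h>
#include <crypto/hash.h>

/*
 * Hypothetical warm-up, not the submitted patch: allocate and free
 * the HMAC transform once at module load so the first in-fabric
 * DH-HMAC-CHAP exchange does not pay the first-use allocation cost
 * of crypto_alloc_tfm_node() (presumably the one-time crypto
 * self-tests) while the keep-alive timeout is already running.
 */
static int __init nvme_auth_warmup_init(void)
{
        struct crypto_shash *tfm;

        tfm = crypto_alloc_shash("hmac(sha256)", 0, 0);
        if (IS_ERR(tfm))
                return PTR_ERR(tfm);

        crypto_free_shash(tfm);
        return 0;
}
module_init(nvme_auth_warmup_init);
MODULE_LICENSE("GPL");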
Regards,
Mark