[PATCH V3 1/8] nvme: Let the blocklayer set timeouts for requests
Daniel Wagner
dwagner at suse.de
Wed Apr 22 02:58:53 PDT 2026
On Fri, Apr 10, 2026 at 09:39:17AM +0200, Maurizio Lombardi wrote:
> From: "Heyne, Maximilian" <mheyne at amazon.de>
>
> When initializing an nvme request which is about to be sent to the block
> layer, we do not need to initialize its timeout. If it is left
> uninitialized at 0, the block layer will use the request queue's timeout
> in blk_add_timer (via nvme_start_request, which is called from
> nvme_*_queue_rq). These timeouts are set up as either NVME_IO_TIMEOUT or
> NVME_ADMIN_TIMEOUT when the request queues are created.
>
> Because the io_timeout of the IO queues can be modified via sysfs, the
> following situation can occur:
>
> 1) NVME_IO_TIMEOUT = 30 (default module parameter)
> 2) nvme1n1 is probed. IO queues default timeout is 30 s
> 3) manually change the IO timeout to 90 s
> echo 90000 > /sys/class/nvme/nvme1/nvme1n1/queue/io_timeout
> 4) Any call of __submit_sync_cmd on nvme1n1 to an IO queue will issue
> commands with the 30 s timeout instead of the intended 90 s, which
> might be more suitable for this device.
>
> Commit 470e900c8036 ("nvme: refactor nvme_alloc_request") already
> silently changed the behavior for ioctls, because it unconditionally
> overrides the request's timeout that was set in nvme_init_request. If it
> was left unset by the user of the ioctl, it will be overridden with 0,
> meaning the block layer will pick the request queue's IO timeout.
>
> Following up on that, this patch further improves the consistency of IO
> timeout usage. However, there are still uses of NVME_IO_TIMEOUT which
> could be inconsistent with what is set in the device's request_queue by
> the user.
>
> Reviewed-by: Mohamed Khalfella <mkhalfella at purestorage.com>
> Signed-off-by: Maximilian Heyne <mheyne at amazon.de>
Maurizio, you need to add your SoB too.
Besides this,
Reviewed-by: Daniel Wagner <dwagner at suse.de>