[PATCH v1] blk-mq: add one blk_mq_req_flags_t type to support mq ctx fallback
Sagi Grimberg
sagi at grimberg.me
Mon Oct 21 00:05:34 PDT 2024
On 21/10/2024 4:39, Ming Lei wrote:
> On Sun, Oct 20, 2024 at 10:40:41PM +0800, zhuxiaohui wrote:
>> From: Zhu Xiaohui <zhuxiaohui.400 at bytedance.com>
>>
>> It is observed that an nvme connect to an NVMe over Fabrics target
>> will always fail when 'nohz_full' is set.
>>
>> Commit a46c27026da1 ("blk-mq: don't schedule block kworker on
>> isolated CPUs") clears isolated CPUs from hctx->cpumask, and when
>> nvme connects to a remote target, it may fail on this stack:
>>
>> blk_mq_alloc_request_hctx+1
>> __nvme_submit_sync_cmd+106
>> nvmf_connect_io_queue+181
>> nvme_tcp_start_queue+293
>> nvme_tcp_setup_ctrl+948
>> nvme_tcp_create_ctrl+735
>> nvmf_dev_write+532
>> vfs_write+237
>> ksys_write+107
>> do_syscall_64+128
>> entry_SYSCALL_64_after_hwframe+118
>>
>> because the given blk_mq_hw_ctx->cpumask is cleared, leaving no
>> available blk_mq_ctx on the hw queue.
>>
>> This patch introduces a new blk_mq_req_flags_t flag 'BLK_MQ_REQ_ARB_MQ'
>> as well as a nvme_submit_flags_t flag 'NVME_SUBMIT_ARB_MQ', which
>> indicate that the block layer can fall back to a blk_mq_ctx whose CPU
>> is not isolated.
> blk_mq_alloc_request_hctx()
> ...
> cpu = cpumask_first_and(data.hctx->cpumask, cpu_online_mask);
> ...
>
> This can happen without CPU isolation too, such as when the hctx has
> no online CPUs; both cases are actually the same from this viewpoint.
>
> It is a long-standing problem for nvme-fc.
What nvmf uses blk_mq_alloc_request_hctx() for is not important. It
just needs a tag from that hctx; the request execution runs wherever
blk_mq_alloc_request_hctx() is running.