[PATCH 2/2] nvme-multipath: don't block on blk_queue_enter of the underlying device
Hannes Reinecke
hare at suse.de
Tue Mar 23 07:28:41 GMT 2021
On 3/23/21 3:57 AM, Sagi Grimberg wrote:
>
>> When we reset/teardown a controller, we must freeze and quiesce the
>> namespaces request queues to make sure that we safely stop inflight I/O
>> submissions. Freeze is mandatory because if our hctx map changed between
>> reconnects, blk_mq_update_nr_hw_queues will immediately attempt to freeze
>> the queue, and if it still has pending submissions (that are still
>> quiesced) it will hang.
>>
>> However, by freezing the namespaces request queues, and only unfreezing
>> them when we successfully reconnect, inflight submissions that are
>> running concurrently can now block while holding the nshead srcu until
>> either we successfully reconnect or ctrl_loss_tmo expires (or the user
>> explicitly disconnects).
>>
>> This caused a deadlock when a different controller (different path on the
>> same subsystem) became live (i.e. optimized/non-optimized). This is
>> because nvme_mpath_set_live needs to synchronize the nshead srcu before
>> requeueing I/O in order to make sure that current_path is visible to
>> future (re-)submissions. However, the srcu lock is taken by a blocked
>> submission on a frozen request queue, and we have a deadlock.
>>
>> In order to fix this, use the blk_mq_submit_bio_direct API to submit
>> the bio to the low-level driver, which does not block on the queue
>> freeze but instead allows nvme-multipath to pick another path or queue
>> up the bio.
>
> Almost...
>
> This still has the same issue, but instead of blocking on
> blk_queue_enter() it is now blocked on blk_mq_get_tag():
> --
> __schedule+0x22b/0x6e0
> schedule+0x46/0xb0
> io_schedule+0x42/0x70
> blk_mq_get_tag+0x11d/0x270
> ? blk_bio_segment_split+0x235/0x2a0
> ? finish_wait+0x80/0x80
> __blk_mq_alloc_request+0x65/0xe0
> blk_mq_submit_bio+0x144/0x500
> blk_mq_submit_bio_direct+0x78/0xa0
> nvme_ns_head_submit_bio+0xc3/0x2f0 [nvme_core]
> __submit_bio_noacct+0xcf/0x2e0
> __blkdev_direct_IO+0x413/0x440
> ? __io_complete_rw.constprop.0+0x150/0x150
> generic_file_read_iter+0x92/0x160
> io_iter_do_read+0x1a/0x40
> io_read+0xc5/0x350
> ? common_interrupt+0x14/0xa0
> ? update_load_avg+0x7a/0x5e0
> io_issue_sqe+0xa28/0x1020
> ? lock_timer_base+0x61/0x80
> io_wq_submit_work+0xaa/0x120
> io_worker_handle_work+0x121/0x330
> io_wqe_worker+0xb6/0x190
> ? io_worker_handle_work+0x330/0x330
> ret_from_fork+0x22/0x30
> --
>
> --
> ? usleep_range+0x80/0x80
> __schedule+0x22b/0x6e0
> ? usleep_range+0x80/0x80
> schedule+0x46/0xb0
> schedule_timeout+0xff/0x140
> ? del_timer_sync+0x67/0xb0
> ? __prepare_to_swait+0x4b/0x70
> __wait_for_common+0xb3/0x160
> __synchronize_srcu.part.0+0x75/0xe0
> ? __bpf_trace_rcu_utilization+0x10/0x10
> nvme_mpath_set_live+0x61/0x130 [nvme_core]
> nvme_update_ana_state+0xd7/0x100 [nvme_core]
> nvme_parse_ana_log+0xa5/0x160 [nvme_core]
> ? nvme_mpath_set_live+0x130/0x130 [nvme_core]
> nvme_read_ana_log+0x7b/0xe0 [nvme_core]
> process_one_work+0x1e6/0x380
> worker_thread+0x49/0x300
> --
>
Actually, I had been playing around with marking the entire bio as
'NOWAIT'; that would avoid the tag stall, too:
@@ -313,7 +316,7 @@ blk_qc_t nvme_ns_head_submit_bio(struct bio *bio)
 	ns = nvme_find_path(head);
 	if (likely(ns)) {
 		bio_set_dev(bio, ns->disk->part0);
-		bio->bi_opf |= REQ_NVME_MPATH;
+		bio->bi_opf |= REQ_NVME_MPATH | REQ_NOWAIT;
 		trace_block_bio_remap(bio, disk_devt(ns->head->disk),
 				      bio->bi_iter.bi_sector);
 		ret = submit_bio_noacct(bio);
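
For reference, the reason this sidesteps the blk_mq_get_tag() stall is
that a nowait bio which cannot get a tag is not put to sleep on the tag
waitqueue but failed immediately with BLK_STS_AGAIN. Roughly like this
(just a sketch of the allocation path, not the actual block layer code;
example_get_request is made up):

static struct request *example_get_request(struct request_queue *q,
					   struct bio *bio)
{
	blk_mq_req_flags_t flags = 0;
	struct request *rq;

	if (bio->bi_opf & REQ_NOWAIT)
		flags |= BLK_MQ_REQ_NOWAIT;	/* don't sleep waiting for a tag */

	rq = blk_mq_alloc_request(q, bio_op(bio), flags);
	if (IS_ERR(rq)) {
		/* nowait bio: complete it with BLK_STS_AGAIN straight away */
		bio_wouldblock_error(bio);
		return NULL;
	}
	return rq;
}

So the stall goes away, but the failure is now visible to whoever
submitted the bio.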
My only worry here is that we might incur spurious failures under high
load; but then this is not necessarily a bad thing.
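If those spurious failures do turn out to hurt, one way out might be to
catch the BLK_STS_AGAIN completion in the ns_head code and put the bio
back on the requeue list instead of failing it up to a caller that never
asked for nowait semantics. Purely as an illustration (the helper
nvme_ns_head_retry_nowait is made up, and a real version would have to
be wired into the completion path and probably rate-limit the retries):

/*
 * Hypothetical: absorb a nowait failure by putting the bio back on the
 * ns_head requeue list so it is retried later, instead of completing
 * it with -EAGAIN towards the original submitter.
 */
static bool nvme_ns_head_retry_nowait(struct nvme_ns_head *head,
				      struct bio *bio)
{
	unsigned long flags;

	if (bio->bi_status != BLK_STS_AGAIN)
		return false;

	/* clear the nowait failure and park the bio for a retry */
	bio->bi_status = BLK_STS_OK;

	spin_lock_irqsave(&head->requeue_lock, flags);
	bio_list_add(&head->requeue_list, bio);
	spin_unlock_irqrestore(&head->requeue_lock, flags);

	/* kick the existing requeue machinery to resubmit it */
	schedule_work(&head->requeue_work);
	return true;
}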
Cheers,
Hannes
--
Dr. Hannes Reinecke Kernel Storage Architect
hare at suse.de +49 911 74053 688
SUSE Software Solutions GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), Geschäftsführer: Felix Imendörffer