[PATCH V4 0/3] blk-mq: fix blk_mq_alloc_request_hctx
Ming Lei
ming.lei at redhat.com
Thu Jul 15 05:08:41 PDT 2021
Hi,
blk_mq_alloc_request_hctx() is used by NVMe fc/rdma/tcp/loop to connect
io queues, and the sw ctx is chosen as the 1st online cpu in hctx->cpumask.
However, all cpus in hctx->cpumask may be offline.
This usage model isn't well supported by blk-mq, which assumes that
allocation is always done on an online CPU in hctx->cpumask. That
assumption comes from managed irq, which also requires blk-mq to drain
inflight requests in this hctx when the last cpu in hctx->cpumask goes
offline.
However, NVMe fc/rdma/tcp/loop don't use managed irq, so they should be
allowed to allocate a request even when the specified hctx is inactive
(all cpus in hctx->cpumask are offline). Fix blk_mq_alloc_request_hctx()
by allowing request allocation when all CPUs of the hctx are offline.
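
For reference, below is a minimal sketch of the idea in the allocation
path. It is not the actual diff, and the ->use_managed_irq name is only
illustrative of the per-queue-map flag introduced by patch 2:

	/*
	 * Sketch only: in blk_mq_alloc_request_hctx(), keep failing the
	 * allocation when no CPU in hctx->cpumask is online *and* the
	 * queue map uses managed irq; otherwise fall back to any CPU in
	 * hctx->cpumask, even an offline one, since no managed irq will
	 * be shut down when that CPU goes offline.
	 */
	cpu = cpumask_first_and(data.hctx->cpumask, cpu_online_mask);
	if (cpu >= nr_cpu_ids) {
		if (q->tag_set->map[HCTX_TYPE_DEFAULT].use_managed_irq)
			goto out_queue_exit;		/* current behaviour */
		cpu = cpumask_first(data.hctx->cpumask);	/* hctx inactive */
	}
	data.ctx = __blk_mq_get_ctx(q, cpu);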
V4:
- remove the patches that cleaned up the queue map helpers
- take Christoph's suggestion to add a field to 'struct device' to
describe whether managed irqs are allocated for the device
V3:
- clean up the map queues helpers, and remove the pci/virtio/rdma queue
helpers
- store the managed irq info in the qmap
V2:
- use the BLK_MQ_F_MANAGED_IRQ flag
- pass BLK_MQ_F_MANAGED_IRQ explicitly from the driver
- kill BLK_MQ_F_STACKING
Ming Lei (3):
driver core: mark device as irq affinity managed if any irq is managed
blk-mq: mark if one queue map uses managed irq
blk-mq: don't deactivate hctx if managed irq isn't used
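
As a rough illustration of what patches 1 and 2 do (the field names
'irq_affinity_managed' and 'use_managed_irq' below are placeholders;
the real names are defined by the patches themselves):

	/* drivers/pci/msi.c: record that managed irqs were allocated */
	if (affd)
		dev->dev.irq_affinity_managed = true;

	/* block/blk-mq-pci.c: propagate the info into the queue map */
	qmap->use_managed_irq = pdev->dev.irq_affinity_managed;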
block/blk-mq-pci.c | 1 +
block/blk-mq-rdma.c | 3 +++
block/blk-mq-virtio.c | 1 +
block/blk-mq.c | 27 +++++++++++++++++----------
block/blk-mq.h | 8 ++++++++
drivers/base/platform.c | 7 +++++++
drivers/pci/msi.c | 3 +++
include/linux/blk-mq.h | 3 ++-
include/linux/device.h | 1 +
9 files changed, 43 insertions(+), 11 deletions(-)
--
2.31.1