[PATCH 1/6] blk-mq: introduce blk_mq_hctx_map_queues

Christoph Hellwig <hch at lst.de>
Sun Sep 15 23:48:46 PDT 2024


On Fri, Sep 13, 2024 at 11:26:54AM -0500, Bjorn Helgaas wrote:
> > +const struct cpumask *pci_get_blk_mq_affinity(void *dev_data, int offset,
> > +					      int queue)
> > +{
> > +	struct pci_dev *pdev = dev_data;
> > +
> > +	return pci_irq_get_affinity(pdev, offset + queue);
> > +}
> > +EXPORT_SYMBOL_GPL(pci_get_blk_mq_affinity);
> > +#endif
> 
> IMO this doesn't really fit well in drivers/pci since it doesn't add
> any PCI-specific knowledge or require any PCI core internals, and the
> parameters are blk-specific.  I don't object to the code, but it seems
> like it could go somewhere in block/?

That's where it, or rather its current equivalent, lives today, which is
a bit silly.  That being said, I suspect the nicest thing would be to
offer a real irq_get_affinity interface at the bus level.

e.g. add something like:


	const struct cpumask *(*irq_get_affinity)(struct device *dev,
			unsigned int irq_vec);

to struct bus_type so that any layer can just query the irq affinity
for buses that support it without extra glue code.
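
To make that concrete, here is a rough, untested sketch of both sides.
The names pci_device_irq_get_affinity and dev_irq_get_affinity are made
up for illustration; pci_irq_get_affinity() and to_pci_dev() are the
existing interfaces, and the .irq_get_affinity member is the proposed
addition to struct bus_type:

	#include <linux/device.h>
	#include <linux/pci.h>

	/*
	 * PCI side: implement the new bus_type callback by forwarding
	 * to the existing per-vector affinity lookup.
	 */
	static const struct cpumask *
	pci_device_irq_get_affinity(struct device *dev, unsigned int irq_vec)
	{
		return pci_irq_get_affinity(to_pci_dev(dev), irq_vec);
	}

	/* wired up as .irq_get_affinity in pci_bus_type */

	/*
	 * Consumer side (e.g. blk-mq): query the affinity through the
	 * device without knowing anything about the underlying bus.
	 */
	static const struct cpumask *
	dev_irq_get_affinity(struct device *dev, unsigned int irq_vec)
	{
		if (!dev->bus || !dev->bus->irq_get_affinity)
			return NULL;
		return dev->bus->irq_get_affinity(dev, irq_vec);
	}

With something like that in place the pci_get_blk_mq_affinity() glue
above goes away entirely, and the queue mapping code only ever deals
with a struct device.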


