[PATCH v2 1/6] blk-mq: introduce blk_mq_hctx_map_queues
Daniel Wagner
dwagner at suse.de
Mon Nov 11 23:08:30 PST 2024
On Tue, Nov 12, 2024 at 05:47:36AM +0100, Christoph Hellwig wrote:
> This seems to mix up a few different things:
>
> 1) adding a new bus operation
> 2) implementations of that operation for PCI and virtio
> 3) a block layer consumer of the operation
>
> all these really should be separate per-subsystem patches.
>
> You'll also need to Cc the driver model maintainers.
Ok, will do.
> > +void blk_mq_hctx_map_queues(struct blk_mq_queue_map *qmap,
> > + struct device *dev, unsigned int offset,
> > + get_queue_affinity_fn *get_irq_affinity)
> > +{
> > + const struct cpumask *mask = NULL;
> > + unsigned int queue, cpu;
> > +
> > + for (queue = 0; queue < qmap->nr_queues; queue++) {
> > + if (get_irq_affinity)
> > + mask = get_irq_affinity(dev, queue + offset);
> > + else if (dev->bus->irq_get_affinity)
> > + mask = dev->bus->irq_get_affinity(dev, queue + offset);
> > +
> > + if (!mask)
> > + goto fallback;
> > +
> > + for_each_cpu(cpu, mask)
> > + qmap->mq_map[cpu] = qmap->queue_offset + queue;
> > + }
>
> This does different things with a NULL argument vs not. Please split it
> into two separate helpers. The non-NULL case is only used in hisi_sas,
> so it might be worth just open coding it there as well.
I'd like to avoid open coding it, because that would make the follow-up
patches for isolated CPU support more complex. Having a central place
for all the mask operations keeps things a bit more streamlined. But
then I could open code that part as well.
> > +static const struct cpumask *pci_device_irq_get_affinity(struct device *dev,
> > + unsigned int irq_vec)
> > +{
> > + struct pci_dev *pdev = to_pci_dev(dev);
> > +
> > + return pci_irq_get_affinity(pdev, irq_vec);
>
> Nit: this could be shortened to:
>
> return pci_irq_get_affinity(to_pci_dev(dev), irq_vec);
Ok.
Thanks,
Daniel