[PATCH v4 05/10] blk-mq: introduce blk_mq_hctx_map_queues
Ming Lei
ming.lei at redhat.com
Thu Nov 14 01:12:22 PST 2024
On Thu, Nov 14, 2024 at 08:54:46AM +0100, Daniel Wagner wrote:
> On Thu, Nov 14, 2024 at 09:58:25AM +0800, Ming Lei wrote:
> > > +void blk_mq_hctx_map_queues(struct blk_mq_queue_map *qmap,
> >
> > Some drivers may not know hctx at all, maybe blk_mq_map_hw_queues()?
>
> I am not really attach to the name, I am fine with renaming it to
> blk_mq_map_hw_queues.
>
> > > + if (dev->driver->irq_get_affinity)
> > > + irq_get_affinity = dev->driver->irq_get_affinity;
> > > + else if (dev->bus->irq_get_affinity)
> > > + irq_get_affinity = dev->bus->irq_get_affinity;
> >
> > It is one generic API, I think both 'dev->driver' and
> > 'dev->bus' should be validated here.
>
> What do you have in mind here if we get two masks? What should the
> operation be: AND, OR?
IMO you just need one callback to return the mask.
I feel the driver callback should get higher priority, but in the probe()
example, call_driver_probe() actually tries bus->probe() first.

Anyway it doesn't look like an issue for this patchset, since only
hisi_sas_v2_driver (a platform_driver) defines ->irq_get_affinity(), and
platform_bus_type doesn't have the callback.
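To illustrate the point about validating both pointers and picking exactly one
callback, here is a hedged userspace sketch (not the actual kernel patch): the
struct layouts are simplified stand-ins for the real struct device /
device_driver / bus_type, and pick_irq_get_affinity() is a hypothetical helper
name. The idea is that dev->driver may be NULL (device not bound), so both
levels are NULL-checked and the driver callback wins when both exist.

```c
#include <assert.h>
#include <stddef.h>

struct cpumask;			/* opaque stand-in */

typedef const struct cpumask *(*irq_get_affinity_fn)(void *dev,
						     unsigned int irq);

/* Simplified stand-ins for the real kernel structures. */
struct bus_type { irq_get_affinity_fn irq_get_affinity; };
struct device_driver { irq_get_affinity_fn irq_get_affinity; };
struct device {
	struct bus_type *bus;
	struct device_driver *driver;
};

/* Example callbacks so the selection can be demonstrated. */
static const struct cpumask *example_drv_cb(void *dev, unsigned int irq)
{
	return NULL;
}

static const struct cpumask *example_bus_cb(void *dev, unsigned int irq)
{
	return NULL;
}

static struct bus_type example_bus = { .irq_get_affinity = example_bus_cb };
static struct device_driver example_drv = {
	.irq_get_affinity = example_drv_cb,
};

/*
 * Pick exactly one callback: driver first, then bus. Both dev->driver
 * and dev->bus are validated before dereferencing, so a device without
 * a bound driver (or a bus) is safe.
 */
static irq_get_affinity_fn pick_irq_get_affinity(struct device *dev)
{
	if (dev->driver && dev->driver->irq_get_affinity)
		return dev->driver->irq_get_affinity;
	if (dev->bus && dev->bus->irq_get_affinity)
		return dev->bus->irq_get_affinity;
	return NULL;
}
```

With this shape the caller only ever sees a single mask, which matches the
"just one callback returns the mask" suggestion above.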
>
> This brings up another topic I left out in this series.
> blk_mq_map_queues does almost the same thing except it starts with the
> mask returned by group_cpus_evenely. If we figure out how this could be
> combined in a sane way it's possible to cleanup even a bit more. A bunch
> of drivers do
>
> if (i != HCTX_TYPE_POLL && offset)
> blk_mq_hctx_map_queues(map, dev->dev, offset);
> else
> blk_mq_map_queues(map);
>
> IMO it would be nice just to have one blk_mq_map_queues() which handles
> this correctly for both cases.
I guess it is doable: the driver would just set up tag_set->map[], then call
one generic map_queues API to do everything?
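To sketch what that single entry point might look like, here is a hedged
userspace model: struct queue_map, map_queues(), and both mapping helpers are
hypothetical stand-ins, with the round-robin path standing in for the
group_cpus_evenly()-based blk_mq_map_queues() and the chunked path standing in
for an irq-affinity-driven mapping. The point is only that the
"if (i != HCTX_TYPE_POLL && offset) ... else ..." pattern moves out of the
drivers and into one dispatcher.

```c
#include <assert.h>
#include <stdbool.h>

/* Simplified stand-in for struct blk_mq_queue_map. */
struct queue_map {
	unsigned int nr_cpus;
	unsigned int nr_queues;
	unsigned int mq_map[64];	/* cpu -> hw queue index */
};

/*
 * Fallback spread, standing in for the group_cpus_evenly()-based path
 * of blk_mq_map_queues(): assign CPUs to queues round-robin.
 */
static void map_queues_fallback(struct queue_map *qmap)
{
	for (unsigned int cpu = 0; cpu < qmap->nr_cpus; cpu++)
		qmap->mq_map[cpu] = cpu % qmap->nr_queues;
}

/*
 * Hypothetical affinity-based path: each hw queue owns a contiguous
 * CPU chunk, mimicking what walking a managed-irq affinity mask would
 * produce for such a device.
 */
static void map_queues_by_affinity(struct queue_map *qmap)
{
	unsigned int per_q = (qmap->nr_cpus + qmap->nr_queues - 1) /
			     qmap->nr_queues;

	for (unsigned int cpu = 0; cpu < qmap->nr_cpus; cpu++)
		qmap->mq_map[cpu] = cpu / per_q;
}

/*
 * The single entry point drivers would call: it decides internally
 * whether an affinity-driven mapping applies, so every driver-side
 * HCTX_TYPE_POLL/offset branch disappears.
 */
static void map_queues(struct queue_map *qmap, bool have_affinity)
{
	if (have_affinity)
		map_queues_by_affinity(qmap);
	else
		map_queues_fallback(qmap);
}
```

In the real series the "have_affinity" decision would presumably come from the
map type and the device's irq_get_affinity callback rather than a flag, but
the dispatch shape is the same.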
Thanks,
Ming
More information about the Linux-nvme
mailing list