[PATCH 0/3] blk-mq & nvme: introduce .map_changed

Ming Lei tom.leiming at gmail.com
Tue Sep 29 17:08:07 PDT 2015


On Wed, Sep 30, 2015 at 6:45 AM, Keith Busch <keith.busch at intel.com> wrote:
> On Tue, 29 Sep 2015, Ming Lei wrote:
>>
>> Yes, I thought of that before, but it has the following cons:
>>
>> - some drivers/devices may need a different IRQ affinity policy, such as
>>   virtio devices, which have their own affinity-setting handler (see
>>   virtqueue_set_affinity()),
>
>
> That's not a very good example to support your cause; virtio_scsi's use
> is a perfect example of one that would benefit from letting blk-mq
> handle affinity. virtio_scsi sets affinity only when there is a 1:1
> mapping of CPUs to queues, but this driver doesn't know the mapping
> that blk-mq used, creating a potentially less than optimal mapping.

The 1:1 mapping was introduced before blk-mq, and that doesn't mean we
have to do the same for blk-mq.

Actually, I mean that virtio-scsi just lets the 1st CPU of the cpumask handle
the virt-queue's irq, instead of all CPUs mapped to the hw queue (virt-queue).
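
To make that concrete, here is a rough, untested sketch (not part of this
series; the helper name and the assumption that virt-queues map 1:1 to hw
queues are mine) of how a driver could derive the virt-queue irq affinity
from the blk-mq mapping; the effect today is that only the first CPU of the
mask services the irq:

	#include <linux/blk-mq.h>
	#include <linux/virtio_config.h>

	/* hypothetical helper, for illustration only */
	static void set_vq_affinity_from_blk_mq(struct request_queue *q,
						struct virtqueue **vqs)
	{
		struct blk_mq_hw_ctx *hctx;
		unsigned int i;

		queue_for_each_hw_ctx(q, hctx, i) {
			/* pin the irq to the first CPU of the hw queue's mask */
			int cpu = cpumask_first(hctx->cpumask);

			virtqueue_set_affinity(vqs[i], cpu);
		}
	}

A blk-mq driven policy could instead spread the irq over the whole
hctx->cpumask, which is exactly the information the driver doesn't see today.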

>
>> - the block core has to get the irq vector information, which has to be
>>   set up/finalized before blk-mq uses it for setting irq affinity; for
>>   example, in the case of NVMe's admin queue, the vector can be changed
>>   after the admin queue's initialization.
>
>
> Why do you want to put a hint on the admin queue's irq?

No, I don't want to; it was just an example. I mean that other drivers/devices
may have this kind of situation too.
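
Just to illustrate the kind of staleness I mean (a hypothetical helper, not
anything from this series): if blk-mq applied an affinity hint based on a
vector it captured when the hw queue was set up, the hint would land on the
wrong vector once the driver re-registers the interrupt, as NVMe does for its
admin queue when it switches over to MSI-X:

	#include <linux/blk-mq.h>
	#include <linux/interrupt.h>

	/* hypothetical: "vector" was recorded at hw queue init time */
	static void example_apply_affinity_hint(struct blk_mq_hw_ctx *hctx,
						unsigned int vector)
	{
		/*
		 * If the driver has since freed and re-requested the queue's
		 * irq (the NVMe admin queue does this when MSI-X is enabled),
		 * this hint is applied to a stale vector.
		 */
		irq_set_affinity_hint(vector, hctx->cpumask);
	}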

-- 
Ming Lei


