[Ksummit-discuss] [TECH TOPIC] IRQ affinity

Jens Axboe axboe at kernel.dk
Wed Jul 15 10:25:55 PDT 2015


On 07/15/2015 11:19 AM, Keith Busch wrote:
> On Wed, 15 Jul 2015, Bart Van Assche wrote:
>> * With blk-mq and scsi-mq, optimal performance can only be achieved if
>>  the relationship between an MSI-X vector and its NUMA node does not
>>  change over time. This is necessary to allow a blk-mq/scsi-mq driver
>>  to ensure that interrupts are processed on the same NUMA node as the
>>  one on which the data structures for a communication channel have
>>  been allocated. However, today there is no API that allows
>>  blk-mq/scsi-mq drivers and irqbalance to exchange information
>>  about the relationship between MSI-X vector ranges and NUMA nodes.
>
> We could have low-level drivers provide blk-mq with the controller's irq
> associated with a particular h/w context, and the block layer could then
> expose the context's cpumask to irqbalance via the smp affinity hint.
>
> The nvme driver already uses the hwctx cpumask to set hints, but this
> doesn't seem like it should be a driver responsibility. It currently
> doesn't work correctly across CPU hotplug anyway, since blk-mq can
> rebalance the h/w contexts without syncing with the low-level driver.
>
> If we can add this to blk-mq, one additional case to consider is when
> the same interrupt vector is used with multiple h/w contexts. Blk-mq's
> cpu assignment needs to be aware of this so that a single vector is not
> shared across NUMA nodes.

Exactly. I may have promised to do just that at the last LSF/MM
conference; I just haven't done it yet. The point is to share the mask,
and ideally I'd like to take it all the way so that the driver just asks
for a number of vectors through a nice API that takes care of all of
this; the sketches below show roughly what I mean. There's a lot of
duplicated code in drivers for this today, and it's a mess.
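
To spell out why the stability Bart is after matters, the allocation
side in any mq driver looks roughly like this. The helper below is made
up purely for illustration; cpu_to_node(), cpumask_first() and
kzalloc_node() are the real interfaces. Per-queue data lives on one
node, and that only pays off if the vector's interrupts keep landing on
that node:

#include <linux/blk-mq.h>
#include <linux/slab.h>
#include <linux/topology.h>

/*
 * Hypothetical helper, for illustration only: place per-queue data on
 * the node that the hw context's CPUs belong to.
 */
static void *alloc_hctx_data(struct blk_mq_hw_ctx *hctx, size_t size)
{
	int node = cpu_to_node(cpumask_first(hctx->cpumask));

	return kzalloc_node(size, GFP_KERNEL, node);
}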
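
For the hint export Keith describes, blk-mq itself could do something
like the sketch below. irq_set_affinity_hint(), queue_for_each_hw_ctx()
and hctx->cpumask exist today; the ->hctx_irq() callback is invented
here just to show the shape:

#include <linux/blk-mq.h>
#include <linux/interrupt.h>

/*
 * Sketch only: the low-level driver reports which irq backs each hw
 * queue via a (hypothetical) ->hctx_irq() callback, and blk-mq
 * publishes the hctx cpumask as the smp affinity hint so irqbalance
 * keeps that vector on the right CPUs.
 */
static void blk_mq_set_irq_hints(struct request_queue *q)
{
	struct blk_mq_hw_ctx *hctx;
	int i;

	queue_for_each_hw_ctx(q, hctx, i) {
		int irq = q->mq_ops->hctx_irq(hctx);	/* hypothetical hook */

		if (irq >= 0)
			irq_set_affinity_hint(irq, hctx->cpumask);
	}
}

This would also have to be rerun whenever blk-mq remaps the hw contexts
(CPU hotplug being the obvious case), which is exactly the sync problem
with leaving it to the driver today.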
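
And for the "driver just asks for a number of vectors" part, the common
code could end up doing something along these lines once the vectors
have been allocated. The helper name is made up; the node iterators and
irq_set_affinity_hint() are existing interfaces:

#include <linux/interrupt.h>
#include <linux/nodemask.h>
#include <linux/topology.h>

/*
 * Rough sketch, not an existing API: spread nr_vecs allocated vectors
 * round-robin across the online NUMA nodes and publish a hint for
 * each, so that individual drivers stop open-coding this.
 */
static void spread_irq_vectors(const int *irqs, int nr_vecs)
{
	int node = first_online_node;
	int i;

	for (i = 0; i < nr_vecs; i++) {
		irq_set_affinity_hint(irqs[i], cpumask_of_node(node));

		node = next_online_node(node);
		if (node == MAX_NUMNODES)
			node = first_online_node;
	}
}

The driver-facing side then shrinks to a single "give me between min
and max vectors for this device" call; the exact name matters far less
than having the spreading and the hints handled in one place.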

-- 
Jens Axboe



