Why is NVMe MSI-X vector affinity set across NUMA nodes?
Ganapatrao Kulkarni
gklkml16 at gmail.com
Mon Jan 22 09:22:59 PST 2018
On Mon, Jan 22, 2018 at 10:44 PM, Keith Busch <keith.busch at intel.com> wrote:
> On Mon, Jan 22, 2018 at 09:55:55AM +0530, Ganapatrao Kulkarni wrote:
>> Hi,
>>
>> I have observed that the NVMe driver splits the interrupt affinity of its
>> MSI-X vectors among the available NUMA nodes.
>> Is there any specific reason for that?
>>
>> I see this happens because the PCI flag PCI_IRQ_AFFINITY is set in the
>> function nvme_setup_io_queues:
>>
>> nr_io_queues = pci_alloc_irq_vectors(pdev, 1, nr_io_queues,
>>                 PCI_IRQ_ALL_TYPES | PCI_IRQ_AFFINITY);
>>
>> IMO, having all vectors on CPUs of the same node gives better interrupt
>> latency than distributing them among all nodes.
>
> What affinity maps are you seeing? It's not supposed to share one vector
> across two NUMA nodes, unless you simply don't have enough vectors.
There are 31 MSI-X vectors being initialised (one per NVMe queue); of these,
vectors 0-15 have their affinity set to node 0 CPUs and vectors 16-30 to
node 1 CPUs.
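
(For reference, the per-vector masks can be read back from
/proc/irq/<N>/smp_affinity_list. A minimal userspace helper along the lines
below is enough to see the split; the helper and its IRQ-number arguments are
only an illustration, the real IRQ numbers have to be taken from
/proc/interrupts.)

/* dump_affinity.c -- illustrative helper, not part of any kernel tree.
 * Prints the CPU affinity list of every IRQ in the given number range.
 */
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
        char path[64], buf[256];
        int irq, first, last;
        FILE *f;

        if (argc != 3) {
                fprintf(stderr, "usage: %s <first-irq> <last-irq>\n", argv[0]);
                return 1;
        }
        first = atoi(argv[1]);
        last = atoi(argv[2]);

        for (irq = first; irq <= last; irq++) {
                snprintf(path, sizeof(path),
                         "/proc/irq/%d/smp_affinity_list", irq);
                f = fopen(path, "r");
                if (!f)
                        continue;       /* no such IRQ, skip */
                if (fgets(buf, sizeof(buf), f))
                        printf("irq %d -> CPUs %s", irq, buf);  /* buf keeps its newline */
                fclose(f);
        }
        return 0;
}
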
My question is: why not set the affinity of all vectors to CPUs of the same
node? What was the need for the PCI_IRQ_AFFINITY flag?
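
(To make the question concrete, here is a minimal sketch of the two allocation
styles, using a hypothetical toy_alloc_vectors() rather than the real
nvme_setup_io_queues(); as far as I understand, with PCI_IRQ_AFFINITY the IRQ
core computes the per-vector masks itself and marks the interrupts as managed,
while without the flag the default affinity is kept and could later be pinned
to one node's CPUs from userspace.)

/* Hypothetical driver snippet, for illustration only. */
#include <linux/pci.h>

static int toy_alloc_vectors(struct pci_dev *pdev, unsigned int nr_io_queues,
                             bool managed)
{
        unsigned int flags = PCI_IRQ_ALL_TYPES;

        /*
         * With PCI_IRQ_AFFINITY the IRQ core spreads the vectors over the
         * CPUs of all nodes and the resulting masks cannot be changed from
         * userspace afterwards (managed interrupts).
         */
        if (managed)
                flags |= PCI_IRQ_AFFINITY;

        /*
         * Without PCI_IRQ_AFFINITY no per-vector spreading is done; the
         * vectors keep the default affinity and can later be restricted to
         * the CPUs of one node via /proc/irq/<N>/smp_affinity or irqbalance.
         */
        return pci_alloc_irq_vectors(pdev, 1, nr_io_queues, flags);
}
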
thanks
Ganapat