Why are NVMe MSI-X vector affinities spread across NUMA nodes?
Keith Busch
keith.busch at intel.com
Mon Jan 22 09:14:37 PST 2018
On Mon, Jan 22, 2018 at 09:55:55AM +0530, Ganapatrao Kulkarni wrote:
> Hi,
>
> I have observed that the NVMe driver splits the interrupt affinity of MSI-X
> vectors among the available NUMA nodes.
> Is there a specific reason for that?
>
> I see this is happening because the PCI flag PCI_IRQ_AFFINITY is set in
> nvme_setup_io_queues():
>
> nr_io_queues = pci_alloc_irq_vectors(pdev, 1, nr_io_queues,
>                 PCI_IRQ_ALL_TYPES | PCI_IRQ_AFFINITY);
>
> IMO, keeping all vectors on the CPUs of a single node improves interrupt
> latency compared to distributing them among all nodes.
What affinity maps are you seeing? It's not supposed to share one vector
across two NUMA nodes, unless you simply don't have enough vectors.
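
If you want to check exactly which CPUs each vector ended up with, you can
walk the masks the kernel computed. A minimal sketch of a debug helper, not
something that exists in the driver; it only assumes the pdev and queue count
already present in nvme_setup_io_queues():

/*
 * Hypothetical debug helper: print the cpumask the kernel assigned to
 * each managed vector after pci_alloc_irq_vectors() returned.
 */
static void nvme_dump_vector_affinity(struct pci_dev *pdev, int nr_io_queues)
{
	int i;

	for (i = 0; i < nr_io_queues; i++) {
		/* Mask computed by the PCI_IRQ_AFFINITY spreading code */
		const struct cpumask *mask = pci_irq_get_affinity(pdev, i);

		if (mask)
			dev_info(&pdev->dev, "vector %d -> CPUs %*pbl\n",
				 i, cpumask_pr_args(mask));
	}
}

With enough vectors, every mask printed there should stay within a single
node; a mask that crosses nodes means the controller gave you fewer vectors
than it takes to cover each node separately.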