Why NVMe MSIx vectors affinity set across NUMA nodes?
Sagi Grimberg
sagi at grimberg.me
Tue Jan 23 05:30:12 PST 2018
>> AFAIK, drivers usually set a default affinity, and on NUMA systems it
>> is likely to be node affinity.
>> Later it is user-space (irqbalance and the like) that decides the
>> affinity, not the driver.
>
> Relying on userspace to provide an optimal setting is a bad idea,
> especially for NVMe, where we have submission queue CPU affinity that
> doesn't work very efficiently if the completion affinity doesn't match.
I tend to agree. Also, application locality is just as important as
device locality, so spreading vectors across NUMA nodes helps
applications running on the far NUMA node as well.
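
For context, a minimal sketch of how a PCI driver ends up with
kernel-managed, spread-out MSI-X affinity rather than leaving placement
to irqbalance. This is illustrative only (function name and error
handling are made up, not the actual nvme-pci code), but the API calls
are the in-tree ones:

/*
 * Sketch: request MSI-X vectors with kernel-managed affinity so the
 * vectors are spread across all online CPUs (and hence NUMA nodes).
 */
#include <linux/pci.h>
#include <linux/interrupt.h>

static int example_setup_io_vectors(struct pci_dev *pdev,
                                    unsigned int nr_io_queues)
{
	struct irq_affinity affd = {
		.pre_vectors = 1,	/* keep the admin vector out of the spread */
	};
	int nr_vecs;

	/*
	 * PCI_IRQ_AFFINITY asks the PCI core to build a per-vector
	 * affinity mask via the generic spreading code, which walks
	 * nodes and CPUs so vectors land on every NUMA node.
	 */
	nr_vecs = pci_alloc_irq_vectors_affinity(pdev, 1, nr_io_queues + 1,
			PCI_IRQ_MSIX | PCI_IRQ_AFFINITY, &affd);
	if (nr_vecs < 0)
		return nr_vecs;

	/*
	 * blk-mq can then derive the hctx <-> vector mapping from
	 * pci_irq_get_affinity(), which is what keeps submission and
	 * completion affinity aligned.
	 */
	return nr_vecs;
}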
Also, a recent thread [1] about PCI_IRQ_AFFINITY not allowing userspace
to modify irq affinity suggested that this could perhaps be supported,
but I'm not sure what happened to it.
[1] https://www.spinics.net/lists/netdev/msg464301.html