Why are NVMe MSI-X vector affinities set across NUMA nodes?

Ganapatrao Kulkarni gklkml16 at gmail.com
Mon Jan 22 09:55:45 PST 2018


On Mon, Jan 22, 2018 at 11:02 PM, Keith Busch <keith.busch at intel.com> wrote:
> On Mon, Jan 22, 2018 at 10:52:59PM +0530, Ganapatrao Kulkarni wrote:
>>
>> There are 31 MSI-X vectors being initialised (one per NVMe queue).
>> Of these, vectors 0-15 have their affinity set to node 0 CPUs, and
>> vectors 16-30 have theirs set to node 1 CPUs.
>> My question is: why not set the affinity of all vectors to CPUs from
>> the same node? What was the need for the PCI_IRQ_AFFINITY flag?
>
> I'm sorry, but I am not able to parse this question.

OK, let me rephrase:
what was the need for the PCI_IRQ_AFFINITY flag in the NVMe driver?
I don't see this flag being used widely.
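For context, the allocation being asked about looks roughly like this in drivers/nvme/host/pci.c (a sketch from memory of the v4.15-era code, not an exact quote; nr_io_queues is the driver's requested I/O queue count):

```c
/*
 * Sketch of the NVMe PCI driver's interrupt allocation (circa v4.15).
 * PCI_IRQ_AFFINITY asks the IRQ core to spread the vectors evenly
 * across all online CPUs -- hence across both NUMA nodes -- instead
 * of leaving the affinity of every vector to a single default mask.
 */
result = pci_alloc_irq_vectors(pdev, 1, nr_io_queues,
				PCI_IRQ_ALL_TYPES | PCI_IRQ_AFFINITY);
```

With the flag set, the affinity masks are computed by the IRQ core at allocation time and are not changeable from userspace afterwards, which is presumably part of what prompted the question.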



More information about the Linux-nvme mailing list