[Question] IRQ Affinity Setup for NVMe over Fabrics (RDMA) Target and Host

Sagi Grimberg sagi at grimberg.me
Tue Aug 20 06:25:30 PDT 2024

On 19/08/2024 15:48, Jigao LUO wrote:
> Hi everyone,

Hey,

> I’m currently benchmarking NVMe over Fabrics (RDMA) using the Linux kernel driver and would appreciate your guidance on setting up IRQ affinity for optimal performance.
>
> - **Target Side**: It is logical to place both the NIC and SSDs within the same NUMA node to minimize latency, and to pin the IRQs to cores within this NUMA node. However, I'm unsure about the optimal number of cores that should be dedicated to handling IRQs. Are there any guidelines or best practices for determining this?

Steering the IRQ vectors to CPU cores on the same NUMA node as the backend
SSDs is definitely desirable if possible.
How many cores is hard to say; in general, the more cores you spread the
vectors among, the better. If you are trying to host other workloads
alongside nvmet-rdma, then it depends on what you are optimizing for (as
well as the platform you are using).
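
As a starting point, something like the sketch below is what I mean by
steering: it spreads the NIC's MSI-X vectors round-robin across the CPUs
of the NIC's local NUMA node. It is untested, the interface name is a
placeholder, it needs root, and irqbalance should be stopped first so it
does not rewrite the masks:

#!/usr/bin/env python3
# Sketch (untested): pin a NIC's MSI-X vectors to the CPUs of its local
# NUMA node. "ens1f0" and the round-robin policy are assumptions for
# illustration only; adjust to your platform.
import os

NIC = "ens1f0"  # hypothetical interface name

# NUMA node the NIC is attached to
node = open(f"/sys/class/net/{NIC}/device/numa_node").read().strip()

# CPUs belonging to that node, e.g. "0-15,32-47"
cpulist = open(f"/sys/devices/system/node/node{node}/cpulist").read().strip()

# Expand "a-b,c" into a flat list of CPU ids
cpus = []
for part in cpulist.split(","):
    lo, _, hi = part.partition("-")
    cpus.extend(range(int(lo), int(hi or lo) + 1))

# MSI-X vectors allocated to the NIC
irqs = sorted(int(i) for i in
              os.listdir(f"/sys/class/net/{NIC}/device/msi_irqs"))

# Spread the vectors round-robin across the node-local CPUs
for i, irq in enumerate(irqs):
    cpu = cpus[i % len(cpus)]
    with open(f"/proc/irq/{irq}/smp_affinity_list", "w") as f:
        f.write(str(cpu))
    print(f"IRQ {irq} -> CPU {cpu}")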

>
> - **Host Side**: Most of my questions relate to the host configuration. Assuming we are running fio on the host, my understanding is that both fio and IRQs should be on the NUMA node where the NIC is located. Should fio and IRQs share the same set of cores, or is it recommended to separate them with two isolated core sets? Additionally, what would be the recommended number of cores to allocate for IRQ handling in this scenario?

The nvme host is designed so that submission and completion are CPU-local
as much as possible, such that multiple threads/workloads do not interfere
with each other, which provides good scalability.
If you have prior knowledge of your workload, you may prefer to dedicate
some cores to completion processing and take the associated load off the
cores that are submitting I/O.
I don't know of any specific guideline for how one would arrange the cores
and IRQ vectors to optimize a given workload; empirical tests may guide
you.
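
If you want to try the separated-sets variant, a rough (untested) sketch
of the experiment could look like this: steer the NIC's vectors onto one
half of the NUMA-local cores and give fio the other half via
--cpus_allowed. The interface name and the 50/50 split are arbitrary
placeholders; the right ratio is exactly what the empirical tests should
tell you:

#!/usr/bin/env python3
# Sketch (untested): split the NIC-local cores into an IRQ set and a fio
# set. NIC name and the CPU range are assumptions for illustration; read
# them from sysfs as in the previous sketch on a real system.
import os

NIC = "ens1f0"                   # hypothetical interface name
local_cpus = list(range(0, 16))  # assume CPUs 0-15 sit on the NIC's node

half = len(local_cpus) // 2
irq_cpus, fio_cpus = local_cpus[:half], local_cpus[half:]

# Steer every NIC vector onto the IRQ half, round-robin
irqs = sorted(int(i) for i in
              os.listdir(f"/sys/class/net/{NIC}/device/msi_irqs"))
for i, irq in enumerate(irqs):
    with open(f"/proc/irq/{irq}/smp_affinity_list", "w") as f:
        f.write(str(irq_cpus[i % len(irq_cpus)]))

# fio then gets the remaining cores, e.g.:
#   fio --name=randread --filename=/dev/nvme1n1 --rw=randread \
#       --ioengine=io_uring --cpus_allowed=8-15 ...
print("IRQ cores:", irq_cpus)
print("fio cores:", fio_cpus, "-> pass via --cpus_allowed")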
