[Question] IRQ Affinity Setup for NVMe over Fabrics (RDMA) Target and Host
Jigao LUO
jigao.luo at outlook.com
Mon Aug 19 05:48:00 PDT 2024
Hi everyone,
I’m currently benchmarking NVMe over Fabrics (RDMA) using the Linux kernel driver and would appreciate your guidance on setting up IRQ affinity for optimal performance.
- **Target Side**: It seems logical to place both the NIC and the SSDs on the same NUMA node to minimize latency, and to pin their IRQs to cores within that node (a sketch of how I currently do this follows this list). However, I'm unsure about the optimal number of cores to dedicate to IRQ handling. Are there any guidelines or best practices for determining this?
- **Host Side**: Most of my questions concern the host configuration. Assuming fio is running on the host, my understanding is that both fio and the IRQs should be placed on the NUMA node where the NIC is located. Should fio and the IRQs share the same set of cores, or is it better to separate them into two disjoint core sets (the second sketch below shows the split I have in mind)? And what would be a reasonable number of cores to allocate for IRQ handling in this scenario?
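For concreteness, here is a minimal sketch of how I pin the NIC's IRQs to cores on its local NUMA node on the target. The sysfs/procfs paths are the standard kernel interfaces; the netdev name "ens1f0" and the number of IRQ cores are just placeholders from my setup, and this assumes root and that irqbalance is stopped.

```python
#!/usr/bin/env python3
"""Sketch: pin a NIC's IRQ vectors to cores on its local NUMA node.

Assumptions to adjust: the netdev is "ens1f0", 8 cores are dedicated
to IRQs, irqbalance is stopped, and the script runs as root.
Kernel-managed IRQs will reject the write with EIO; those are left alone.
"""
import re

IFACE = "ens1f0"          # assumption: RDMA-capable NIC netdev name
IRQ_CORES_PER_NODE = 8    # assumption: cores dedicated to IRQ handling
# Note: mlx5 completion vectors may show up in /proc/interrupts as
# "mlx5_comp<N>@pci:<bdf>" rather than the netdev name; adjust this match.
IRQ_NAME_MATCH = IFACE

def numa_node_of(iface):
    """NUMA node the NIC's PCI device is attached to."""
    with open(f"/sys/class/net/{iface}/device/numa_node") as f:
        return int(f.read())

def cpus_of_node(node):
    """Expand the node's cpulist (e.g. '0-15,32-47') into a list of CPUs."""
    with open(f"/sys/devices/system/node/node{node}/cpulist") as f:
        cpus = []
        for part in f.read().strip().split(","):
            if "-" in part:
                lo, hi = part.split("-")
                cpus.extend(range(int(lo), int(hi) + 1))
            else:
                cpus.append(int(part))
        return cpus

def nic_irqs():
    """IRQ numbers whose name in /proc/interrupts matches the NIC."""
    irqs = []
    with open("/proc/interrupts") as f:
        for line in f:
            m = re.match(r"\s*(\d+):", line)
            if m and IRQ_NAME_MATCH in line:
                irqs.append(int(m.group(1)))
    return irqs

def main():
    node = numa_node_of(IFACE)
    cores = cpus_of_node(node)[:IRQ_CORES_PER_NODE]
    for i, irq in enumerate(nic_irqs()):
        core = cores[i % len(cores)]   # round-robin IRQs over the chosen cores
        try:
            with open(f"/proc/irq/{irq}/smp_affinity_list", "w") as f:
                f.write(str(core))
            print(f"IRQ {irq} -> CPU {core}")
        except OSError as e:
            print(f"IRQ {irq}: not changed ({e}); likely kernel-managed")

if __name__ == "__main__":
    main()
```

If the kernel already manages the affinity of these vectors, the writes simply fail and the existing spread is kept.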
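On the host side, the split I have in mind looks roughly like the sketch below. It is purely illustrative: the NIC-local core list (0-15), the 4-core IRQ reservation, and the /dev/nvme1n1 device name are assumptions to be replaced with your own values. It divides the NIC-local cores into an IRQ set and a fio set and prints the fio command I would run on the fio set.

```python
#!/usr/bin/env python3
"""Sketch: split the NIC-local NUMA node into an IRQ core set and a
fio core set on the host, then print the fio command to run.

Assumptions: NUMA node 0 is NIC-local with cores 0-15, the first 4
cores handle IRQs (pinned separately), and the NVMe-oF namespace is
/dev/nvme1n1.
"""
NIC_NODE_CPUS = list(range(0, 16))   # assumption: cores of the NIC's NUMA node
IRQ_CORES = NIC_NODE_CPUS[:4]        # assumption: 4 cores reserved for IRQs
FIO_CORES = NIC_NODE_CPUS[4:]        # remaining cores run the fio jobs
DEVICE = "/dev/nvme1n1"              # assumption: NVMe-oF namespace on the host

def cpulist(cores):
    return ",".join(str(c) for c in cores)

fio_cmd = [
    "fio",
    "--name=randread",
    f"--filename={DEVICE}",
    "--ioengine=io_uring", "--direct=1",
    "--rw=randread", "--bs=4k", "--iodepth=32",
    f"--numjobs={len(FIO_CORES)}",
    f"--cpus_allowed={cpulist(FIO_CORES)}",
    "--cpus_allowed_policy=split",   # one core per fio job
    "--runtime=60", "--time_based", "--group_reporting",
]

print("IRQ cores :", cpulist(IRQ_CORES))
print("fio cores :", cpulist(FIO_CORES))
print(" ".join(fio_cmd))
```

With --cpus_allowed_policy=split, fio assigns each job its own core from the allowed set, so the fio jobs never land on the cores reserved for IRQ handling; dropping the reservation would give the shared-core variant I am asking about.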
I appreciate your assistance and look forward to your recommendations.
Best regards,
Jigao