[LSF/MM/BPF TOPIC] Topology-Aware NVMe-TCP I/O Queue Scaling and Worker Efficiency

Chaitanya Kulkarni chaitanyak at nvidia.com
Sun Feb 15 16:35:21 PST 2026


On 2/15/26 09:06, Nilay Shroff wrote:

> The NVMe-TCP host driver currently provisions I/O queues primarily based on CPU
> availability rather than the capabilities and topology of the underlying network
> interface. On modern systems with many CPUs but fewer NIC hardware queues, this
> can lead to multiple NVMe-TCP I/O queues contending for the same transmit/receive
> queue, increasing lock contention, cacheline bouncing, and tail latency.


Can you share any performance work that you have done prior to the
LSF session?

-ck
