[PATCH v5 3/3] nvme/pci: make PRP list DMA pools per-NUMA-node
Christoph Hellwig
hch at lst.de
Thu Apr 24 07:12:49 PDT 2025
On Tue, Apr 22, 2025 at 04:09:52PM -0600, Caleb Sander Mateos wrote:
> NVMe commands with more than 4 KB of data allocate PRP list pages from
> the per-nvme_device dma_pool prp_page_pool or prp_small_pool.
That's not actually true. We can transfer all of the MDTS without a
single pool allocation when using SGLs.
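For illustration only, a hedged sketch of what "no pool allocation" means here: when the controller supports SGLs and the mapping is a single contiguous DMA range (for example, one coalesced by the IOMMU), the data pointer can be the one SGL data block descriptor embedded in the command itself, so no PRP list page is taken from any pool. This mirrors the shape of the inline-SGL path in drivers/nvme/host/pci.c, but is a simplified sketch, not the driver's actual code:

	/*
	 * Sketch: describe an entire contiguous transfer with the SGL
	 * data block descriptor embedded in the command (struct
	 * nvme_sgl_desc from <linux/nvme.h>). No per-command pool
	 * allocation is needed for this case.
	 */
	static void nvme_set_inline_sgl(struct nvme_command *cmd,
					dma_addr_t dma_addr, u32 length)
	{
		cmd->common.dptr.sgl.addr = cpu_to_le64(dma_addr);
		cmd->common.dptr.sgl.length = cpu_to_le32(length);
		cmd->common.dptr.sgl.type = NVME_SGL_FMT_DATA_DESC << 4;
	}
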
> Each call
> to dma_pool_alloc() and dma_pool_free() takes the per-dma_pool spinlock.
> These device-global spinlocks are a significant source of contention
> when many CPUs are submitting to the same NVMe devices. On a workload
> issuing 32 KB reads from 16 CPUs (8 hypertwin pairs) across 2 NUMA nodes
> to 23 NVMe devices, we observed 2.4% of CPU time spent in
> _raw_spin_lock_irqsave called from dma_pool_alloc and dma_pool_free.
>
> Ideally, the dma_pools would be per-hctx to minimize
> contention. But that could impose considerable resource costs in a
> system with many NVMe devices and CPUs.
Should we try to simply do a slab allocation first and only allocate
from the dmapool when that fails? That should give you all the
scalability of the slab allocator with very few downsides.
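A rough, untested sketch of that idea, assuming a hypothetical
"prp_cache" kmem_cache of PAGE_SIZE PRP lists: allocate from the slab
and stream-map the page, and only fall back to the contended dma_pool
when the allocation or mapping fails. The caller would need to remember
which path was taken so the free side can dma_unmap_single() +
kmem_cache_free() or dma_pool_free() accordingly.

	/* Needs <linux/slab.h>, <linux/dma-mapping.h>, <linux/dmapool.h>. */
	static void *nvme_alloc_prp_list(struct device *dev,
					 struct dma_pool *pool,
					 struct kmem_cache *prp_cache,
					 dma_addr_t *dma_addr,
					 bool *from_slab)
	{
		/* Fast path: lockless-ish percpu slab allocation. */
		void *list = kmem_cache_alloc(prp_cache, GFP_ATOMIC);

		if (list) {
			/* PRP lists are CPU-written, device-read. */
			*dma_addr = dma_map_single(dev, list, PAGE_SIZE,
						   DMA_TO_DEVICE);
			if (!dma_mapping_error(dev, *dma_addr)) {
				*from_slab = true;
				return list;
			}
			kmem_cache_free(prp_cache, list);
		}

		/* Slow path: fall back to the shared dma_pool. */
		*from_slab = false;
		return dma_pool_alloc(pool, GFP_ATOMIC, dma_addr);
	}

One caveat with this shape is that the streaming mapping adds a
dma_map_single()/dma_unmap_single() per PRP list, which may or may not
be cheaper than the pool spinlock depending on the IOMMU configuration.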