[RFC RESEND 16/16] nvme-pci: use blk_rq_dma_map() for NVMe SGL
Christoph Hellwig
hch at lst.de
Wed Mar 6 06:33:21 PST 2024
On Tue, Mar 05, 2024 at 08:51:56AM -0700, Keith Busch wrote:
> On Tue, Mar 05, 2024 at 01:18:47PM +0200, Leon Romanovsky wrote:
> > @@ -236,7 +236,9 @@ struct nvme_iod {
> > unsigned int dma_len; /* length of single DMA segment mapping */
> > dma_addr_t first_dma;
> > dma_addr_t meta_dma;
> > - struct sg_table sgt;
> > + struct dma_iova_attrs iova;
> > + dma_addr_t dma_link_address[128];
> > + u16 nr_dma_link_address;
> > union nvme_descriptor list[NVME_MAX_NR_ALLOCATIONS];
> > };
>
> That's quite a lot of space to add to the iod. We preallocate one for
> every request, and there could be millions of them.
Yes. And this whole proposal also seems clearly confused (and not just
because of the gazillion reposts): it mixes up the case where we can
coalesce CPU regions into a single dma_addr_t range (IOMMU, and maybe
swiotlb in the future) with the case where we need a dma_addr_t range
per CPU range (direct mapping and misc cruft).