[RFC RESEND 00/16] Split IOMMU DMA mapping operation to two steps

Zeng, Oak oak.zeng at intel.com
Tue Jun 11 11:26:23 PDT 2024


Thank you, Leon. That is helpful.

I also have another very naïve question: I don't understand what the iova address is. I previously thought that, when the iommu is involved, the iova address space is the same as the dma_address space. I thought dma_alloc_iova would allocate a contiguous iova address range, and that the dma_link_range function would later link a physical page to an iova address and return that iova address. In other words, I thought the dma_address is the iova address, and that the iommu page table translates a dma_address (iova address) to a physical address.

But from my prints below, that understanding is obviously wrong: iova.dma_addr is 0, while the dma_address returned from dma_link_range is non-zero... Can you help me understand what the iova address is? Is the iova address iommu related? Since dma_link_range returns an address that is not the iova, does this function allocate the dma_address itself? How is the dma_address correlated with the iova address?
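
For context, here is a simplified sketch of how I am driving the API in my test (my wrapper drm_svm_hmmptr_map_dma_pages and the hmm plumbing are omitted; the function and struct names follow my reading of your RFC cover letter, so the exact signatures and field names may not match your final patches):

	/*
	 * Simplified sketch of my usage. dma_alloc_iova()/dma_link_range()
	 * and the dma_iova_attrs fields are my reading of the RFC cover
	 * letter, not verified against the final patches; error handling
	 * for dma_link_range() is elided because I am not sure of its
	 * error convention.
	 */
	static int map_dma_pages(struct device *dev, struct page **pages,
				 unsigned long npages, dma_addr_t *dma_addr)
	{
		struct dma_iova_attrs iova = {
			.dev  = dev,
			.size = npages << PAGE_SHIFT,
			.dir  = DMA_BIDIRECTIONAL,
		};
		dma_addr_t dma_offset = 0;
		unsigned long i;
		int ret;

		/* step 1: reserve one contiguous iova range up front */
		ret = dma_alloc_iova(&iova);
		if (ret)
			return ret;

		for (i = 0; i < npages; i++) {
			/* step 2: link one 4K page at the current offset */
			dma_addr[i] = dma_link_range(pages[i], 0, &iova,
						     dma_offset);
			/* advance by the size of the page just linked */
			dma_offset += PAGE_SIZE;
		}
		return 0;
	}

My expectation was that each returned dma_addr[i] would equal iova.dma_addr + i * PAGE_SIZE, but the prints below clearly don't show that.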

Oak 

> -----Original Message-----
> From: Leon Romanovsky <leon at kernel.org>
> Sent: Tuesday, June 11, 2024 11:45 AM
> To: Zeng, Oak <oak.zeng at intel.com>
> Cc: Jason Gunthorpe <jgg at ziepe.ca>; Christoph Hellwig <hch at lst.de>; Robin
> Murphy <robin.murphy at arm.com>; Marek Szyprowski
> <m.szyprowski at samsung.com>; Joerg Roedel <joro at 8bytes.org>; Will
> Deacon <will at kernel.org>; Chaitanya Kulkarni <chaitanyak at nvidia.com>;
> Brost, Matthew <matthew.brost at intel.com>; Hellstrom, Thomas
> <thomas.hellstrom at intel.com>; Jonathan Corbet <corbet at lwn.net>; Jens
> Axboe <axboe at kernel.dk>; Keith Busch <kbusch at kernel.org>; Sagi
> Grimberg <sagi at grimberg.me>; Yishai Hadas <yishaih at nvidia.com>;
> Shameer Kolothum <shameerali.kolothum.thodi at huawei.com>; Tian, Kevin
> <kevin.tian at intel.com>; Alex Williamson <alex.williamson at redhat.com>;
> Jérôme Glisse <jglisse at redhat.com>; Andrew Morton <akpm at linux-
> foundation.org>; linux-doc at vger.kernel.org; linux-kernel at vger.kernel.org;
> linux-block at vger.kernel.org; linux-rdma at vger.kernel.org;
> iommu at lists.linux.dev; linux-nvme at lists.infradead.org;
> kvm at vger.kernel.org; linux-mm at kvack.org; Bart Van Assche
> <bvanassche at acm.org>; Damien Le Moal
> <damien.lemoal at opensource.wdc.com>; Amir Goldstein
> <amir73il at gmail.com>; josef at toxicpanda.com; Martin K. Petersen
> <martin.petersen at oracle.com>; daniel at iogearbox.net; Williams, Dan J
> <dan.j.williams at intel.com>; jack at suse.com; Zhu Yanjun
> <zyjzyj2000 at gmail.com>; Bommu, Krishnaiah
> <krishnaiah.bommu at intel.com>; Ghimiray, Himal Prasad
> <himal.prasad.ghimiray at intel.com>
> Subject: Re: [RFC RESEND 00/16] Split IOMMU DMA mapping operation to
> two steps
> 
> On Mon, Jun 10, 2024 at 09:28:04PM +0000, Zeng, Oak wrote:
> > Hi Jason, Leon,
> >
> > I was able to fix the issue on my side. Things work fine now. I have two
> > questions though:
> >
> > 1) The values returned from the dma_link_range function are not
> > contiguous, see the prints below. The "linked pa" is the function's
> > return value.
> > I think the dma_map_sgtable API would return contiguous dma addresses.
> > Is the dma_map_sgtable API more efficient with respect to the iommu page
> > table, i.e., does it try to use a bigger page size, such as 2M, when
> > possible? Does your new API have the same consideration? I vaguely
> > remember Jason mentioning something like that, but my prints below don't
> > look like it. Maybe I need to test a bigger range (the prints below only
> > cover a 16-page range). Comments?
> 
> My API gives you the flexibility to use any page size you want. You can
> use 2M pages instead of 4K pages. The API doesn't enforce any page size.
> 
> >
> > [17584.665126] drm_svm_hmmptr_map_dma_pages iova.dma_addr = 0x0, linked pa = 18ef3f000
> > [17584.665146] drm_svm_hmmptr_map_dma_pages iova.dma_addr = 0x0, linked pa = 190d00000
> > [17584.665150] drm_svm_hmmptr_map_dma_pages iova.dma_addr = 0x0, linked pa = 190024000
> > [17584.665153] drm_svm_hmmptr_map_dma_pages iova.dma_addr = 0x0, linked pa = 178e89000
> >
> > 2) The comment on the dma_link_range function says: "@dma_offset
> > needs to be advanced by the caller with the size of previous page that
> > was linked + DMA address returned for the previous page".
> > Is this description correct? I don't understand the "+ DMA address
> > returned for the previous page" part.
> > In my code, say I call this function to link 10 pages: the first
> > dma_offset is 0, the second is 4k, the third 8k, and so on. This worked
> > for me. I didn't add the previously returned dma address.
> > Maybe I need more testing. But any comments?
> 
> You did it perfectly right. This is the correct way to advance dma_offset.
> 
> Thanks
> 
> >
> > Thanks,
> > Oak
> >
> > > -----Original Message-----
> > > From: Jason Gunthorpe <jgg at ziepe.ca>
> > > Sent: Monday, June 10, 2024 1:25 PM
> > > To: Zeng, Oak <oak.zeng at intel.com>
> > > Cc: Leon Romanovsky <leon at kernel.org>; Christoph Hellwig
> <hch at lst.de>;
> > > Robin Murphy <robin.murphy at arm.com>; Marek Szyprowski
> > > <m.szyprowski at samsung.com>; Joerg Roedel <joro at 8bytes.org>; Will
> > > Deacon <will at kernel.org>; Chaitanya Kulkarni <chaitanyak at nvidia.com>;
> > > Brost, Matthew <matthew.brost at intel.com>; Hellstrom, Thomas
> > > <thomas.hellstrom at intel.com>; Jonathan Corbet <corbet at lwn.net>;
> Jens
> > > Axboe <axboe at kernel.dk>; Keith Busch <kbusch at kernel.org>; Sagi
> > > Grimberg <sagi at grimberg.me>; Yishai Hadas <yishaih at nvidia.com>;
> > > Shameer Kolothum <shameerali.kolothum.thodi at huawei.com>; Tian,
> Kevin
> > > <kevin.tian at intel.com>; Alex Williamson <alex.williamson at redhat.com>;
> > > Jérôme Glisse <jglisse at redhat.com>; Andrew Morton <akpm at linux-
> > > foundation.org>; linux-doc at vger.kernel.org; linux-
> kernel at vger.kernel.org;
> > > linux-block at vger.kernel.org; linux-rdma at vger.kernel.org;
> > > iommu at lists.linux.dev; linux-nvme at lists.infradead.org;
> > > kvm at vger.kernel.org; linux-mm at kvack.org; Bart Van Assche
> > > <bvanassche at acm.org>; Damien Le Moal
> > > <damien.lemoal at opensource.wdc.com>; Amir Goldstein
> > > <amir73il at gmail.com>; josef at toxicpanda.com; Martin K. Petersen
> > > <martin.petersen at oracle.com>; daniel at iogearbox.net; Williams, Dan J
> > > <dan.j.williams at intel.com>; jack at suse.com; Zhu Yanjun
> > > <zyjzyj2000 at gmail.com>; Bommu, Krishnaiah
> > > <krishnaiah.bommu at intel.com>; Ghimiray, Himal Prasad
> > > <himal.prasad.ghimiray at intel.com>
> > > Subject: Re: [RFC RESEND 00/16] Split IOMMU DMA mapping operation to
> > > two steps
> > >
> > > On Mon, Jun 10, 2024 at 04:40:19PM +0000, Zeng, Oak wrote:
> > > > Thanks Leon and Yanjun for the reply!
> > > >
> > > > Based on the reply, we will continue to use the current version for
> > > > testing (as it is tested for vfio and rdma). We will switch to v1 once
> > > > it is fully tested/reviewed.
> > >
> > > I'm glad you are finding it useful; one of my interests with this work
> > > is to improve all the HMM users.
> > >
> > > Jason
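
P.S. Given your note above that the API does not enforce a page size, I assume that linking 2M chunks instead of 4K pages would just mean advancing dma_offset by 2M per linked chunk, something like the (hypothetical, untested) sketch below:

	/*
	 * Hypothetical 2M variant, untested: pages_2m[] would hold 2M
	 * compound pages; everything else is the same loop as before,
	 * advancing dma_offset by the size of the chunk just linked.
	 */
	for (i = 0; i < nchunks; i++) {
		dma_addr[i] = dma_link_range(pages_2m[i], 0, &iova, dma_offset);
		dma_offset += SZ_2M;
	}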


