[PATCH v1 00/17] Provide a new two step DMA mapping API
Jason Gunthorpe
jgg at ziepe.ca
Wed Nov 13 10:41:29 PST 2024
On Tue, Nov 12, 2024 at 07:01:08AM +0100, Christoph Hellwig wrote:
> On Fri, Nov 08, 2024 at 11:38:46AM -0400, Jason Gunthorpe wrote:
> > > > What I'm thinking about is replacing code like the above with something like:
> > > >
> > > > if (p2p_provider)
> > > > return DMA_MAPPING_ERROR;
> > > >
> > > > And the caller is the one that would have done is_pci_p2pdma_page()
> > > > and either passes p2p_provider=NULL or page->pgmap->p2p_provider.
> > >
> > > And where do you get that one from?
> >
> > Which one?
>
> The p2p_provider thing (whatever that will actually be).
p2p_provider would be splitting the information in
pci_p2pdma_pagemap out to its own type:
struct pci_p2pdma_pagemap {
        struct dev_pagemap pgmap;
        struct pci_dev *provider;
        u64 bus_offset;
};
That is the essential information to compute PCI_P2PDMA_MAP_*.
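Split out to its own type it would be roughly something like this (the
type and field names here are made up, nothing like this exists yet,
it is just to show the shape):

/*
 * Sketch only - 'struct p2pdma_provider' and its members are
 * illustrative names, not an existing type.
 */
struct p2pdma_provider {
        struct pci_dev *owner;
        u64 bus_offset;
};

struct pci_p2pdma_pagemap {
        struct dev_pagemap pgmap;
        struct p2pdma_provider mem;     /* split-out provider info */
};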
For example, when blk_rq_dma_map_iter_start() calls pci_p2pdma_state(),
it gets this information from page->pgmap. It would still have the
information via the pgmap after we split it out of
pci_p2pdma_pagemap.
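ie the lookup is still the same container_of() dance on page->pgmap,
it just hands back the embedded provider. Roughly (the helper name and
the 'mem' member are made up, matching the sketch above):

static struct p2pdma_provider *page_to_p2p_provider(struct page *page)
{
        struct pci_p2pdma_pagemap *p2p_pgmap =
                container_of(page->pgmap, struct pci_p2pdma_pagemap, pgmap);

        return &p2p_pgmap->mem;
}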
Since everything doing a dma map has to call pci_p2pdma_state() to
compute PCI_P2PDMA_MAP_*, every dma mapping operation already has the
provider. And since everything is uniform within a mapping operation,
the provider is constant for the whole map.
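Which means the caller-side pattern from the snippet quoted at the top
of this thread falls out naturally, something like (again just a
sketch, page_to_p2p_provider() being the hypothetical helper above):

struct p2pdma_provider *provider = NULL;

if (is_pci_p2pdma_page(page))
        provider = page_to_p2p_provider(page);

/*
 * ... and a mapping path that cannot handle P2P at all just does:
 *
 *      if (provider)
 *              return DMA_MAPPING_ERROR;
 */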
For future non-struct page cases the provider comes along with the
address list from whatever created the address list in the first
place.
Looking at dmabuf, for example, I expect it to provide a new data
structure which is a list of lists:
 [
   [provider GPU:  [mmio_addr1, mmio_addr2, mmio_addr3]],
   [provider NULL: [cpu_addr1, cpu_addr2, ...]],
   ...
 ]
And each uniform group would be dma map'd on its own using the
embedded provider instead of page->pgmap.
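In C that list-of-lists shape might look roughly like this (entirely
made-up type names, purely to illustrate the structure):

/*
 * Illustrative only - none of these types exist. One group per
 * provider, and the importer does one dma mapping operation per group.
 */
struct dmabuf_addr_group {
        struct p2pdma_provider *provider;       /* NULL for plain CPU memory */
        unsigned int nr_addrs;
        phys_addr_t *addrs;
};

struct dmabuf_addr_list {
        unsigned int nr_groups;
        struct dmabuf_addr_group groups[];
};

The importer would walk the groups and map each one as a single
uniform operation, with groups[i].provider playing the role that
page->pgmap plays for struct page backed memory today.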
Jason