[PATCH v3 4/20] PCI/P2PDMA: introduce helpers for dma_map_sg implementations
Jason Gunthorpe
jgg at nvidia.com
Wed Sep 29 16:40:45 PDT 2021
On Wed, Sep 29, 2021 at 05:00:43PM -0600, Logan Gunthorpe wrote:
>
>
>
> On 2021-09-29 4:46 p.m., Jason Gunthorpe wrote:
> > On Wed, Sep 29, 2021 at 03:30:42PM -0600, Logan Gunthorpe wrote:
> >> On 2021-09-28 4:05 p.m., Jason Gunthorpe wrote:
> >> No, that's not a correct reading of the code. Every time there is a new
> >> pgmap, this code calculates the mapping type and bus offset. If a page
> >> comes along with a different pgmap, it recalculates. This just reduces
> >> the overhead so that the calculation is done only when a page with a
> >> different pgmap comes along, not for every single page.
> >
> > Each 'struct scatterlist *sg' refers to a range of contiguous pfns
> > starting at page_to_pfn(sg_page()) and running for approximately
> > sg->length/PAGE_SIZE pfns.
> >
>
> Ugh, right. A bit contrived for consecutive pages to have different
> pgmaps and still be next to each other in a DMA transaction. But I guess
> it is technically possible and should be protected against.
I worry it is something a hostile userspace could cook up using mmap
and use to cause some kind of kernel integrity problem.
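
To make the concern concrete, here is roughly the pattern under
discussion (an illustrative sketch only, not the code from this
series; p2pdma_map_state, calc_map_type() and calc_bus_offset() are
made-up names). The cached result is keyed off sg_page(sg), so it is
only sound if every page behind an SG entry is guaranteed to share
that pgmap:

/*
 * Sketch of a per-pgmap cache for the P2PDMA mapping decision while
 * walking an sgl.  Kernel context assumed; the struct and the calc_*()
 * helpers are hypothetical stand-ins for "work out the mapping type
 * and bus offset for this provider".
 */
struct p2pdma_map_state {
	struct dev_pagemap *pgmap;
	int map_type;
	u64 bus_off;
};

static void p2pdma_map_state_update(struct p2pdma_map_state *state,
				    struct page *page)
{
	if (state->pgmap == page->pgmap)
		return;				/* cached result still valid */

	state->pgmap = page->pgmap;
	state->map_type = calc_map_type(page->pgmap);	/* hypothetical */
	state->bus_off = calc_bus_offset(page->pgmap);	/* hypothetical */
}

static int p2pdma_map_sg_sketch(struct scatterlist *sgl, int nents)
{
	struct p2pdma_map_state state = {};
	struct scatterlist *sg;
	int i;

	for_each_sg(sgl, sg, nents, i) {
		struct page *page = sg_page(sg);

		/*
		 * Only the first page of the entry is looked at, but
		 * sg->length can cover many pfns.  If pages with
		 * different pgmaps were merged into one entry, the
		 * cached state is wrong for the tail of that entry.
		 */
		if (is_pci_p2pdma_page(page))
			p2pdma_map_state_update(&state, page);

		/* ... map the entry according to state.map_type ... */
	}
	return nents;
}

So the sgl construction has to guarantee one pgmap per SG entry, which
is what the hunk below was getting at.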
> > @@ -470,7 +470,8 @@ int sg_alloc_append_table_from_pages(struct sg_append_table *sgt_append,
> >
> > /* Merge contiguous pages into the last SG */
> > prv_len = sgt_append->prv->length;
> > - while (n_pages && page_to_pfn(pages[0]) == paddr) {
> > + while (n_pages && page_to_pfn(pages[0]) == paddr &&
> > + sg_page(sgt_append->prv)->pgmap == pages[0]->pgmap) {
>
> I don't think it's correct to use pgmap without first checking if it is
> a zone device page. But your point is taken. I'll try to address this.
Yes
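
Something along these lines should cover it (again just a sketch; the
helper name is made up): only look at pgmap once both pages are known
to be ZONE_DEVICE pages, and refuse to merge when only one of them is:

/* Hypothetical helper for the merge check in sg_alloc_append_table_from_pages() */
static bool pages_share_pgmap(const struct page *a, const struct page *b)
{
	if (is_zone_device_page(a) != is_zone_device_page(b))
		return false;
	if (!is_zone_device_page(a))
		return true;	/* neither is ZONE_DEVICE, nothing to compare */
	return a->pgmap == b->pgmap;
}

	/* Merge contiguous pages into the last SG */
	prv_len = sgt_append->prv->length;
	while (n_pages && page_to_pfn(pages[0]) == paddr &&
	       pages_share_pgmap(sg_page(sgt_append->prv), pages[0])) {

That keeps normal struct pages merging as before and only splits
entries at a pgmap boundary.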
Jason