[PATCH v1 2/2] nvme-pci: Fix iommu map (via swiotlb) failures when PAGE_SIZE=64KB

Nicolin Chen nicolinc at nvidia.com
Tue Feb 13 22:09:19 PST 2024


On Tue, Feb 13, 2024 at 04:31:04PM -0700, Keith Busch wrote:
> On Tue, Feb 13, 2024 at 01:53:57PM -0800, Nicolin Chen wrote:
> > @@ -2967,7 +2967,7 @@ static struct nvme_dev *nvme_pci_alloc_dev(struct pci_dev *pdev,
> >               dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(48));
> >       else
> >               dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(64));
> > -     dma_set_min_align_mask(&pdev->dev, NVME_CTRL_PAGE_SIZE - 1);
> > +     dma_set_min_align_mask(&pdev->dev, PAGE_SIZE - 1);
> >       dma_set_max_seg_size(&pdev->dev, 0xffffffff);
> 
> I recall we had to do this for POWER because they have 64k pages, but
> the IOMMU there maps page-aligned addresses at 4k granularity, so we
> needed to allow the lower dma alignment to use it efficiently.

Thanks for the input!
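
To restate the property being relied on, as I understand it: swiotlb
guarantees that a bounce buffer preserves the original buffer's
low-order bits under min_align_mask, i.e.

    (bounce_addr & min_align_mask) == (orig_addr & min_align_mask)

so a 4k-1 mask keeps the PRP offset arithmetic valid across the
bounce copy without forcing full 64k alignment on POWER.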

In that case, we might have to rely on iovad->granule from the
attached iommu_domain:

+static size_t iommu_dma_max_mapping_size(struct device *dev)
+{
+       struct iommu_domain *domain = iommu_get_domain_for_dev(dev);
+
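+       /* Only clamp when an untrusted device may bounce through swiotlb */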
+       if (!domain || !is_swiotlb_active(dev) || !dev_is_untrusted(dev))
+               return SIZE_MAX;
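+
+       /* Round the swiotlb limit down to whole IOVA granules */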
+       return ALIGN_DOWN(swiotlb_max_mapping_size(dev),
+                         domain->iova_cookie->iovad.granule);
+}
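
To sanity-check the numbers, here is a rough userspace sketch (not
kernel code). It assumes the default swiotlb geometry, IO_TLB_SIZE of
2KB and IO_TLB_SEGSIZE of 128 (a 256KB contiguous bounce limit), and
that swiotlb_max_mapping_size() subtracts the min_align_mask round-up
the way current kernels do:

#include <stdio.h>
#include <stddef.h>

#define SZ_1K		1024UL
#define IO_TLB_SIZE	(2 * SZ_1K)
#define IO_TLB_SEGSIZE	128

/* Power-of-two align-down, mirroring the kernel's ALIGN_DOWN() */
#define ALIGN_DOWN(x, a)	((x) & ~((size_t)(a) - 1))

static size_t max_mapping_size(size_t min_align_mask)
{
	/* Capacity lost to slots skipped while honouring the mask */
	size_t lost = 0;

	if (min_align_mask)
		lost = (min_align_mask / IO_TLB_SIZE + 1) * IO_TLB_SIZE;

	return IO_TLB_SIZE * IO_TLB_SEGSIZE - lost;
}

int main(void)
{
	size_t granule = 64 * SZ_1K;	/* example: 64KB IOVA granule */
	size_t mask4k  = 4 * SZ_1K - 1;
	size_t mask64k = 64 * SZ_1K - 1;

	/* 4k-1 mask:  256K - 4K  = 252K; ALIGN_DOWN to 64K -> 192K */
	printf("4k mask:  %zu -> %zu\n", max_mapping_size(mask4k),
	       ALIGN_DOWN(max_mapping_size(mask4k), granule));

	/* 64k-1 mask: 256K - 64K = 192K; already granule-aligned */
	printf("64k mask: %zu -> %zu\n", max_mapping_size(mask64k),
	       ALIGN_DOWN(max_mapping_size(mask64k), granule));

	return 0;
}

Interestingly, with a 64k granule both masks end up at the same 192KB
per-mapping limit; the ALIGN_DOWN is what keeps that limit expressible
in whole granule-sized IOMMU pages.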

With this in PATCH-1, we can drop PATCH-2.
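
FWIW, nvme-pci already clamps its transfer size via the DMA layer;
from my reading of nvme_pci_alloc_dev(), along the lines of:

	dev->ctrl.max_hw_sectors = min_t(u32, NVME_MAX_KB_SZ << 1,
					 dma_max_mapping_size(&pdev->dev) >> 9);

so once iommu_dma_max_mapping_size() reports the granule-aligned
swiotlb limit, the driver picks it up without further changes.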

Thanks
Nicolin


