[PATCH 7/8] iommu/intel: Support the gfp argument to the map_pages op
Tian, Kevin
kevin.tian at intel.com
Mon Jan 16 19:38:51 PST 2023
> From: Jason Gunthorpe <jgg at nvidia.com>
> Sent: Saturday, January 7, 2023 12:43 AM
>
> @@ -2368,7 +2372,7 @@ static int iommu_domain_identity_map(struct dmar_domain *domain,
>
>  	return __domain_mapping(domain, first_vpfn,
>  				first_vpfn, last_vpfn - first_vpfn + 1,
> -				DMA_PTE_READ|DMA_PTE_WRITE);
> +				DMA_PTE_READ|DMA_PTE_WRITE, GFP_KERNEL);
>  }
Baolu, can you help confirm whether switching from GFP_ATOMIC to
GFP_KERNEL is OK in this path? It looks fine to me at a quick glance,
but I want to be conservative here.
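
To spell out the concern: GFP_KERNEL may sleep for reclaim while
GFP_ATOMIC never does, so the switch is only safe if every caller of
this path runs in process context with no spinlock held. Roughly (an
illustrative sketch of where the gfp ends up, not the actual
intel-iommu allocator; the function name here is made up):

static void *alloc_pgtable_page_sketch(int node, gfp_t gfp)
{
	struct page *page;

	/*
	 * The caller-supplied gfp is forwarded straight to the page
	 * allocator; might_sleep() fires in there for GFP_KERNEL, so
	 * this must not be reached from atomic context.
	 */
	page = alloc_pages_node(node, gfp | __GFP_ZERO, 0);
	return page ? page_address(page) : NULL;
}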
> @@ -4333,7 +4337,8 @@ static size_t intel_iommu_unmap(struct iommu_domain *domain,
>
>  	/* Cope with horrid API which requires us to unmap more than the
>  	   size argument if it happens to be a large-page mapping. */
> -	BUG_ON(!pfn_to_dma_pte(dmar_domain, iova >> VTD_PAGE_SHIFT, &level));
> +	BUG_ON(!pfn_to_dma_pte(dmar_domain, iova >> VTD_PAGE_SHIFT, &level,
> +			       GFP_ATOMIC));
With level == 0 this is a pure lookup without page-table allocation,
so the gfp argument is never consumed on this path. From that angle it
reads better to use a more relaxed gfp, e.g. GFP_KERNEL, here.
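
That can be seen from the shape of the walk (a simplified sketch based
on my reading of pfn_to_dma_pte(), not the exact upstream body; the
dma_pte helpers are the real ones from drivers/iommu/intel/):

static struct dma_pte *pfn_to_dma_pte_sketch(struct dmar_domain *domain,
					     unsigned long pfn,
					     int *target_level, gfp_t gfp)
{
	struct dma_pte *parent = domain->pgd, *pte;
	int level = agaw_to_level(domain->agaw);

	while (1) {
		pte = &parent[pfn_level_offset(pfn, level)];

		/*
		 * *target_level == 0 means pure lookup: stop at the
		 * first superpage or non-present entry instead of
		 * allocating a missing level, so @gfp is never used.
		 */
		if (!*target_level &&
		    (dma_pte_superpage(pte) || !dma_pte_present(pte)))
			break;
		if (level == *target_level)
			break;

		if (!dma_pte_present(pte)) {
			/*
			 * Only the map path gets here and allocates the
			 * next level with @gfp (elided in this sketch).
			 */
			break;
		}

		parent = phys_to_virt(dma_pte_addr(pte));
		level--;
	}

	if (!*target_level)
		*target_level = level;
	return pte;
}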
> @@ -4392,7 +4397,8 @@ static phys_addr_t intel_iommu_iova_to_phys(struct iommu_domain *domain,
>  	int level = 0;
>  	u64 phys = 0;
>
> -	pte = pfn_to_dma_pte(dmar_domain, iova >> VTD_PAGE_SHIFT, &level);
> +	pte = pfn_to_dma_pte(dmar_domain, iova >> VTD_PAGE_SHIFT, &level,
> +			     GFP_ATOMIC);
Ditto - iova_to_phys starts with level = 0 too, i.e. another pure
lookup.