[PATCH v4 2/4] iommu: Implement common IOMMU ops for DMA mapping
Catalin Marinas
catalin.marinas at arm.com
Tue Jul 28 09:45:27 PDT 2015
On Thu, Jul 16, 2015 at 07:40:13PM +0100, Robin Murphy wrote:
> +/**
> + * iommu_dma_alloc - Allocate and map a buffer contiguous in IOVA space
> + * @dev: Device to allocate memory for. Must be a real device
> + *	 attached to an iommu_dma_domain
> + * @size: Size of buffer in bytes
> + * @gfp: Allocation flags
> + * @prot: IOMMU mapping flags
> + * @handle: Out argument for allocated DMA handle
> + * @flush_page: Arch callback to flush a single page from all caches to
> + *		ensure DMA visibility of __GFP_ZERO. May be NULL if not
> + *		required.
> + *
> + * If @size is less than PAGE_SIZE, then a full CPU page will be allocated,
> + * but an IOMMU which supports smaller pages might not map the whole thing.
> + *
> + * Return: Array of struct page pointers describing the buffer,
> + *	   or NULL on failure.
> + */
> +struct page **iommu_dma_alloc(struct device *dev, size_t size,
> +		gfp_t gfp, int prot, dma_addr_t *handle,
> +		void (*flush_page)(const void *, phys_addr_t))
> +{
> +	struct iommu_domain *domain = iommu_get_domain_for_dev(dev);
> +	struct iova_domain *iovad = domain->dma_api_cookie;
> +	struct iova *iova;
> +	struct page **pages;
> +	struct sg_table sgt;
> +	dma_addr_t dma_addr;
> +	unsigned int count = PAGE_ALIGN(size) >> PAGE_SHIFT;
> +
> +	*handle = DMA_ERROR_CODE;
> +
> +	pages = __iommu_dma_alloc_pages(count, gfp);
> +	if (!pages)
> +		return NULL;
> +
> +	iova = __alloc_iova(iovad, size, dev->coherent_dma_mask);
> +	if (!iova)
> +		goto out_free_pages;
> +
> +	size = iova_align(iovad, size);
> +	if (sg_alloc_table_from_pages(&sgt, pages, count, 0, size, GFP_KERNEL))
> +		goto out_free_iova;
> +
> +	if (gfp & __GFP_ZERO) {
> +		struct sg_mapping_iter miter;
> +		/*
> +		 * The flushing provided by the SG_MITER_TO_SG flag only
> +		 * applies to highmem pages, so it won't do the job here.
> +		 */
The comment here is slightly wrong. The SG_MITER_TO_SG flushing is done
by calling flush_kernel_dcache_page(). That function can be a no-op even
on architectures implementing highmem; on arm32, for example, it only
does any flushing when there is a potential D-cache alias with user
space. The flush that you call below is for DMA operations, which is
something entirely different.
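For reference, the relevant bit of sg_miter_stop() is roughly this
(paraphrased sketch from memory, not a verbatim quote of
lib/scatterlist.c):

	/* only the "data written into the sg pages" direction is flushed */
	if (miter->__flags & SG_MITER_TO_SG)
		flush_kernel_dcache_page(miter->page);

and on arm32 flush_kernel_dcache_page() only flushes when the cache is
VIVT or aliasing VIPT and the page may be mapped into user space; on a
non-aliasing system it does nothing at all, highmem or not.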
> +		sg_miter_start(&miter, sgt.sgl, sgt.orig_nents, SG_MITER_FROM_SG);
> +		while (sg_miter_next(&miter)) {
> +			memset(miter.addr, 0, PAGE_SIZE);
Isn't __GFP_ZERO already honoured by alloc_pages in
__iommu_dma_alloc_pages?
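If __iommu_dma_alloc_pages() simply forwards gfp to the page allocator,
i.e. something along these lines (hypothetical sketch, not the actual
patch code):

	/* gfp still contains __GFP_ZERO here, so the page comes back zeroed */
	struct page *page = alloc_pages(gfp, 0);

then the memset() above is redundant as far as zeroing goes and only the
cache maintenance part is doing any real work.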
> +			if (flush_page)
> +				flush_page(miter.addr, page_to_phys(miter.page));
Could you instead make this check !(prot & IOMMU_CACHE), so that
architectures don't have to pass a NULL flush_page callback for coherent
devices?
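I.e. something like this (untested sketch, assuming prot carries the
standard IOMMU_* flags at this point):

-			if (flush_page)
-				flush_page(miter.addr, page_to_phys(miter.page));
+			if (!(prot & IOMMU_CACHE))
+				flush_page(miter.addr, page_to_phys(miter.page));

That way flush_page can always be a valid function pointer and the
coherent case is decided by the mapping attributes instead.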
I'm still not convinced we need this flushing here but I'll follow up in
patch 3/4.
--
Catalin