[PATCH v2 2/4] iommu: Implement common IOMMU ops for DMA mapping
Robin Murphy
robin.murphy at arm.com
Fri Jul 3 09:44:03 PDT 2015
On 03/07/15 10:27, Yong Wu wrote:
[...]
>> +/**
>> + * iommu_dma_alloc - Allocate and map a buffer contiguous in IOVA space
>> + * @dev: Device to allocate memory for. Must be a real device
>> + * attached to an iommu_dma_domain
>> + * @size: Size of buffer in bytes
>> + * @gfp: Allocation flags
>> + * @prot: IOMMU mapping flags
>> + * @coherent: Which dma_mask to base IOVA allocation on
>> + * @handle: Out argument for allocated DMA handle
>> + * @flush_page: Arch callback to flush a single page from caches as
>> + * necessary. May be NULL for coherent allocations
>> + *
>> + * If @size is less than PAGE_SIZE, then a full CPU page will be allocated,
>> + * but an IOMMU which supports smaller pages might not map the whole thing.
>> + * For now, the buffer is unconditionally zeroed for compatibility
>> + *
>> + * Return: Array of struct page pointers describing the buffer,
>> + * or NULL on failure.
>> + */
>> +struct page **iommu_dma_alloc(struct device *dev, size_t size, gfp_t gfp,
>> + int prot, bool coherent, dma_addr_t *handle,
>> + void (*flush_page)(const void *, phys_addr_t))
>> +{
>> + struct iommu_dma_domain *dom = arch_get_dma_domain(dev);
>> + struct iova_domain *iovad = dom->iovad;
>> + struct iova *iova;
>> + struct page **pages;
>> + struct sg_table sgt;
>> + struct sg_mapping_iter miter;
>> + dma_addr_t dma_addr;
>> + unsigned int count = PAGE_ALIGN(size) >> PAGE_SHIFT;
>> +
>> + *handle = DMA_ERROR_CODE;
>
> Hi Robin,
>
> Compared with the previous DMA code, you have deleted this line here:
>
> size = PAGE_ALIGN(size);
>
> Do you expect the caller to make sure the size is aligned?
>
> The same applies to __iommu_free_attrs.
Yeah, there is an oversight here due to some back-and-forth refactoring
- both the page allocation and the IOVA allocation do get suitably
aligned, as they should, but the segments of the temporary scatterlist
don't. Somehow I managed not to hit an iommu_map_sg failure because of
that until some time after this posting (it seems most of the drivers I
test with only make page-sized allocations...).
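For reference, iova_align() just rounds a size up to the IOVA domain's
granule, i.e.:

	static inline size_t iova_align(struct iova_domain *iovad, size_t size)
	{
		return ALIGN(size, iovad->granule);
	}

To sketch the failure with made-up numbers: a 5000-byte request still
gets two whole 4K pages and a suitably-sized IOVA region, but
sg_alloc_table_from_pages() coalesces those contiguous pages into a
single 5000-byte segment, which iommu_map_sg refuses to map since it
isn't a multiple of the IOMMU page size. Aligning the size to the
granule before building the scatterlist avoids that.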
I've currently got the fixup below waiting for the next posting.
Robin.
>> +
>> + /* IOMMU can map any pages, so highmem can also be used here */
>> + gfp |= __GFP_NOWARN | __GFP_HIGHMEM;
>> + pages = __iommu_dma_alloc_pages(count, gfp);
>> + if (!pages)
>> + return NULL;
>> +
>> + iova = __alloc_iova(dev, size, coherent);
>> + if (!iova)
>> + goto out_free_pages;
>> +
+ size = iova_align(iovad, size);
>> + if (sg_alloc_table_from_pages(&sgt, pages, count, 0, size, GFP_KERNEL))
>> + goto out_free_iova;
>> +
>> + dma_addr = iova_dma_addr(iovad, iova);
>> + if (iommu_map_sg(dom->domain, dma_addr, sgt.sgl, sgt.orig_nents, prot)
>> + < size)
>> + goto out_free_sg;
>> +
>> + /* Using the non-flushing flag since we're doing our own */
>> + sg_miter_start(&miter, sgt.sgl, sgt.orig_nents, SG_MITER_FROM_SG);
>> + while (sg_miter_next(&miter)) {
>> + memset(miter.addr, 0, PAGE_SIZE);
>> + if (flush_page)
>> + flush_page(miter.addr, page_to_phys(miter.page));
>> + }
>> + sg_miter_stop(&miter);
>> + sg_free_table(&sgt);
>> +
>> + *handle = dma_addr;
>> + return pages;
>> +
>> +out_free_sg:
>> + sg_free_table(&sgt);
>> +out_free_iova:
>> + __free_iova(iovad, iova);
>> +out_free_pages:
>> + __iommu_dma_free_pages(pages, count);
>> + return NULL;
>> +}
>> +
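As an aside, since the question of how this gets used may come up: the
intended caller is the arch code's dma_map_ops .alloc implementation.
A rough sketch - everything here other than iommu_dma_alloc() and the
generic helpers is an illustrative name, the prot flags are arbitrary,
and error handling is elided:

	static void example_flush_page(const void *virt, phys_addr_t phys)
	{
		/* arch-specific cache maintenance for a single page */
	}

	static void *example_arch_alloc(struct device *dev, size_t size,
					dma_addr_t *handle, gfp_t gfp,
					struct dma_attrs *attrs)
	{
		bool coherent = is_device_dma_coherent(dev);
		struct page **pages;

		pages = iommu_dma_alloc(dev, size, gfp,
					IOMMU_READ | IOMMU_WRITE, coherent,
					handle,
					coherent ? NULL : example_flush_page);
		if (!pages)
			return NULL;

		/* stitch the pages into a contiguous kernel mapping */
		return dma_common_pages_remap(pages, PAGE_ALIGN(size),
					      VM_USERMAP, PAGE_KERNEL,
					      __builtin_return_address(0));
	}

The real arch-side wiring is of course done elsewhere in the series.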
> [...]
>> +
>> +#endif /* __KERNEL__ */
>> +#endif /* __DMA_IOMMU_H */
>
>