[PATCH v1 09/17] docs: core-api: document the IOVA-based API
Randy Dunlap
rdunlap at infradead.org
Wed Oct 30 18:41:21 PDT 2024
(nits)
On 10/30/24 8:12 AM, Leon Romanovsky wrote:
> From: Christoph Hellwig <hch at lst.de>
>
> Add an explanation of the newly added IOVA-based mapping API.
>
> Signed-off-by: Christoph Hellwig <hch at lst.de>
> Signed-off-by: Leon Romanovsky <leonro at nvidia.com>
> ---
> Documentation/core-api/dma-api.rst | 70 ++++++++++++++++++++++++++++++
> 1 file changed, 70 insertions(+)
>
> diff --git a/Documentation/core-api/dma-api.rst b/Documentation/core-api/dma-api.rst
> index 8e3cce3d0a23..6095696a65a7 100644
> --- a/Documentation/core-api/dma-api.rst
> +++ b/Documentation/core-api/dma-api.rst
> @@ -530,6 +530,76 @@ routines, e.g.:::
> ....
> }
>
> +Part Ie - IOVA-based DMA mappings
> +---------------------------------
> +
> +These APIs allow a very efficient mapping when using an IOMMU. They are an
> +optional path that requires extra code and are only recommended for drivers
> +where DMA mapping performance, or the space usage for storing the DMA addresses
> +matter. All the consideration from the previous section apply here as well.
considerations
> +
> +::
> +
> + bool dma_iova_try_alloc(struct device *dev, struct dma_iova_state *state,
> + phys_addr_t phys, size_t size);
> +
> +Is used to try to allocate IOVA space for a mapping operation. If it returns
> +false this API can't be used for the given device and the normal streaming
> +DMA mapping API should be used. The ``struct dma_iova_state`` is allocated
> +by the driver and must be kept around until unmap time.
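A short usage sketch might help readers here. Something like the below,
where ``struct foo_request`` and the ``foo_*()`` helpers are made-up names
and the buffer is assumed to be a single page mapped for output:

    #include <linux/dma-mapping.h>
    #include <linux/mm.h>

    struct foo_request {
            struct dma_iova_state state;    /* must live until unmap time */
            dma_addr_t fallback_addr;       /* only used on the fallback path */
            size_t len;
    };

    static int foo_map(struct device *dev, struct foo_request *req,
                       struct page *page, size_t size)
    {
            req->len = size;
            if (dma_iova_try_alloc(dev, &req->state, page_to_phys(page), size))
                    return 0;       /* IOVA reserved, link ranges next */

            /* Fall back to the normal streaming DMA mapping API. */
            req->fallback_addr = dma_map_page(dev, page, 0, size, DMA_TO_DEVICE);
            if (dma_mapping_error(dev, req->fallback_addr))
                    return -ENOMEM;
            return 0;
    }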
> +
> +::
> +
> + static inline bool dma_use_iova(struct dma_iova_state *state)
> +
> +Can be used by the driver to check if the IOVA-based API is used after a
> +call to dma_iova_try_alloc. This can be useful in the unmap path.
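Maybe also a couple of lines showing the typical check in the unmap path,
continuing the made-up ``foo_request`` sketch from above:

    static void foo_unmap(struct device *dev, struct foo_request *req)
    {
            if (dma_use_iova(&req->state)) {
                    /*
                     * Mapped via the IOVA calls; torn down with
                     * dma_iova_destroy() or dma_iova_unlink() + dma_iova_free(),
                     * see below.
                     */
            } else {
                    dma_unmap_page(dev, req->fallback_addr, req->len,
                                   DMA_TO_DEVICE);
            }
    }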
> +
> +::
> +
> + int dma_iova_link(struct device *dev, struct dma_iova_state *state,
> + phys_addr_t phys, size_t offset, size_t size,
> + enum dma_data_direction dir, unsigned long attrs);
> +
> +Is used to link ranges to the IOVA previously allocated. The start of all
> +but the first call to dma_iova_link for a given state must be aligned
> +to the DMA merge boundary returned by ``dma_get_merge_boundary()``, and
> +the size of all but the last range must be aligned to the DMA merge boundary
> +as well.
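The alignment rule might be easier to grasp with a small sketch. Assuming
the made-up ``foo_request`` again, and an array of physical ranges where
every ``phys[i]`` except the first is aligned to ``dma_get_merge_boundary()``
and every ``len[i]`` except the last is a multiple of it:

    static int foo_link_ranges(struct device *dev, struct foo_request *req,
                               phys_addr_t *phys, size_t *len, int nr)
    {
            size_t offset = 0;
            int i, ret;

            for (i = 0; i < nr; i++) {
                    ret = dma_iova_link(dev, &req->state, phys[i], offset,
                                        len[i], DMA_TO_DEVICE, 0);
                    if (ret)
                            return ret;
                    offset += len[i];
            }
            return 0;       /* dma_iova_sync() still has to be called */
    }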
> +
> +::
> +
> + int dma_iova_sync(struct device *dev, struct dma_iova_state *state,
> + size_t offset, size_t size);
> +
> +Must be called to sync the IOMMU page tables for the IOVA range mapped by one or
> +more calls to ``dma_iova_link()``.
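Continuing the sketch above, after ``foo_link_ranges()`` succeeds
(``total_len`` being the sum of the linked range sizes):

    ret = dma_iova_sync(dev, &req->state, 0, total_len);
    if (ret)
            return ret;     /* unwind the mapping on error */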
> +
> +For drivers that use a one-shot mapping, all ranges can be unmapped and the
> +IOVA freed by calling:
> +
> +::
> +
> + void dma_iova_destroy(struct device *dev, struct dma_iova_state *state,
> + enum dma_data_direction dir, unsigned long attrs);
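e.g. the IOVA branch of the ``foo_unmap()`` sketch above would collapse to
a single call, which both unmaps all linked ranges and frees the IOVA space:

    dma_iova_destroy(dev, &req->state, DMA_TO_DEVICE, 0);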
> +
> +Alternatively drivers can dynamically manage the IOVA space by unmapping
> +and mapping individual regions. In that case
> +
> +::
> +
> + void dma_iova_unlink(struct device *dev, struct dma_iova_state *state,
> + size_t offset, size_t size, enum dma_data_direction dir,
> + unsigned long attrs);
> +
> +is used to unmap a range previous mapped, and
previously
> +
> +::
> +
> + void dma_iova_free(struct device *dev, struct dma_iova_state *state);
> +
> +is used to free the IOVA space. All regions must have been unmapped using
> +``dma_iova_unlink()`` before calling ``dma_iova_free()``.
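A sketch of the dynamic variant might round this out, again with made-up
``foo_*()`` names: ``foo_replace_region()`` swaps the backing of one region
while the IOVA reservation stays alive, and ``foo_teardown()`` is the final
cleanup:

    /* Remap one region of the reserved IOVA space to a new physical range. */
    static int foo_replace_region(struct device *dev, struct foo_request *req,
                                  size_t offset, size_t size,
                                  phys_addr_t new_phys)
    {
            int ret;

            dma_iova_unlink(dev, &req->state, offset, size, DMA_TO_DEVICE, 0);
            ret = dma_iova_link(dev, &req->state, new_phys, offset, size,
                                DMA_TO_DEVICE, 0);
            if (ret)
                    return ret;
            return dma_iova_sync(dev, &req->state, offset, size);
    }

    /* Final teardown: everything must be unlinked before the IOVA is freed. */
    static void foo_teardown(struct device *dev, struct foo_request *req)
    {
            dma_iova_unlink(dev, &req->state, 0, req->len, DMA_TO_DEVICE, 0);
            dma_iova_free(dev, &req->state);
    }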
>
> Part II - Non-coherent DMA allocations
> --------------------------------------
--
~Randy