[PATCH v3 09/17] docs: core-api: document the IOVA-based API
anish kumar
yesanishhere at gmail.com
Sun Nov 10 18:05:35 PST 2024
On Sun, Nov 10, 2024 at 5:50 AM Leon Romanovsky <leon at kernel.org> wrote:
>
> From: Christoph Hellwig <hch at lst.de>
>
> Add an explanation of the newly added IOVA-based mapping API.
>
> Signed-off-by: Christoph Hellwig <hch at lst.de>
> Signed-off-by: Leon Romanovsky <leonro at nvidia.com>
> ---
> Documentation/core-api/dma-api.rst | 70 ++++++++++++++++++++++++++++++
> 1 file changed, 70 insertions(+)
>
> diff --git a/Documentation/core-api/dma-api.rst b/Documentation/core-api/dma-api.rst
> index 8e3cce3d0a23..61d6f4fe3d88 100644
> --- a/Documentation/core-api/dma-api.rst
> +++ b/Documentation/core-api/dma-api.rst
> @@ -530,6 +530,76 @@ routines, e.g.:::
> ....
> }
>
> +Part Ie - IOVA-based DMA mappings
> +---------------------------------
> +
> +These APIs allow a very efficient mapping when using an IOMMU. They are an
The pronoun "They" reads awkwardly here.
> +optional path that requires extra code and are only recommended for drivers
> +where DMA mapping performance, or the space usage for storing the DMA addresses
> +matter. All the considerations from the previous section apply here as well.
These APIs provide an efficient mapping when using an IOMMU. However, they
are an optional path that requires additional code. They are recommended
primarily for drivers where DMA mapping performance, or the space needed to
store DMA addresses, is critical. All the considerations discussed in the
previous section also apply in this case.
You can disregard this comment, as anyone reading this paragraph will
understand the intended message.
> +
> +::
> +
> + bool dma_iova_try_alloc(struct device *dev, struct dma_iova_state *state,
> + phys_addr_t phys, size_t size);
> +
> +Is used to try to allocate IOVA space for a mapping operation. If it returns
> +false this API can't be used for the given device and the normal streaming
> +DMA mapping API should be used. The ``struct dma_iova_state`` is allocated
> +by the driver and must be kept around until unmap time.
> +
> +::
> +
> + static inline bool dma_use_iova(struct dma_iova_state *state)
> +
> +Can be used by the driver to check if the IOVA-based API is in use after a
> +call to ``dma_iova_try_alloc()``. This can be useful in the unmap path.
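A hedged illustration (not part of the patch) of how a driver's teardown
path might branch on this check; ``my_unmap`` and ``fallback_addr`` are
made-up names for the sketch:

```c
/* Illustrative only: branch the unmap path on whether the IOVA-based
 * API was actually used for this mapping, falling back to the normal
 * streaming unmap otherwise. */
static void my_unmap(struct device *dev, struct dma_iova_state *state,
		     dma_addr_t fallback_addr, size_t size,
		     enum dma_data_direction dir)
{
	if (dma_use_iova(state))
		dma_iova_destroy(dev, state, dir, 0);
	else
		dma_unmap_single(dev, fallback_addr, size, dir);
}
```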
> +
> +::
> +
> + int dma_iova_link(struct device *dev, struct dma_iova_state *state,
> + phys_addr_t phys, size_t offset, size_t size,
> + enum dma_data_direction dir, unsigned long attrs);
> +
> +Is used to link ranges to the IOVA previously allocated. The start of all
> +but the first call to ``dma_iova_link()`` for a given state must be aligned
> +to the DMA merge boundary returned by ``dma_get_merge_boundary()``, and
> +the size of all but the last range must be aligned to the DMA merge boundary
> +as well.
> +
> +::
> +
> + int dma_iova_sync(struct device *dev, struct dma_iova_state *state,
> + size_t offset, size_t size);
> +
> +Must be called to sync the IOMMU page tables for the IOVA range mapped by one or
> +more calls to ``dma_iova_link()``.
> +
> +For drivers that use a one-shot mapping, all ranges can be unmapped and the
> +IOVA freed by calling:
> +
> +::
> +
> + void dma_iova_destroy(struct device *dev, struct dma_iova_state *state,
> + enum dma_data_direction dir, unsigned long attrs);
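A sketch (my addition, not from the patch) of how the one-shot flow might
look in a driver; ``my_map_one_shot`` and the specific error-handling
choices are assumptions:

```c
/* Sketch only: allocate IOVA space, link a single range, sync, and
 * report -EOPNOTSUPP so the caller can fall back to the streaming
 * DMA mapping API when the IOVA path is unavailable. */
static int my_map_one_shot(struct device *dev, struct dma_iova_state *state,
			   phys_addr_t phys, size_t size,
			   enum dma_data_direction dir)
{
	int ret;

	if (!dma_iova_try_alloc(dev, state, phys, size))
		return -EOPNOTSUPP;	/* use dma_map_single() etc. instead */

	ret = dma_iova_link(dev, state, phys, 0, size, dir, 0);
	if (ret)
		goto err_free;

	ret = dma_iova_sync(dev, state, 0, size);
	if (ret)
		goto err_destroy;
	return 0;

err_destroy:
	/* one-shot teardown: unmaps the linked range and frees the IOVA */
	dma_iova_destroy(dev, state, dir, 0);
	return ret;
err_free:
	/* nothing was linked yet, so only the IOVA space needs freeing */
	dma_iova_free(dev, state);
	return ret;
}
```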
> +
> +Alternatively, drivers can dynamically manage the IOVA space by unmapping
> +and mapping individual regions. In that case
> +
> +::
> +
> + void dma_iova_unlink(struct device *dev, struct dma_iova_state *state,
> + size_t offset, size_t size, enum dma_data_direction dir,
> + unsigned long attrs);
> +
> +is used to unmap a range previously mapped, and
> +
> +::
> +
> + void dma_iova_free(struct device *dev, struct dma_iova_state *state);
> +
> +is used to free the IOVA space. All regions must have been unmapped using
> +``dma_iova_unlink()`` before calling ``dma_iova_free()``.
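To make the dynamic-management variant concrete, here is a hedged sketch
(my addition, not from the patch) of replacing one region inside a
previously allocated IOVA range; ``my_replace_region`` is a made-up name,
and the offset/size arguments are assumed to respect the merge-boundary
alignment rules described above:

```c
/* Sketch only: unlink one region, link a new physical range at the
 * same offset, then sync the affected IOMMU page-table entries. */
static int my_replace_region(struct device *dev, struct dma_iova_state *state,
			     size_t offset, size_t size,
			     phys_addr_t new_phys,
			     enum dma_data_direction dir)
{
	int ret;

	dma_iova_unlink(dev, state, offset, size, dir, 0);

	ret = dma_iova_link(dev, state, new_phys, offset, size, dir, 0);
	if (ret)
		return ret;

	/* make the updated IOMMU page-table entries visible to the device */
	return dma_iova_sync(dev, state, offset, size);
}
```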
>
> Part II - Non-coherent DMA allocations
> --------------------------------------
> --
> 2.47.0
>
>