[PATCH 3/3] block-dma: properly take MMIO path
Christoph Hellwig
hch at lst.de
Thu Oct 16 23:25:19 PDT 2025
On Fri, Oct 17, 2025 at 08:32:00AM +0300, Leon Romanovsky wrote:
> From: Leon Romanovsky <leonro at nvidia.com>
>
> Make sure that CPU is not synced and IOMMU is configured to take
> MMIO path by providing newly introduced DMA_ATTR_MMIO attribute.
Please write a commit log that explains this. Where was DMA_ATTR_MMIO
recently introduced? Why? What does this actually fix or improve?
> @@ -184,6 +184,12 @@ static bool blk_dma_map_iter_start(struct request *req, struct device *dma_dev,
> * P2P transfers through the host bridge are treated the
> * same as non-P2P transfers below and during unmap.
> */
> + if (iter->iter.is_integrity)
> + bio_integrity(req->bio)->bip_flags |= BIP_MMIO;
> + else
> + req->cmd_flags |= REQ_MMIO;
> + iter->iter.attrs |= DMA_ATTR_MMIO;
REQ_MMIO / BIP_MMIO is not block layer state, but driver state resulting
from the DMA mapping. Reflecting it in block layer data structures
is not a good idea. This is really something that just needs to be
communicated outward and recorded in the driver. For nvme I suspect
two new flags in nvme_iod_flags would be the right place, assuming
we actually need them. But do we need them at all? If REQ_/BIP_P2PDMA
is set, these are always true.