[PATCH 3/3] block-dma: properly take MMIO path
Leon Romanovsky
leon at kernel.org
Mon Oct 20 01:56:48 PDT 2025
On Fri, Oct 17, 2025 at 08:25:19AM +0200, Christoph Hellwig wrote:
> On Fri, Oct 17, 2025 at 08:32:00AM +0300, Leon Romanovsky wrote:
> > From: Leon Romanovsky <leonro at nvidia.com>
> >
> > Make sure that the CPU is not synced and the IOMMU is configured to
> > take the MMIO path by providing the newly introduced DMA_ATTR_MMIO
> > attribute.
<...>
> > + if (iter->iter.is_integrity)
> > + bio_integrity(req->bio)->bip_flags |= BIP_MMIO;
> > + else
> > + req->cmd_flags |= REQ_MMIO;
> > + iter->iter.attrs |= DMA_ATTR_MMIO;
>
> REQ_MMIO / BIP_MMIO is not block layer state, but driver state resulting
> from the dma mapping. Reflecting it in block layer data structures
> is not a good idea. This is really something that just needs to be
> communicated outward and recorded in the driver. For nvme I suspect
> two new flags in nvme_iod_flags would be the right place, assuming
> we actually need it. But do we need it? If REQ_/BIP_P2PDMA is set,
> these are always true.
We have three different flows:

1. The regular one, backed by struct page, e.g. dma_map_page().
2. PCI_P2PDMA_MAP_BUS_ADDR - a non-DMA flow; the device uses the PCI bus
   address directly and nothing is mapped through the DMA API.
3. PCI_P2PDMA_MAP_THRU_HOST_BRIDGE - DMA without struct page, e.g.
   dma_map_resource().

Two bits are needed to distinguish them; see the sketch below.
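
For illustration, a minimal sketch of how a driver could record this with
two bits, along the lines of the nvme_iod_flags suggestion above. The flag
names and the unmap helper are made up for this example; they are not
actual nvme code:

	#include <linux/dma-mapping.h>

	/* Driver-private bits recording how the request was mapped. */
	enum {
		IOD_P2P_BUS	= 1 << 0, /* flow 2: bus address, nothing was DMA mapped */
		IOD_MMIO	= 1 << 1, /* flow 3: MMIO, mapped via dma_map_resource() */
		/* neither bit set: flow 1, regular struct page backed mapping */
	};

	/* Pick the matching unmap path based on the recorded bits. */
	static void iod_unmap(struct device *dev, dma_addr_t addr, size_t len,
			      enum dma_data_direction dir, unsigned int flags)
	{
		if (flags & IOD_P2P_BUS)
			return;		/* no DMA mapping to tear down */
		if (flags & IOD_MMIO)
			dma_unmap_resource(dev, addr, len, dir, 0);
		else
			dma_unmap_page(dev, addr, len, dir);
	}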
Thanks