[PATCH v1 07/17] dma-mapping: Implement link/unlink ranges API
Christoph Hellwig
hch at lst.de
Mon Nov 4 04:53:02 PST 2024
On Mon, Nov 04, 2024 at 08:19:24AM -0400, Jason Gunthorpe wrote:
> > That's a good point. Only mapped through host bridge P2P can even
> > end up here, so the address is a perfectly valid physical address
> > in the host. But I'm not sure if all arch_sync_dma_for_device
> > implementations handle IOMMU memory fine.
>
> I was told on x86 if you do a cache flush operation on MMIO there is a
> chance it will MCE. Recently had some similar discussions about ARM
> where it was asserted some platforms may have similar.
On x86 we never flush caches for DMA operations anyway, so x86 isn't
really the concern here; the concern is architectures that do cache-incoherent
DMA to PCIe devices. That isn't a whole lot of them, as most SoCs try to avoid
it for PCIe even when they lack DMA coherence for lesser peripherals, but I bet
there are some on arm/arm64 and maybe riscv or mips.
> It would be safest to only make arch cache-flushing calls on memory that is
> mapped cacheable. We can assume that a P2P target is never CPU
> mapped cacheable, regardless of how the DMA is routed.
Yes. I.e. force DMA_ATTR_SKIP_CPU_SYNC for P2P.
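
Something along these lines, as a rough sketch only (example_link_page and its
signature are made up for illustration; is_pci_p2pdma_page(),
dev_is_dma_coherent(), arch_sync_dma_for_device() and DMA_ATTR_SKIP_CPU_SYNC
are the existing kernel interfaces):

#include <linux/dma-mapping.h>
#include <linux/dma-map-ops.h>
#include <linux/memremap.h>

static void example_link_page(struct device *dev, struct page *page,
		size_t offset, size_t size, enum dma_data_direction dir,
		unsigned long attrs)
{
	phys_addr_t phys = page_to_phys(page) + offset;

	/* P2P targets are MMIO and never mapped cacheable by the CPU */
	if (is_pci_p2pdma_page(page))
		attrs |= DMA_ATTR_SKIP_CPU_SYNC;

	/*
	 * Only touch the CPU cache for cacheable system memory.
	 * arch_sync_dma_for_device() is only built on architectures that
	 * select ARCH_HAS_SYNC_DMA_FOR_DEVICE.
	 */
	if (!dev_is_dma_coherent(dev) && !(attrs & DMA_ATTR_SKIP_CPU_SYNC))
		arch_sync_dma_for_device(phys, size, dir);

	/* ... hand phys off to the IOMMU / IOVA linking ... */
}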