[PATCH v9 0/6] KVM: arm64: Map GPU device memory as cacheable

Ankit Agrawal ankita at nvidia.com
Wed Jul 2 02:33:06 PDT 2025


> Grace-based platforms such as the Grace Hopper/Blackwell Superchips
> have CPU-accessible, cache-coherent GPU memory. The GPU device memory
> is essentially DDR memory and retains properties such as cacheability,
> unaligned accesses, atomics and handling of executable faults. This
> requires the device memory to be mapped as NORMAL in stage-2.
>
> Today KVM forces the memory type to either NORMAL or DEVICE_nGnRE
> depending on whether the memory region is added to the kernel. The
> KVM code is thus restrictive and prevents device memory that is not
> added to the kernel from being marked as cacheable. This series aims
> to solve that.
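>
> For illustration, the pre-series stage-2 attribute selection is
> roughly equivalent to the hypothetical helper below (a sketch with
> simplified naming; the real logic lives in user_mem_abort() in
> arch/arm64/kvm/mmu.c):
>
> #include <linux/kvm_host.h>	/* kvm_pfn_t */
> #include <asm/memory.h>	/* pfn_is_map_memory() */
>
> /*
>  * Sketch only: any PFN lacking a kernel linear-map alias (i.e. not
>  * kernel RAM) is forced to Device-nGnRE in stage-2, regardless of
>  * how the VMA maps it in stage-1.
>  */
> static bool stage2_force_device_memtype(kvm_pfn_t pfn)
> {
> 	return !pfn_is_map_memory(pfn);
> }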
>
> A cacheability check is made by consulting the VMA pgprot value. If
> the pgprot mapping type is cacheable, it is considered safe to map
> the memory cacheable in stage-2, as the KVM stage-2 will then have
> the same Normal memory type as the VMA has in stage-1, and KVM has
> no additional responsibility for safety.
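>
> As a sketch, that VMA check could look like the helper below
> (hypothetical naming; the series defines its own helper, and the
> MT_NORMAL* attribute indices are arm64-specific):
>
> #include <linux/mm.h>		/* struct vm_area_struct */
> #include <asm/pgtable-hwdef.h>	/* PTE_ATTRINDX*, MT_NORMAL* */
>
> static bool vma_is_cacheable(struct vm_area_struct *vma)
> {
> 	/*
> 	 * Compare the stage-1 memory attribute index encoded in the
> 	 * VMA's page protection against the Normal cacheable types.
> 	 */
> 	pteval_t attr = pgprot_val(vma->vm_page_prot) & PTE_ATTRINDX_MASK;
>
> 	return attr == PTE_ATTRINDX(MT_NORMAL) ||
> 	       attr == PTE_ATTRINDX(MT_NORMAL_TAGGED);
> }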
>
> Note that when FWB (Forced Write-Back) is not enabled, the kernel
> does cache maintenance by trivially converting a kvm_pte to a
> phys_addr and then, via the linear map, to a KVA. Cache maintenance
> thus relies on the memory being kernel mapped. Since the GPU device
> memory is not kernel mapped, bail out when FWB is not supported.
> Similarly, ARM64_HAS_CACHE_DIC allows KVM to avoid flushing the
> icache, turning icache_inval_pou() into a NOP. So the cacheable
> PFNMAP support is made contingent on these two hardware features.
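>
> A minimal sketch of that gating, assuming the standard cpufeature
> helpers (the wrapper name itself is hypothetical):
>
> #include <asm/cpufeature.h>	/* cpus_have_final_cap() */
>
> static bool kvm_supports_cacheable_pfnmap(void)
> {
> 	/*
> 	 * Without FWB, KVM needs a kernel alias of the memory to do
> 	 * cache maintenance, which PFNMAP device memory lacks.
> 	 */
> 	if (!cpus_have_final_cap(ARM64_HAS_STAGE2_FWB))
> 		return false;
>
> 	/*
> 	 * CACHE_DIC turns icache_inval_pou() into a NOP, so no icache
> 	 * maintenance through a kernel alias is needed either.
> 	 */
> 	return cpus_have_final_cap(ARM64_HAS_CACHE_DIC);
> }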
>
> The ability to safely map PFNMAP memory as cacheable is exposed
> through a KVM capability for userspace consumption.
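>
> From userspace the capability can be probed with the usual
> KVM_CHECK_EXTENSION ioctl. A minimal sketch, with the capability
> number passed in as a parameter since its name and value are defined
> by this series:
>
> #include <sys/ioctl.h>
> #include <linux/kvm.h>
>
> /* Returns nonzero when the (series-defined) capability is present. */
> static int cacheable_pfnmap_supported(int kvm_fd, int cap)
> {
> 	return ioctl(kvm_fd, KVM_CHECK_EXTENSION, cap) > 0;
> }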
>
> The changes are heavily influenced by the discussions among the
> maintainers Marc Zyngier and Oliver Upton, as well as Jason
> Gunthorpe, Catalin Marinas, David Hildenbrand and Sean
> Christopherson [1]. Many thanks for their valuable suggestions.
>
> Applied over next-20250610 and tested on the Grace Blackwell
> platform by booting up a VM, loading the NVIDIA module [2] and
> running nvidia-smi in the VM.
>
> To run CUDA workloads, there is a dependency on the IOMMUFD and
> Nested Page Table patches being worked on separately by Nicolin Chen
> (nicolinc at nvidia.com). NVIDIA has provided git repositories that
> include all the requisite kernel [3] and QEMU [4] patches in case
> one wants to try them.
>
> v8 -> v9
> 1. Also consider MIXEDMAP VMAs for cacheable mapping.
> (Jason Gunthorpe)
> 2. Minor text nits. (Jason Gunthorpe)

A humble reminder for review.

