[PATCH v9 0/6] KVM: arm64: Map GPU device memory as cacheable

Donald Dutile ddutile at redhat.com
Wed Jul 2 09:51:06 PDT 2025



On 7/2/25 5:33 AM, Ankit Agrawal wrote:
>> Grace-based platforms such as the Grace Hopper/Blackwell Superchips
>> have CPU-accessible, cache-coherent GPU memory. The GPU device memory
>> is essentially DDR memory and retains properties such as
>> cacheability, unaligned accesses, atomics and handling of executable
>> faults. This requires the device memory to be mapped as NORMAL in
>> stage-2.
>>
>> Today KVM forces the memory to either NORMAL or DEVICE_nGnRE
>> depending on whether the memory region is added to the kernel. The
>> KVM code is thus restrictive and prevents device memory that is not
>> added to the kernel from being marked as cacheable. This series aims
>> to solve that.
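>>
>> For reference, the existing policy is roughly the following (a
>> simplified sketch of the user_mem_abort() logic, not the exact code):
>>
>> 	/*
>> 	 * Sketch: a PFN without a kernel (linear) mapping is treated
>> 	 * as a device and gets DEVICE_nGnRE attributes at stage-2.
>> 	 */
>> 	if (!pfn_is_map_memory(pfn))
>> 		prot |= KVM_PGTABLE_PROT_DEVICE;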
>>
>> A cacheability check is made by consulting the VMA pgprot value. If
>> the pgprot mapping type is cacheable, it is considered safe to map
>> it cacheable at stage-2, since the KVM S2 will then have the same
>> Normal memory type as the VMA has in the S1 and KVM has no
>> additional responsibility for safety.
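>>
>> As an illustration, such a check might look like the following (a
>> minimal sketch; the helper name and exact attribute handling are
>> assumptions for this sketch, not necessarily the code in the series):
>>
>> static bool kvm_vma_is_cacheable(struct vm_area_struct *vma)
>> {
>> 	/* Look at the S1 memory attribute index in the VMA pgprot */
>> 	switch (FIELD_GET(PTE_ATTRINDX_MASK, pgprot_val(vma->vm_page_prot))) {
>> 	case MT_NORMAL_NC:
>> 	case MT_DEVICE_nGnRE:
>> 	case MT_DEVICE_nGnRnE:
>> 		return false;	/* non-cacheable or device memory */
>> 	default:
>> 		return true;	/* MT_NORMAL / MT_NORMAL_TAGGED */
>> 	}
>> }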
>>
>> Note that when FWB (Force Write-Back) is not enabled, the kernel
>> expects to do cache management by flushing the memory, trivially
>> converting a kvm_pte to a phys_addr and then to a KVA. Cache
>> management thus relies on the memory being kernel mapped. Since the
>> GPU device memory is not kernel mapped, bail out when FWB is not
>> supported. Similarly, ARM64_HAS_CACHE_DIC allows KVM to avoid
>> flushing the icache and turns icache_inval_pou() into a NOP. The
>> cacheable PFNMAP support is therefore made contingent on these two
>> hardware features.
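>>
>> The gating itself is straightforward; a sketch (assuming the
>> existing ARM64_HAS_STAGE2_FWB and ARM64_HAS_CACHE_DIC cpucaps):
>>
>> 	/*
>> 	 * Without S2FWB, KVM would have to do CMOs via the (absent)
>> 	 * kernel mapping; without DIC, icache_inval_pou() is not a
>> 	 * NOP. Refuse the cacheable PFNMAP in either case.
>> 	 */
>> 	if (!cpus_have_final_cap(ARM64_HAS_STAGE2_FWB) ||
>> 	    !cpus_have_final_cap(ARM64_HAS_CACHE_DIC))
>> 		return -EINVAL;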
>>
>> The ability to safely do the cacheable mapping of PFNMAP is exposed
>> through a KVM capability for userspace consumption.
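>>
>> From userspace, the capability would be probed via
>> KVM_CHECK_EXTENSION as usual; a sketch (the capability name below is
>> a placeholder, the series' uapi header defines the real one):
>>
>> #include <stdbool.h>
>> #include <sys/ioctl.h>
>> #include <linux/kvm.h>
>>
>> static bool cacheable_pfnmap_supported(int kvm_fd)
>> {
>> 	/* KVM_CHECK_EXTENSION returns > 0 if the extension is present */
>> 	return ioctl(kvm_fd, KVM_CHECK_EXTENSION,
>> 		     KVM_CAP_ARM_CACHEABLE_PFNMAP) > 0;
>> }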
>>
>> The changes are heavily influenced by the discussions among
>> maintainers Marc Zyngier and Oliver Upton besides Jason Gunthorpe,
>> Catalin Marinas, David Hildenbrand, Sean Christopherson [1]. Many
>> thanks for their valuable suggestions.
>>
>> Applied over next-20250610 and tested on the Grace Blackwell
>> platform by booting up a VM, loading the NVIDIA module [2] and
>> running nvidia-smi in the VM.
>>
>> To run CUDA workloads, there is a dependency on the IOMMUFD and
>> Nested Page Table patches being worked on separately by Nicolin Chen
>> (nicolinc at nvidia.com). NVIDIA has provided git repositories that
>> include all the requisite kernel [3] and Qemu [4] patches in case
>> one wants to try.
>>
>> v8 -> v9
>> 1. Also consider MIXEDMAP mappings for cacheable mapping.
>> (Jason Gunthorpe).
>> 2. Minor text nits (Jason Gunthorpe).
> 
> Humble reminder for review.
> 

Apologies for the delay; I had some issues getting a Grace-Hopper system to
test on, and a VM that needed to be adjusted (bigger file system) to run the
12.9.1 CUDA install script.

Anyhow, I was able to assign a G-H GPU to a VM under qemu-kvm and, in the
guest, successfully run 'nvidia-smi'.  Previously, without this patch
series in the host, the nvidia-smi command would fail and hang the guest.
(qemu-kvm was qemu-10.0-based.)

If anyone wants more details, or wants more tests run, feel free to ask;
I can probably keep the system for another day or two, but I'll have to
give it up by this Friday.

Tested-by: Donald Dutile <ddutile at redhat.com>



