[PATCH v2 1/1] KVM: arm64: Allow cacheable stage 2 mapping using VMA flags

Jason Gunthorpe jgg at nvidia.com
Mon Jan 6 08:51:59 PST 2025


On Fri, Dec 20, 2024 at 04:42:35PM +0100, David Hildenbrand wrote:
> On 18.11.24 14:19, ankita at nvidia.com wrote:
> > From: Ankit Agrawal <ankita at nvidia.com>
> > 
> > Currently KVM determines if a VMA is pointing at IO memory by checking
> > pfn_is_map_memory(). However, the MM already gives us a way to tell what
> > kind of memory it is by inspecting the VMA.
> 
> Do you primarily care about VM_PFNMAP/VM_MIXEDMAP VMAs, or also other VMA
> types?

I think this is exclusively about allowing cachable memory inside a
VM_PFNMAP VMA (created by VFIO) to remain cachable inside the guest VM.
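
For illustration, a rough sketch of that idea (not the code from the
patch; the helper name is invented), assuming the VMA's vm_page_prot
carries the arm64 stage 1 memory type in the PTE_ATTRINDX field:

	/*
	 * Hypothetical sketch: decide cachability from the attributes
	 * the VMA was mapped with, instead of calling
	 * pfn_is_map_memory() on the resolved PFN.
	 * Assumes <linux/bitfield.h> for FIELD_GET().
	 */
	static bool vma_is_cachable(struct vm_area_struct *vma)
	{
		u64 attridx = FIELD_GET(PTE_ATTRINDX_MASK,
					pgprot_val(vma->vm_page_prot));

		return attridx == MT_NORMAL || attridx == MT_NORMAL_TAGGED;
	}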

> > This patch solves the problems where it is possible for the kernel to
> > have VMAs pointing at cachable memory without causing
> > pfn_is_map_memory() to be true, eg DAX memremap cases and CXL/pre-CXL
> > devices. This memory is now properly marked as cachable in KVM.
> 
> Does this only imply worse performance, or does this also affect
> correctness? I suspect performance is the problem, correct?

Correctness. Things like atomics don't work on non-cachable mappings.
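
To make that concrete, a user-space flavoured illustration (the device
path and setup are hypothetical): arm64 CPU atomics require a Normal
cachable mapping, so the same instruction that works on cachable
memory can fault on a Device mapping:

	#include <stdint.h>
	#include <fcntl.h>
	#include <sys/mman.h>

	int main(void)
	{
		/* Hypothetical PFNMAP region exported by some driver */
		int fd = open("/dev/example_pfnmap", O_RDWR);
		uint64_t *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
				   MAP_SHARED, fd, 0);

		/*
		 * If KVM forces Device attributes at stage 2, an atomic
		 * like this inside the guest faults (or is constrained
		 * unpredictable) even when the backing memory is real
		 * cachable RAM.
		 */
		__atomic_fetch_add(p, 1, __ATOMIC_RELAXED);
		return 0;
	}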

> Maybe one could just reject such cases (if the KVM PFN lookup code
> doesn't already reject them, which might just be the case IIRC).

At least VFIO enforces SHARED or it won't create the VMA.

drivers/vfio/pci/vfio_pci_core.c:       if ((vma->vm_flags & VM_SHARED) == 0)

This is pretty normal/essential for drivers.

Are you suggesting the VMA flags should be inspected more, i.e.
checking VM_SHARED/VM_PFNMAP before allowing this?
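
For concreteness, such a check might look something like this (purely
illustrative):

	/*
	 * Hypothetical: only trust the VMA's attributes when it is a
	 * shared PFNMAP mapping, as VFIO creates them.
	 */
	if ((vma->vm_flags & (VM_SHARED | VM_PFNMAP)) !=
	    (VM_SHARED | VM_PFNMAP))
		return -EINVAL;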

Jason


