[PATCH v8 5/6] KVM: arm64: Allow cacheable stage 2 mapping using VMA flags

Jason Gunthorpe jgg at nvidia.com
Fri Jun 20 05:20:16 PDT 2025


On Fri, Jun 20, 2025 at 12:09:45PM +0000, ankita at nvidia.com wrote:
> diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
> index d8d2eb8a409e..48a5402706c3 100644
> --- a/arch/arm64/kvm/mmu.c
> +++ b/arch/arm64/kvm/mmu.c
> @@ -1683,16 +1683,62 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
>  
>  	if (vm_flags & (VM_PFNMAP | VM_MIXEDMAP) && !pfn_is_map_memory(pfn)) {
>  		/*
> -		 * If the page was identified as device early by looking at
> -		 * the VMA flags, vma_pagesize is already representing the
> -		 * largest quantity we can map.  If instead it was mapped
> -		 * via __kvm_faultin_pfn(), vma_pagesize is set to PAGE_SIZE
> -		 * and must not be upgraded.
> -		 *
> -		 * In both cases, we don't let transparent_hugepage_adjust()
> -		 * change things at the last minute.
> +		 * This is non-struct page memory PFN, and cannot support
> +		 * CMOs. It could potentially be unsafe to access as cachable.
>  		 */
> -		s2_force_noncacheable = true;
> +		bool cacheable_pfnmap = false;
> +
> +		if (vm_flags & VM_PFNMAP) {

I think this same logic works equally well for MIXEDMAP. A cacheable
MIXEDMAP should follow the same rules as PFNMAP for the non-normal
pages within it. IOW, just remove this if; the VM_PFNMAP | VM_MIXEDMAP
test was already done above (see the sketch further down).

> +			/*
> +			 * COW VM_PFNMAP is possible when doing a MAP_PRIVATE
> +			 * /dev/mem mapping on systems that allow such mapping.
> +			 * Reject such case.
> +			 */

This explains where a COW mapping can come from, but it doesn't explain
why KVM has a problem with one here. The comment should spell that out.
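
For reference, is_cow_mapping() is just a flags test; the current
definition in include/linux/mm.h is:

static inline bool is_cow_mapping(vm_flags_t flags)
{
	return (flags & (VM_SHARED | VM_MAYWRITE)) == VM_MAYWRITE;
}

So the rejection keys off private, writable mappings; the comment should
say what actually breaks for KVM if such a VMA is allowed through.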

> +			if (is_cow_mapping(vm_flags))
> +				return -EINVAL;
> +
> +			/*
> +			 * Check if the VMA owner considers the physical address
> +			 * safe to be mapped cacheable.
> +			 */
> +			if (is_vma_cacheable)
> +				cacheable_pfnmap = true;
> +		}
> +
> +		if (cacheable_pfnmap) {

If the vm_flags test is removed, then this is just is_vma_cacheable.
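
ie something along these lines (untested, just to show the shape;
is_vma_cacheable and s2_force_noncacheable as elsewhere in your series):

	if (vm_flags & (VM_PFNMAP | VM_MIXEDMAP) && !pfn_is_map_memory(pfn)) {
		/* Non-struct page PFN, CMOs are not possible. */
		if (is_cow_mapping(vm_flags))
			return -EINVAL;

		if (is_vma_cacheable) {
			/* FWB/CACHE DIC check from the hunk below goes here */
		} else {
			s2_force_noncacheable = true;
		}
	}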

> +			/*
> +			 * Whilst the VMA owner expects cacheable mapping to this
> +			 * PFN, hardware also has to support the FWB and CACHE DIC
> +			 * features.
> +			 *
> +			 * ARM64 KVM relies on kernel VA mapping to the PFN to
> +			 * perform cache maintenance as the CMO instructions work on
> +			 * virtual addresses. VM_PFNMAP region are not necessarily
> +			 * mapped to a KVA and hence the presence of hardware features
> +			 * S2FWB and CACHE DIC is mandatory for cache maintenance.

"are mandatory to avoid any cache maintenance"

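FWIW, if you want a single predicate for the hardware side of this,
something along these lines would do (the two cpucaps already exist,
the helper itself is only a suggestion):

static bool kvm_supports_cacheable_pfnmap(void)
{
	/*
	 * S2FWB removes the need to clean the D-cache when a page is first
	 * mapped into the guest, and FEAT_DIC removes the need for I-cache
	 * invalidation, so KVM never has to issue CMOs (and so never needs
	 * a KVA) for these PFNs.
	 */
	return cpus_have_final_cap(ARM64_HAS_STAGE2_FWB) &&
	       cpus_have_final_cap(ARM64_HAS_CACHE_DIC);
}
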
Jason


