[PATCH v3] KVM: arm64: Check range args for pKVM mem transitions
Sebastian Ene
sebastianene at google.com
Wed Oct 29 23:09:31 PDT 2025
On Thu, Oct 16, 2025 at 05:45:41PM +0100, Vincent Donnefort wrote:
> There's currently no verification for host-issued ranges in most of the
> pKVM memory transitions. The end boundary might therefore be subject to
> overflow, and later checks could then be evaded.
>
> Close this loophole with an additional pfn_range_is_valid() check on a
> per-public-function basis. Once this check has passed, it is safe to
> convert pfn and nr_pages into a phys_addr_t and a size.
>
> host_unshare_guest transition is already protected via
> __check_host_shared_guest(), while assert_host_shared_guest() callers
> are already ignoring host checks.
>
> Signed-off-by: Vincent Donnefort <vdonnefort at google.com>
>
> ---
>
> v2 -> v3:
> * Test range against PA-range and make the func phys specific.
>
> v1 -> v2:
> * Also check for (nr_pages * PAGE_SIZE) overflow. (Quentin)
> * Rename to check_range_args().
>
> diff --git a/arch/arm64/kvm/hyp/nvhe/mem_protect.c b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
> index ddc8beb55eee..49db32f3ddf7 100644
> --- a/arch/arm64/kvm/hyp/nvhe/mem_protect.c
> +++ b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
> @@ -367,6 +367,19 @@ static int host_stage2_unmap_dev_all(void)
> return kvm_pgtable_stage2_unmap(pgt, addr, BIT(pgt->ia_bits) - addr);
> }
Hello Vincent,
>
> +/*
> + * Ensure the PFN range is contained within PA-range.
> + *
> + * This check is also robust to overflows and is therefore a requirement before
> + * using a pfn/nr_pages pair from an untrusted source.
> + */
> +static bool pfn_range_is_valid(u64 pfn, u64 nr_pages)
> +{
> + u64 limit = BIT(kvm_phys_shift(&host_mmu.arch.mmu) - PAGE_SHIFT);
> +
> + return pfn < limit && ((limit - pfn) >= nr_pages);
> +}
> +
This newly introduced function is probably fine to call without the host lock
held, as long as nothing modifies the vtcr field of the host_mmu structure.
While searching I couldn't find a place where it is directly modified, so this
should be fine.
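
As a side note on the overflow comment above, here is a quick stand-alone
demo (user-space only; the 48-bit PA range and 4K pages are assumptions made
just for the example, not values taken from the patch) of why the
"limit - pfn" form is the robust one:

/*
 * Illustration only, not hyp code: the naive "pfn + nr_pages <= limit"
 * wraps around for hostile inputs, while the "limit - pfn" form does not.
 */
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define DEMO_PAGE_SHIFT	12
#define DEMO_PA_SHIFT	48

static bool demo_pfn_range_is_valid(uint64_t pfn, uint64_t nr_pages)
{
	uint64_t limit = 1ULL << (DEMO_PA_SHIFT - DEMO_PAGE_SHIFT);

	return pfn < limit && ((limit - pfn) >= nr_pages);
}

static bool demo_naive_check(uint64_t pfn, uint64_t nr_pages)
{
	uint64_t limit = 1ULL << (DEMO_PA_SHIFT - DEMO_PAGE_SHIFT);

	return pfn + nr_pages <= limit;		/* can wrap */
}

int main(void)
{
	/* pfn + nr_pages wraps around to 0 for these inputs */
	assert(!demo_pfn_range_is_valid(1, UINT64_MAX));	/* rejected */
	assert(demo_naive_check(1, UINT64_MAX));		/* wrongly accepted */
	return 0;
}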
> struct kvm_mem_range {
> u64 start;
> u64 end;
> @@ -776,6 +789,9 @@ int __pkvm_host_donate_hyp(u64 pfn, u64 nr_pages)
> void *virt = __hyp_va(phys);
> int ret;
>
> + if (!pfn_range_is_valid(pfn, nr_pages))
> + return -EINVAL;
> +
> host_lock_component();
> hyp_lock_component();
>
> @@ -804,6 +820,9 @@ int __pkvm_hyp_donate_host(u64 pfn, u64 nr_pages)
> u64 virt = (u64)__hyp_va(phys);
> int ret;
>
> + if (!pfn_range_is_valid(pfn, nr_pages))
> + return -EINVAL;
> +
> host_lock_component();
> hyp_lock_component();
>
> @@ -887,6 +906,9 @@ int __pkvm_host_share_ffa(u64 pfn, u64 nr_pages)
> u64 size = PAGE_SIZE * nr_pages;
> int ret;
>
> + if (!pfn_range_is_valid(pfn, nr_pages))
> + return -EINVAL;
> +
> host_lock_component();
> ret = __host_check_page_state_range(phys, size, PKVM_PAGE_OWNED);
> if (!ret)
> @@ -902,6 +924,9 @@ int __pkvm_host_unshare_ffa(u64 pfn, u64 nr_pages)
> u64 size = PAGE_SIZE * nr_pages;
> int ret;
>
> + if (!pfn_range_is_valid(pfn, nr_pages))
> + return -EINVAL;
> +
> host_lock_component();
> ret = __host_check_page_state_range(phys, size, PKVM_PAGE_SHARED_OWNED);
> if (!ret)
> @@ -945,6 +970,9 @@ int __pkvm_host_share_guest(u64 pfn, u64 gfn, u64 nr_pages, struct pkvm_hyp_vcpu
> if (prot & ~KVM_PGTABLE_PROT_RWX)
> return -EINVAL;
>
> + if (!pfn_range_is_valid(pfn, nr_pages))
> + return -EINVAL;
> +
I don't think we need it here, because __pkvm_host_share_guest() already has
the __guest_check_transition_size() verification in place, which limits
nr_pages.
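Just to spell out the bound I mean, roughly this shape (my own sketch from
memory, not a quote of the actual __guest_check_transition_size() body):

/*
 * Rough sketch only: nr_pages is either 1 or a single supported block
 * granule, so the size/phys arithmetic that follows is naturally bounded.
 */
static int demo_check_transition_size(u64 phys, u64 ipa, u64 nr_pages,
				      u64 *size)
{
	if (nr_pages == 1) {
		*size = PAGE_SIZE;
		return 0;
	}

	/* otherwise only accept exactly one block's worth of pages */
	if (nr_pages != (PMD_SIZE >> PAGE_SHIFT))
		return -EINVAL;

	if (!IS_ALIGNED(phys | ipa, PMD_SIZE))
		return -EINVAL;

	*size = PMD_SIZE;
	return 0;
}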
> ret = __guest_check_transition_size(phys, ipa, nr_pages, &size);
> if (ret)
> return ret;
>
> base-commit: 7ea30958b3054f5e488fa0b33c352723f7ab3a2a
> --
> 2.51.0.869.ge66316f041-goog
>
Other than that, this looks good, thanks.
Sebastian