[PATCH v2 24/35] KVM: arm64: Introduce hypercall to force reclaim of a protected page

Will Deacon will at kernel.org
Wed Mar 4 06:08:13 PST 2026


On Thu, Feb 12, 2026 at 05:18:42PM +0000, Alexandru Elisei wrote:
> > diff --git a/arch/arm64/kvm/hyp/include/nvhe/memory.h b/arch/arm64/kvm/hyp/include/nvhe/memory.h
> > index dee1a406b0c2..4cedb720c75d 100644
> > --- a/arch/arm64/kvm/hyp/include/nvhe/memory.h
> > +++ b/arch/arm64/kvm/hyp/include/nvhe/memory.h
> > @@ -30,6 +30,12 @@ enum pkvm_page_state {
> >  	 * struct hyp_page.
> >  	 */
> >  	PKVM_NOPAGE			= BIT(0) | BIT(1),
> > +
> > +	/*
> > +	 * 'Meta-states' which aren't encoded directly in the PTE's SW bits (or
> > +	 * the hyp_vmemmap entry for the host)
> > +	 */
> > +	PKVM_POISON			= BIT(2),
> >  };
> >  #define PKVM_PAGE_STATE_MASK		(BIT(0) | BIT(1))
> 
> Looks a bit awkward to me, having the page state encoded using 3 bits, but the
> mask only 2 bits.

It's a little fiddly because we have three ways to track the page state:

1. In the two software bits of the pte mapping the page. This uses
   PKVM_PAGE_STATE_PROT_MASK.

2. In the four bits of each 'struct hyp_page' entry in the
   'hyp_vmemmap'. This means we can avoid fragmenting the host stage-2
   page-table for pages that are shared. These use PKVM_PAGE_STATE_MASK.

3. States derived from an invalid pte that are never stored explicitly.

PKVM_POISON fits into the last category, and so isn't constrained by the
masks.

Perhaps I should rename PKVM_PAGE_STATE_MASK to something like
PKVM_PAGE_STATE_VMEMMAP_MASK to make it clearer?

Will
