[PATCH v3 09/36] KVM: arm64: Ignore -EAGAIN when mapping in pages for the pKVM host

Fuad Tabba <tabba at google.com>
Wed Mar 11 03:10:51 PDT 2026


On Thu, 5 Mar 2026 at 14:44, Will Deacon <will at kernel.org> wrote:
>
> If the host takes a stage-2 translation fault on two CPUs at the same
> time, one of them will get back -EAGAIN from the page-table mapping code
> when it runs into the mapping installed by the other.

I ran into this while porting the Android patches, and this patch
fixed the issue. Thank you.

Reviewed-by: Fuad Tabba <tabba at google.com>

/fuad

> Rather than handle this explicitly in handle_host_mem_abort(), pass the
> new KVM_PGTABLE_WALK_IGNORE_EAGAIN flag to kvm_pgtable_stage2_map() from
> __host_stage2_idmap() and return -EEXIST if host_stage2_adjust_range()
> finds a valid pte. This will avoid having to test for -EAGAIN on the
> reclaim path in subsequent patches.
>
> Signed-off-by: Will Deacon <will at kernel.org>
> ---
>  arch/arm64/kvm/hyp/nvhe/mem_protect.c | 21 ++++++++++++++++-----
>  1 file changed, 16 insertions(+), 5 deletions(-)
>
> diff --git a/arch/arm64/kvm/hyp/nvhe/mem_protect.c b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
> index 38f66a56a766..7b0f8ad7b20f 100644
> --- a/arch/arm64/kvm/hyp/nvhe/mem_protect.c
> +++ b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
> @@ -461,8 +461,15 @@ static bool range_is_memory(u64 start, u64 end)
>  static inline int __host_stage2_idmap(u64 start, u64 end,
>                                       enum kvm_pgtable_prot prot)
>  {
> +       /*
> +        * We don't make permission changes to the host idmap after
> +        * initialisation, so we can squash -EAGAIN to save callers
> +        * having to treat it like success in the case that they try to
> +        * map something that is already mapped.
> +        */
>         return kvm_pgtable_stage2_map(&host_mmu.pgt, start, end - start, start,
> -                                     prot, &host_s2_pool, 0);
> +                                     prot, &host_s2_pool,
> +                                     KVM_PGTABLE_WALK_IGNORE_EAGAIN);
>  }
>
>  /*
> @@ -504,7 +511,7 @@ static int host_stage2_adjust_range(u64 addr, struct kvm_mem_range *range)
>                 return ret;
>
>         if (kvm_pte_valid(pte))
> -               return -EAGAIN;
> +               return -EEXIST;
>
>         if (pte) {
>                 WARN_ON(addr_is_memory(addr) &&
> @@ -609,7 +616,6 @@ void handle_host_mem_abort(struct kvm_cpu_context *host_ctxt)
>  {
>         struct kvm_vcpu_fault_info fault;
>         u64 esr, addr;
> -       int ret = 0;
>
>         esr = read_sysreg_el2(SYS_ESR);
>         if (!__get_fault_info(esr, &fault)) {
> @@ -628,8 +634,13 @@ void handle_host_mem_abort(struct kvm_cpu_context *host_ctxt)
>         BUG_ON(!(fault.hpfar_el2 & HPFAR_EL2_NS));
>         addr = FIELD_GET(HPFAR_EL2_FIPA, fault.hpfar_el2) << 12;
>
> -       ret = host_stage2_idmap(addr);
> -       BUG_ON(ret && ret != -EAGAIN);
> +       switch (host_stage2_idmap(addr)) {
> +       case -EEXIST:
> +       case 0:
> +               break;
> +       default:
> +               BUG();
> +       }
>  }
>
>  struct check_walk_data {
> --
> 2.53.0.473.g4a7958ca14-goog
>


