[PATCH v11 19/43] KVM: arm64: nv: Handle shadow stage 2 page faults
Marc Zyngier
maz at kernel.org
Wed Jan 17 07:53:09 PST 2024
On Wed, 17 Jan 2024 14:53:16 +0000,
Joey Gouly <joey.gouly at arm.com> wrote:
>
> Hi Marc,
>
> Drive by thing I spotted.
>
> On Mon, Nov 20, 2023 at 01:10:03PM +0000, Marc Zyngier wrote:
> > If we are faulting on a shadow stage 2 translation, we first walk the
> > guest hypervisor's stage 2 page table to see if it has a mapping. If
> > not, we inject a stage 2 page fault to the virtual EL2. Otherwise, we
> > create a mapping in the shadow stage 2 page table.
> >
> > Note that we have to deal with two IPAs when we get a shadow stage 2
> > page fault. One is the address we faulted on, which is in the L2 guest
> > phys space. The other comes from the guest stage-2 page table walk, and
> > is in the L1 guest phys space. To differentiate them, we rename variables
> > so that fault_ipa is used for the former and ipa is used for the latter.
> >
> > Co-developed-by: Christoffer Dall <christoffer.dall at linaro.org>
> > Co-developed-by: Jintack Lim <jintack.lim at linaro.org>
> > Signed-off-by: Christoffer Dall <christoffer.dall at linaro.org>
> > Signed-off-by: Jintack Lim <jintack.lim at linaro.org>
> > [maz: rewrote this multiple times...]
> > Signed-off-by: Marc Zyngier <maz at kernel.org>
> > ---
> > arch/arm64/include/asm/kvm_emulate.h | 7 +++
> > arch/arm64/include/asm/kvm_nested.h | 19 ++++++
> > arch/arm64/kvm/mmu.c | 89 ++++++++++++++++++++++++----
> > arch/arm64/kvm/nested.c | 48 +++++++++++++++
> > 4 files changed, 153 insertions(+), 10 deletions(-)
> >
> [.. snip ..]
> > diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
> > index 588ce46c0ad0..41de7616b735 100644
> > --- a/arch/arm64/kvm/mmu.c
> > +++ b/arch/arm64/kvm/mmu.c
> > @@ -1412,14 +1412,16 @@ static bool kvm_vma_mte_allowed(struct vm_area_struct *vma)
> > }
> >
> > static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
> > - struct kvm_memory_slot *memslot, unsigned long hva,
> > - unsigned long fault_status)
> > + struct kvm_s2_trans *nested,
> > + struct kvm_memory_slot *memslot,
> > + unsigned long hva, unsigned long fault_status)
> > {
> > int ret = 0;
> > bool write_fault, writable, force_pte = false;
> > bool exec_fault, mte_allowed;
> > bool device = false;
> > unsigned long mmu_seq;
> > + phys_addr_t ipa = fault_ipa;
> > struct kvm *kvm = vcpu->kvm;
> > struct kvm_mmu_memory_cache *memcache = &vcpu->arch.mmu_page_cache;
> > struct vm_area_struct *vma;
> > @@ -1504,10 +1506,38 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
> > }
> >
> > vma_pagesize = 1UL << vma_shift;
> > +
> > + if (nested) {
> > + unsigned long max_map_size;
> > +
> > + max_map_size = force_pte ? PUD_SIZE : PAGE_SIZE;
>
> This seems like the wrong way around, presumably you want PAGE_SIZE for force_pte?
This is hilarious. I really shouldn't write code these days.
Thanks a lot for spotting this one, I'll fix that right away!
Cheers,
M.
--
Without deviation from the norm, progress is not possible.