[PATCH 1/2] KVM: arm64: Move __adjust_pc out of line
Marc Zyngier
maz at kernel.org
Mon May 10 08:08:51 PDT 2021
On Mon, 10 May 2021 15:55:16 +0100,
Alexandru Elisei <alexandru.elisei at arm.com> wrote:
>
> Hi Marc,
>
> On 5/10/21 10:49 AM, Marc Zyngier wrote:
> > In order to make it easy to call __adjust_pc() from the EL1 code
> > (in the case of nVHE), rename it to __kvm_adjust_pc() and move
> > it out of line.
> >
> > No expected functional change.
>
> It does look to me like they're functionally identical. Minor comments below.
>
> >
> > Signed-off-by: Marc Zyngier <maz at kernel.org>
> > Cc: stable at vger.kernel.org # 5.11
> > ---
> > arch/arm64/include/asm/kvm_asm.h | 2 ++
> > arch/arm64/kvm/hyp/exception.c | 18 +++++++++++++++++-
> > arch/arm64/kvm/hyp/include/hyp/adjust_pc.h | 18 ------------------
> > arch/arm64/kvm/hyp/nvhe/switch.c | 2 +-
> > arch/arm64/kvm/hyp/vhe/switch.c | 2 +-
> > 5 files changed, 21 insertions(+), 21 deletions(-)
> >
> > diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h
> > index cf8df032b9c3..d5b11037401d 100644
> > --- a/arch/arm64/include/asm/kvm_asm.h
> > +++ b/arch/arm64/include/asm/kvm_asm.h
> > @@ -201,6 +201,8 @@ extern void __kvm_timer_set_cntvoff(u64 cntvoff);
> >
> > extern int __kvm_vcpu_run(struct kvm_vcpu *vcpu);
> >
> > +extern void __kvm_adjust_pc(struct kvm_vcpu *vcpu);
>
> It looks pretty strange to have the file
> arch/arm64/kvm/hyp/include/hyp/adjust_pc.h, but the function
> __kvm_adjust_pc() in another header. I guess this was done because
> arch/arm64/kvm/arm.c will use the function in the next patch.
That's mostly it. __kvm_adjust_pc() is very similar to
__kvm_vcpu_run() in its usage, and I want to keep the patch as small
as possible given that this is a candidate for a stable backport.
> I was thinking that maybe renaming adjust_pc.h->skip_instr.h would
> make more sense, what do you think? I can send a patch on top of
> this series with the rename if you prefer.
Yes, that'd probably be a good cleanup now that __adjust_pc() is gone.
>
> > +
> > extern u64 __vgic_v3_get_gic_config(void);
> > extern u64 __vgic_v3_read_vmcr(void);
> > extern void __vgic_v3_write_vmcr(u32 vmcr);
> > diff --git a/arch/arm64/kvm/hyp/exception.c b/arch/arm64/kvm/hyp/exception.c
> > index 73629094f903..0812a496725f 100644
> > --- a/arch/arm64/kvm/hyp/exception.c
> > +++ b/arch/arm64/kvm/hyp/exception.c
> > @@ -296,7 +296,7 @@ static void enter_exception32(struct kvm_vcpu *vcpu, u32 mode, u32 vect_offset)
> > *vcpu_pc(vcpu) = vect_offset;
> > }
> >
> > -void kvm_inject_exception(struct kvm_vcpu *vcpu)
> > +static void kvm_inject_exception(struct kvm_vcpu *vcpu)
> > {
> > if (vcpu_el1_is_32bit(vcpu)) {
> > switch (vcpu->arch.flags & KVM_ARM64_EXCEPT_MASK) {
> > @@ -329,3 +329,19 @@ void kvm_inject_exception(struct kvm_vcpu *vcpu)
> > }
> > }
> > }
> > +
> > +/*
> > + * Adjust the guest PC on entry, depending on flags provided by EL1
>
> This is also called by the VHE code running at EL2, but the comment is
> reworded in the next patch, so it doesn't really matter. Keeping the
> diff a straight move makes it easier to read.
>
> > + * for the purpose of emulation (MMIO, sysreg) or exception injection.
> > + */
> > +void __kvm_adjust_pc(struct kvm_vcpu *vcpu)
> > +{
> > + if (vcpu->arch.flags & KVM_ARM64_PENDING_EXCEPTION) {
> > + kvm_inject_exception(vcpu);
> > + vcpu->arch.flags &= ~(KVM_ARM64_PENDING_EXCEPTION |
> > + KVM_ARM64_EXCEPT_MASK);
> > + } else if (vcpu->arch.flags & KVM_ARM64_INCREMENT_PC) {
> > + kvm_skip_instr(vcpu);
> > + vcpu->arch.flags &= ~KVM_ARM64_INCREMENT_PC;
> > + }
> > +}
> > diff --git a/arch/arm64/kvm/hyp/include/hyp/adjust_pc.h b/arch/arm64/kvm/hyp/include/hyp/adjust_pc.h
> > index 61716359035d..4fdfeabefeb4 100644
> > --- a/arch/arm64/kvm/hyp/include/hyp/adjust_pc.h
> > +++ b/arch/arm64/kvm/hyp/include/hyp/adjust_pc.h
> > @@ -13,8 +13,6 @@
> > #include <asm/kvm_emulate.h>
> > #include <asm/kvm_host.h>
> >
> > -void kvm_inject_exception(struct kvm_vcpu *vcpu);
> > -
> > static inline void kvm_skip_instr(struct kvm_vcpu *vcpu)
> > {
> > if (vcpu_mode_is_32bit(vcpu)) {
> > @@ -43,22 +41,6 @@ static inline void __kvm_skip_instr(struct kvm_vcpu *vcpu)
> > write_sysreg_el2(*vcpu_pc(vcpu), SYS_ELR);
> > }
> >
> > -/*
> > - * Adjust the guest PC on entry, depending on flags provided by EL1
> > - * for the purpose of emulation (MMIO, sysreg) or exception injection.
> > - */
> > -static inline void __adjust_pc(struct kvm_vcpu *vcpu)
> > -{
> > - if (vcpu->arch.flags & KVM_ARM64_PENDING_EXCEPTION) {
> > - kvm_inject_exception(vcpu);
> > - vcpu->arch.flags &= ~(KVM_ARM64_PENDING_EXCEPTION |
> > - KVM_ARM64_EXCEPT_MASK);
> > - } else if (vcpu->arch.flags & KVM_ARM64_INCREMENT_PC) {
> > - kvm_skip_instr(vcpu);
> > - vcpu->arch.flags &= ~KVM_ARM64_INCREMENT_PC;
> > - }
> > -}
> > -
> > /*
> > * Skip an instruction while host sysregs are live.
> > * Assumes host is always 64-bit.
> > diff --git a/arch/arm64/kvm/hyp/nvhe/switch.c b/arch/arm64/kvm/hyp/nvhe/switch.c
> > index e9f6ea704d07..b8ac123c3419 100644
> > --- a/arch/arm64/kvm/hyp/nvhe/switch.c
> > +++ b/arch/arm64/kvm/hyp/nvhe/switch.c
> > @@ -201,7 +201,7 @@ int __kvm_vcpu_run(struct kvm_vcpu *vcpu)
> > */
> > __debug_save_host_buffers_nvhe(vcpu);
> >
> > - __adjust_pc(vcpu);
> > + __kvm_adjust_pc(vcpu);
> >
> > /*
> > * We must restore the 32-bit state before the sysregs, thanks
> > diff --git a/arch/arm64/kvm/hyp/vhe/switch.c b/arch/arm64/kvm/hyp/vhe/switch.c
> > index 7b8f7db5c1ed..3eafed0431f5 100644
> > --- a/arch/arm64/kvm/hyp/vhe/switch.c
> > +++ b/arch/arm64/kvm/hyp/vhe/switch.c
> > @@ -132,7 +132,7 @@ static int __kvm_vcpu_run_vhe(struct kvm_vcpu *vcpu)
> > __load_guest_stage2(vcpu->arch.hw_mmu);
> > __activate_traps(vcpu);
> >
> > - __adjust_pc(vcpu);
> > + __kvm_adjust_pc(vcpu);
>
> With the function now moved to kvm_asm.h, the header include
> adjust_pc.h is not needed. Same for the nvhe version of switch.c.
Well spotted. I'll clean that up.
Thanks,
M.
--
Without deviation from the norm, progress is not possible.