[PATCH 8/8] KVM: arm64: Avoid repetitive stack access on host EL1 to EL2 exception
Alexandru Elisei
alexandru.elisei at arm.com
Mon Nov 2 11:28:39 EST 2020
Hi Marc,
On 10/26/20 9:51 AM, Marc Zyngier wrote:
> Registers x0/x1 get repeatedly pushed and popped during a host
> HVC call. Instead, leave the registers on the stack, saving
> a store instruction on the fast path for an add on the slow path.
>
> Signed-off-by: Marc Zyngier <maz at kernel.org>
> ---
> arch/arm64/kvm/hyp/nvhe/host.S | 5 ++---
> 1 file changed, 2 insertions(+), 3 deletions(-)
>
> diff --git a/arch/arm64/kvm/hyp/nvhe/host.S b/arch/arm64/kvm/hyp/nvhe/host.S
> index e2d316d13180..7b69f9ff8da0 100644
> --- a/arch/arm64/kvm/hyp/nvhe/host.S
> +++ b/arch/arm64/kvm/hyp/nvhe/host.S
> @@ -13,8 +13,6 @@
> .text
>
> SYM_FUNC_START(__host_exit)
> - stp x0, x1, [sp, #-16]!
> -
> get_host_ctxt x0, x1
>
> /* Store the host regs x2 and x3 */
> @@ -99,13 +97,14 @@ SYM_FUNC_END(__hyp_do_panic)
> mrs x0, esr_el2
> lsr x0, x0, #ESR_ELx_EC_SHIFT
> cmp x0, #ESR_ELx_EC_HVC64
> - ldp x0, x1, [sp], #16
> + ldp x0, x1, [sp] // Don't fixup the stack yet
If I understand get_host_ctxt correctly, it clobbers x0 and x1, and it is the
first thing that __host_exit does. As far as I can tell, the values of x0 and x1
are only needed in host_el1_sync_vect: x0 for the comparison with
HVC_STUB_HCALL_NR below, and x1 for the call to __kvm_handle_stub_hvc. I was
thinking that we could restore x0 (and x1 with it) just before the comparison
with HVC_STUB_HCALL_NR, after the first branch to __host_exit, to make it clear
that they are not used by __host_exit. Not really important, but it might make
the code a bit easier to follow (it looks a bit odd to restore x0 and x1 from
the stack only for them to be clobbered immediately afterwards). Something like
the sketch below.
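On top of your patch, the reordering I have in mind would look roughly like this
(untested, and keeping the ldp of both registers since x1 is still needed on the
stub path):

	stp	x0, x1, [sp, #-16]!
	mrs	x0, esr_el2
	lsr	x0, x0, #ESR_ELx_EC_SHIFT
	cmp	x0, #ESR_ELx_EC_HVC64
	b.ne	__host_exit

	/* Check for a stub HVC call */
	ldp	x0, x1, [sp]		// Don't fixup the stack yet
	cmp	x0, #HVC_STUB_HCALL_NR
	b.hs	__host_exit

	add	sp, sp, #16

The only change is moving the ldp after the first branch, so __host_exit never
sees the reloaded values.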
Whichever way you prefer, the code looks correct to me: __host_exit assumes that
x0 and x1 are at the top of the stack when it saves them, and the ADD in
host_el1_sync_vect (when the code doesn't branch to __host_exit) makes sure the
stack pointer ends up where it is expected. A rough sketch of how I read the
stack accounting is below.
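For reference, this is how I read the two paths (the __host_exit part is from
memory, it is not in the context above, so take it with a grain of salt):

	stp	x0, x1, [sp, #-16]!	// push x0/x1
	...
	ldp	x0, x1, [sp]		// peek, sp unchanged
	b.ne	__host_exit		// (1)
	cmp	x0, #HVC_STUB_HCALL_NR
	b.hs	__host_exit		// (1)
	add	sp, sp, #16		// (2)

(1) __host_exit retrieves the two registers with ldp x2, x3, [sp], #16, which
    also drops them from the stack, so SP is balanced on this path.
(2) the ADD drops them on the stub path, so SP is balanced here as well.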
Reviewed-by: Alexandru Elisei <alexandru.elisei at arm.com>
Thanks,
Alex
> b.ne __host_exit
>
> /* Check for a stub HVC call */
> cmp x0, #HVC_STUB_HCALL_NR
> b.hs __host_exit
>
> + add sp, sp, #16
> /*
> * Compute the idmap address of __kvm_handle_stub_hvc and
> * jump there. Since we use kimage_voffset, do not use the