[PATCH 2/2] KVM: arm64: Ensure TLBI uses correct VMID after changing context
Marc Zyngier
maz at kernel.org
Wed Aug 14 06:30:53 PDT 2024
Hi Will,
On Wed, 14 Aug 2024 13:34:29 +0100,
Will Deacon <will at kernel.org> wrote:
>
> When the target context passed to enter_vmid_context() matches the
> current running context, the function returns early without manipulating
> the registers of the stage-2 MMU. This can result in a stale VMID due to
> the lack of an ISB instruction in exit_vmid_context() after writing the
> VTTBR when ARM64_WORKAROUND_SPECULATIVE_AT is not enabled.

This refers to the exit_vmid_context() *before* a subsequent
enter_vmid_context(), right?

I'm not sure I completely get the failure mode, so please bear with me.
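Just to make sure we're talking about the same thing: the architectural
requirement, as I understand it, is that the write to VTTBR_EL2 is only
guaranteed to be observed by subsequent TLB maintenance after a context
synchronisation event, i.e. (a sketch, not the actual generated code):

        msr     vttbr_el2, x0           // restore the host VTTBR/VMID
        isb                             // without this CSE...
        tlbi    vmalls12e1is            // ...this can still be executed
                                        // in the context of the old VMID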
>
> For example, with pKVM enabled:
>
>   // Initially running in host context
>   enter_vmid_context(guest);
>   -> __load_stage2(guest); isb    // Writes VTCR & VTTBR
>   exit_vmid_context(guest);
>   -> __load_stage2(host);         // Restores VTCR & VTTBR
>
>   enter_vmid_context(host);
>   -> Returns early as we're already in host context
Is this the code you are referring to?
        vcpu = host_ctxt->__hyp_running_vcpu;

        [...]

        if (vcpu) {
                /*
                 * We're in guest context. However, for this to work, this needs
                 * to be called from within __kvm_vcpu_run(), which ensures that
                 * __hyp_running_vcpu is set to the current guest vcpu.
                 */
                if (mmu == vcpu->arch.hw_mmu || WARN_ON(mmu != host_s2_mmu))
                        return;

                cxt->mmu = vcpu->arch.hw_mmu;
        } else {
                /* We're in host context. */
                if (mmu == host_s2_mmu)
                        return;

                cxt->mmu = host_s2_mmu;
        }
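If that's the one, then I assume the early return leaves cxt->mmu as
NULL (assuming it is initialised that way earlier in the function), so
the matching exit_vmid_context() turns into a nop, i.e.:

        enter_vmid_context(host);       /* returns early, cxt->mmu == NULL */
        ...                             /* TLBI issued here */
        exit_vmid_context(cxt);         /* nothing to restore, no ISB */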
So I take it that no vcpu is running at this point, and that we have
done a vcpu_put().
Is there an actual path within pKVM that causes a guest TLBI to be
followed by a host __kvm_tlb_flush_vmid() *without* a CSE in between? I
can't convince myself that such a path exists in the current upstream
code base.
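The only shape I can come up with is something like the following, with
completely made-up call sites:

        /* at EL2, in host context */
        __kvm_tlb_flush_vmid(guest_mmu);
        -> enter_vmid_context(guest_mmu);       // VTTBR <- guest VMID, ISB
        -> tlbi vmalls12e1is; dsb ish
        -> exit_vmid_context();                 // VTTBR <- host VMID, no ISB
        __kvm_tlb_flush_vmid(host_s2_mmu);
        -> enter_vmid_context(host_s2_mmu);     // returns early
        -> tlbi vmalls12e1is                    // can still use the guest VMID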
>   tlbi vmalls12e1is               // !!! Can use the stale VMID as we
>                                   // haven't performed context
>                                   // synchronisation since restoring
>                                   // VTTBR.VMID
>
> Add an unconditional ISB instruction to exit_vmid_context() after
> restoring the VTTBR. This already existed for the
> ARM64_WORKAROUND_SPECULATIVE_AT path, so we can simply hoist that onto
> the common path.
>
> Cc: Marc Zyngier <maz at kernel.org>
> Cc: Oliver Upton <oliver.upton at linux.dev>
> Cc: Fuad Tabba <tabba at google.com>
> Fixes: 58f3b0fc3b87 ("KVM: arm64: Support TLB invalidation in guest context")
> Signed-off-by: Will Deacon <will at kernel.org>
> ---
> arch/arm64/kvm/hyp/nvhe/tlb.c | 6 +++---
> 1 file changed, 3 insertions(+), 3 deletions(-)
>
> diff --git a/arch/arm64/kvm/hyp/nvhe/tlb.c b/arch/arm64/kvm/hyp/nvhe/tlb.c
> index ca3c09df8d7c..48da9ca9763f 100644
> --- a/arch/arm64/kvm/hyp/nvhe/tlb.c
> +++ b/arch/arm64/kvm/hyp/nvhe/tlb.c
> @@ -132,10 +132,10 @@ static void exit_vmid_context(struct tlb_inv_context *cxt)
>          else
>                  __load_host_stage2();
>
> -        if (cpus_have_final_cap(ARM64_WORKAROUND_SPECULATIVE_AT)) {
> -                /* Ensure write of the old VMID */
> -                isb();
> +        /* Ensure write of the old VMID */
> +        isb();
>
> +        if (cpus_have_final_cap(ARM64_WORKAROUND_SPECULATIVE_AT)) {
>                  if (!(cxt->sctlr & SCTLR_ELx_M)) {
>                          write_sysreg_el1(cxt->sctlr, SYS_SCTLR);
>                          isb();
The patch itself makes sense to me. I'd just like to understand how we
end up in that situation.
Thanks,
M.
--
Without deviation from the norm, progress is not possible.