[PATCH v2 2/5] KVM: arm64: nvhe: Synchronise with page table walker on TLBI

Oliver Upton <oliver.upton@linux.dev>
Wed Apr 12 17:09:20 PDT 2023


Hi Marc,

On Sat, Apr 08, 2023 at 05:04:24PM +0100, Marc Zyngier wrote:
> A TLBI from EL2 impacting EL1 involves messing with the EL1&0
> translation regime, and the page table walker may still be
> performing speculative walks.
> 
> Piggyback on the existing DSBs to always have a DSB ISH that
> will synchronise all load/store operations that the PTW may
> still have in flight.
> 
> Signed-off-by: Marc Zyngier <maz@kernel.org>
> ---
>  arch/arm64/kvm/hyp/nvhe/tlb.c | 24 +++++++++++++++++++-----
>  1 file changed, 19 insertions(+), 5 deletions(-)
> 
> diff --git a/arch/arm64/kvm/hyp/nvhe/tlb.c b/arch/arm64/kvm/hyp/nvhe/tlb.c
> index d296d617f589..e86dd04d49ff 100644
> --- a/arch/arm64/kvm/hyp/nvhe/tlb.c
> +++ b/arch/arm64/kvm/hyp/nvhe/tlb.c
> @@ -17,6 +17,23 @@ struct tlb_inv_context {
>  static void __tlb_switch_to_guest(struct kvm_s2_mmu *mmu,
>  				  struct tlb_inv_context *cxt)
>  {
> +	/*
> +	 * We have two requirements:
> +	 *
> +	 * - ensure that the page table updates are visible to all
> +	 *   CPUs, for which a dsb(ishst) is what we need
> +	 *
> +	 * - complete any speculative page table walk started before
> +	 *   we trapped to EL2 so that we can mess with the MM
> +	 *   registers out of context, for which dsb(nsh) is enough
> +	 *
> +	 * The composition of these two barriers is a dsb(ish). This
> +	 * might be slightly over the top for non-shareable TLBIs, but
> +	 * they are so vanishingly rare that it isn't worth the
> +	 * complexity.
> +	 */
> +	dsb(ish);
> +

Ricardo is carrying a patch for non-shareable TLBIs on permission
relaxation [*], and he's found that it produces some rather desirable
performance improvements. I appreciate the elegance of your approach,
but given what's coming, does it make sense to have the TLBI handlers
continue to explicitly perform the appropriate DSB?

[*] https://lore.kernel.org/kvmarm/20230409063000.3559991-14-ricarkol@google.com/
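
For illustration, something along the lines of the below is what I have
in mind (untested sketch; __kvm_tlb_flush_vmid_ipa_nsh() is the handler
I expect Ricardo's series to add, so take the exact name and shape as an
assumption rather than a concrete proposal):

/*
 * Keep the scope of the barrier matched to the scope of the TLBI:
 * broadcast invalidations keep their inner-shareable DSB, while a
 * local invalidation only pays for a non-shareable one.
 */
void __kvm_tlb_flush_vmid_ipa(struct kvm_s2_mmu *mmu,
			      phys_addr_t ipa, int level)
{
	struct tlb_inv_context cxt;

	/* Make the PTE updates visible to all CPUs */
	dsb(ishst);

	/* Switch to requested VMID */
	__tlb_switch_to_guest(mmu, &cxt);

	/* ... TLBI + __tlb_switch_to_host() as before ... */
}

void __kvm_tlb_flush_vmid_ipa_nsh(struct kvm_s2_mmu *mmu,
				  phys_addr_t ipa, int level)
{
	struct tlb_inv_context cxt;

	/* Local invalidation: a non-shareable barrier is sufficient */
	dsb(nshst);

	/* Switch to requested VMID */
	__tlb_switch_to_guest(mmu, &cxt);

	/* ... local TLBI + __tlb_switch_to_host() as before ... */
}

__tlb_switch_to_guest() would presumably still want a dsb(nsh) to cover
the speculative-walk half of your comment before we start messing with
the MM registers, but that one is relatively cheap regardless of the
shareability of the invalidation itself.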

>  	if (cpus_have_final_cap(ARM64_WORKAROUND_SPECULATIVE_AT)) {
>  		u64 val;
>  
> @@ -60,8 +77,6 @@ void __kvm_tlb_flush_vmid_ipa(struct kvm_s2_mmu *mmu,
>  {
>  	struct tlb_inv_context cxt;
>  
> -	dsb(ishst);
> -
>  	/* Switch to requested VMID */
>  	__tlb_switch_to_guest(mmu, &cxt);
>  
> @@ -113,8 +128,6 @@ void __kvm_tlb_flush_vmid(struct kvm_s2_mmu *mmu)
>  {
>  	struct tlb_inv_context cxt;
>  
> -	dsb(ishst);
> -
>  	/* Switch to requested VMID */
>  	__tlb_switch_to_guest(mmu, &cxt);
>  
> @@ -142,7 +155,8 @@ void __kvm_flush_cpu_context(struct kvm_s2_mmu *mmu)
>  
>  void __kvm_flush_vm_context(void)
>  {
> -	dsb(ishst);
> +	/* Same remark as in __tblb_switch_to_guest() */

typo: __tlb_switch_to_guest()

> +	dsb(ish);
>  	__tlbi(alle1is);
>  
>  	/*
> -- 
> 2.34.1
> 
> 

-- 
Thanks,
Oliver