[PATCH 2/3] arm64: Add missing ISB after invalidating TLB in __primary_switch
Mark Rutland
mark.rutland at arm.com
Wed Feb 24 06:06:43 EST 2021
Hi Marc,
On Wed, Feb 24, 2021 at 09:37:37AM +0000, Marc Zyngier wrote:
> Although there has been a bit of back and forth on the subject,
> it appears that invalidating TLBs requires an ISB instruction
> after the TLBI/DSB sequence, as documented in d0b7a302d58a
> ("Revert "arm64: Remove unnecessary ISBs from set_{pte,pmd,pud}"").
That commit describes a different scenario (going faulting->valid
without TLB maintenance), and I don't think that implies anything about
the behaviour in the presence of a TLBI, which is quite different.
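To make the distinction concrete, the two sequences look roughly like this
(a minimal sketch with illustrative register choices, not lifted from any
particular kernel path):

	// faulting->valid case from d0b7a302d58a: a new valid PTE is
	// installed with no TLBI; the DSB makes the store visible to the
	// walker, and the ISB keeps later instructions from being
	// translated with the old (faulting) entry before the new PTE is
	// observed.
	str	x1, [x0]		// write the new PTE
	dsb	ishst
	isb

	// case at hand: an existing translation is torn down, and the
	// question is what must follow the TLBI/DSB pair.
	tlbi	vmalle1
	dsb	nsh
	// isb needed here?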
However, I do see that commits:
7f0b1bf045113489 ("arm64: Fix barriers used for page table modifications")
51696d346c49c6cf ("arm64: tlb: Ensure we execute an ISB following walk cache invalidation")
... assume that we need an ISB after a TLBI+DSB, so I think it would be
better to refer to those, to avoid conflating the distinct cases.
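For reference, the pattern those commits establish for the kernel TLBI paths
is roughly the following (a sketch only; the invalidation scope and operands
vary from path to path):

	dsb	ishst			// order prior PTE updates before the invalidation
	tlbi	vmalle1is		// invalidate (here: all entries, inner shareable)
	dsb	ish			// wait for the invalidation to complete
	isb				// ensure subsequent instructions observe the
					// completed invalidation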
> Add the missing ISB in __primary_switch, just in case.
>
> Fixes: 3c5e9f238bc4 ("arm64: head.S: move KASLR processing out of __enable_mmu()")
> Suggested-by: Will Deacon <will at kernel.org>
> Signed-off-by: Marc Zyngier <maz at kernel.org>
For consistency with the other kernel TLBI paths, I'm fine with this
(assuming we update the commit message accordingly):
Acked-by: Mark Rutland <mark.rutland at arm.com>
My understanding is that we don't need an ISB after invalidation, and if
we align on that understanding, we can follow up and update all of the
TLBI paths in one go.
Thanks,
Mark.
> ---
> arch/arm64/kernel/head.S | 1 +
> 1 file changed, 1 insertion(+)
>
> diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
> index 1e30b5550d2a..66b0e0b66e31 100644
> --- a/arch/arm64/kernel/head.S
> +++ b/arch/arm64/kernel/head.S
> @@ -837,6 +837,7 @@ SYM_FUNC_START_LOCAL(__primary_switch)
>
> tlbi vmalle1 // Remove any stale TLB entries
> dsb nsh
> + isb
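For context, the added ISB sits between the invalidation and the point where
the MMU is switched back on (an illustrative sketch of the KASLR
re-relocation path, from memory rather than a verbatim quote of head.S):

	msr	sctlr_el1, x20		// disable the MMU
	isb
	bl	__create_page_tables	// recreate the kernel mapping

	tlbi	vmalle1			// Remove any stale TLB entries
	dsb	nsh
	isb				// added by this patch

	msr	sctlr_el1, x19		// re-enable the MMU
	isb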