[PATCH v2 04/13] arm64: mm: Push __TLBI_VADDR() into __tlbi_level()

Jonathan Cameron jonathan.cameron at huawei.com
Tue Jan 27 03:37:58 PST 2026


On Mon, 19 Jan 2026 17:21:51 +0000
Ryan Roberts <ryan.roberts at arm.com> wrote:

> From: Will Deacon <will at kernel.org>
> 
> The __TLBI_VADDR() macro takes an ASID and an address and converts them
> into a single argument formatted correctly for a TLB invalidation
> instruction.
> 
> Rather than have callers worry about this (especially in the case where
> the ASID is zero), push the macro down into __tlbi_level() via a new
> __tlbi_level_asid() helper.
> 
> Signed-off-by: Will Deacon <will at kernel.org>
> Reviewed-by: Linu Cherian <linu.cherian at arm.com>
> Signed-off-by: Ryan Roberts <ryan.roberts at arm.com>
One comment inline, but it's not particularly important given it only
slightly reduces the readability of a workaround.

Reviewed-by: Jonathan Cameron <jonathan.cameron at huawei.com>

> @@ -674,6 +679,7 @@ static inline bool huge_pmd_needs_flush(pmd_t oldpmd, pmd_t newpmd)
>  #define huge_pmd_needs_flush huge_pmd_needs_flush
>  
>  #undef __tlbi_user
> +#undef __TLBI_VADDR
>  #endif
>  
>  #endif
> diff --git a/arch/arm64/kernel/sys_compat.c b/arch/arm64/kernel/sys_compat.c
> index 4a609e9b65de..ad4857df4830 100644
> --- a/arch/arm64/kernel/sys_compat.c
> +++ b/arch/arm64/kernel/sys_compat.c
> @@ -36,7 +36,7 @@ __do_compat_cache_op(unsigned long start, unsigned long end)
>  			 * The workaround requires an inner-shareable tlbi.
>  			 * We pick the reserved-ASID to minimise the impact.
>  			 */
> -			__tlbi(aside1is, __TLBI_VADDR(0, 0));
> +			__tlbi(aside1is, 0UL);

Dropping the explicit ASID loses some of the meaning here vs the comment just
above it.  Meh, it's in a workaround, so most folk reading this code will
ignore it anyway; I don't mind that much.

>  			dsb(ish);
>  		}
>  

More information about the linux-arm-kernel mailing list