[PATCH] arm64/mm: harden ASID allocator against empty bitmap after rollover

Catalin Marinas catalin.marinas at arm.com
Tue Mar 10 10:24:42 PDT 2026


On Thu, Feb 19, 2026 at 11:37:14AM +0000, Reda CHERKAOUI wrote:
> new_context() assumes that after incrementing asid_generation and calling
> flush_context(), find_next_zero_bit() will always find a free ASID.
> 
> If that invariant is ever violated, __set_bit(NUM_USER_ASIDS, asid_map)
> would write past the end of the bitmap. Add a defensive check so the
> kernel fails loudly instead of silently corrupting memory.
> Cc: stable at vger.kernel.org
> 
> Signed-off-by: Reda CHERKAOUI <redacherkaoui67 at gmail.com>
> ---
>  arch/arm64/mm/context.c | 12 +++++++++---
>  1 file changed, 9 insertions(+), 3 deletions(-)
> 
> diff --git a/arch/arm64/mm/context.c b/arch/arm64/mm/context.c
> index b2ac06246327..74c1ece7db78 100644
> --- a/arch/arm64/mm/context.c
> +++ b/arch/arm64/mm/context.c
> @@ -160,6 +160,7 @@ static u64 new_context(struct mm_struct *mm)
>  	static u32 cur_idx = 1;
>  	u64 asid = atomic64_read(&mm->context.id);
>  	u64 generation = atomic64_read(&asid_generation);
> +	unsigned long idx;
>  
>  	if (asid != 0) {
>  		u64 newasid = asid2ctxid(ctxid2asid(asid), generation);
> @@ -194,9 +195,11 @@ static u64 new_context(struct mm_struct *mm)
>  	 * a reserved TTBR0 for the init_mm and we allocate ASIDs in even/odd
>  	 * pairs.
>  	 */
> -	asid = find_next_zero_bit(asid_map, NUM_USER_ASIDS, cur_idx);
> -	if (asid != NUM_USER_ASIDS)
> +	idx = find_next_zero_bit(asid_map, NUM_USER_ASIDS, cur_idx);
> +	if (idx != NUM_USER_ASIDS) {
> +		asid = idx;
>  		goto set_asid;
> +	}
>  
>  	/* We're out of ASIDs, so increment the global generation count */
>  	generation = atomic64_add_return_relaxed(ASID_FIRST_VERSION,
> @@ -204,7 +207,10 @@ static u64 new_context(struct mm_struct *mm)
>  	flush_context();
>  
>  	/* We have more ASIDs than CPUs, so this will always succeed */
> -	asid = find_next_zero_bit(asid_map, NUM_USER_ASIDS, 1);
> +	idx = find_next_zero_bit(asid_map, NUM_USER_ASIDS, 1);
> +	if (unlikely(idx == NUM_USER_ASIDS))
> +		panic("ASID allocator: no free ASIDs after rollover\n");
> +	asid = idx;

How do you even hit this? Is it if you have fewer ASIDs than the number
of CPUs? The kernel already complains about that case in asids_update_limit.

Anyway, given that you are not following up on maintainers' comments, I
assume these patches are automatically generated.

-- 
Catalin
