[PATCH] arm64: uaccess: simplify uaccess_mask_ptr()

Robin Murphy robin.murphy at arm.com
Thu Sep 22 09:46:13 PDT 2022


On 22/09/2022 4:10 pm, Mark Rutland wrote:
> We introduced uaccess pointer masking for arm64 in commit:
> 
>    4d8efc2d5ee4c9cc ("arm64: Use pointer masking to limit uaccess speculation")
> 
> Which was intended to prevent speculative uaccesses to kernel memory on
> CPUs where access permissions were not respected under speculation.
> 
> At the time, the uaccess primitives were occasionally used to access
> kernel memory, with the maximum permitted address held in
> thread_info::addr_limit. Consequently, the address masking needed to
> take this dynamic limit into account.
> 
> Subsequently the uaccess primitives were reworked such that they are
> only used for user memory, and as of commit:
> 
>    3d2403fd10a1dbb3 ("arm64: uaccess: remove set_fs()")
> 
> ... the address limit was made a compile-time constant, but the logic
> was otherwise unchanged.
> 
> Regardless of the configured VA size or whether TBI is in use, the
> address space can be divided into three ranges:
> 
> * The TTBR0 VA range, for which any valid pointer has bit 55 *clear*,
>    and any non-tag bits [63:56] must match bit 55 (i.e. must be clear).
> 
> * The TTBR1 VA range, for which any valid pointer has bit 55 *set*, and
>    any non-tag bits [63:56] must match bit 55 (i.e. must be set).
> 
> * The gap between the TTBR0 and TTBR1 ranges, where bit 55 may be set or
>    clear, but any access will result in a fault.
> 
> As the uaccess primitives are now only used for user memory in the TTBR0
> VA range, we can prevent generation of TTBR1 addresses by clearing bit
> 55, which will either result in a TTBR0 address or a faulting address
> between the TTBR VA ranges.
> 
> This is beneficial for code generation as:
> 
> * We no longer clobber the condition codes.
> 
> * We no longer burn a register on (TASK_SIZE_MAX - 1).
> 
> * We no longer need to consume the untagged pointer.
> 
> When building a v6.0-rc3 defconfig kernel with GCC 12.1.0, this change
> makes the resulting Image 64KiB smaller.
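
To make the effect of the masking concrete, here is a minimal standalone
model of the new logic (illustrative only, not part of the patch; the
names and example addresses are mine and assume a 48-bit VA
configuration):

	#include <stdint.h>
	#include <stdio.h>

	#define BIT(n)	(1ULL << (n))

	/* Model of the new __uaccess_mask_ptr(): clear bit 55, as the
	 * patch's "bic %0, %1, %2" does. */
	static uint64_t mask_ptr_new(uint64_t ptr)
	{
		return ptr & ~BIT(55);
	}

	int main(void)
	{
		uint64_t user   = 0x0000aaaabbbbccccULL; /* TTBR0: bit 55 clear */
		uint64_t kernel = 0xffff800008000000ULL; /* TTBR1: bit 55 set   */

		/* The user pointer is unchanged; the kernel pointer becomes
		 * 0xff7f800008000000, which lies in the non-canonical gap
		 * between the TTBR0 and TTBR1 ranges and so must fault. */
		printf("user:   %#llx -> %#llx\n",
		       (unsigned long long)user,
		       (unsigned long long)mask_ptr_new(user));
		printf("kernel: %#llx -> %#llx\n",
		       (unsigned long long)kernel,
		       (unsigned long long)mask_ptr_new(kernel));
		return 0;
	}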

I have a vague feeling there was some thought behind sanitising to 
specifically NULL - otherwise even the original patch could have used a 
single AND rather than the BICS/CSEL - but it was probably just the 
overwhelming uncertainty of everything at the time, when we could at 
least reason that maximising the chance of forcing malicious speculation 
into a fault seemed safest. By now, though, I'm a bit more confident in 
agreeing that any non-kernel address is OK in a uaccess context.
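
For comparison, a model of the old BICS/CSEL behaviour in the same style
as the sketch above (again illustrative; the names and the 48-bit
TASK_SIZE_MAX stand-in are mine, not the kernel's):

	#define EX_TASK_SIZE_MAX	(1ULL << 48)	/* stand-in limit */

	/* Model of untagged_addr(): sign-extend from bit 55, replacing
	 * the tag bits [63:56] with copies of bit 55. */
	static uint64_t untag(uint64_t ptr)
	{
		return (uint64_t)((int64_t)(ptr << 8) >> 8);
	}

	/* Model of the old sequence, i.e. "bics xzr, untagged, limit"
	 * with limit = TASK_SIZE_MAX - 1, then "csel safe, ptr, xzr, eq":
	 * NULL unless the untagged pointer has no bits above the limit. */
	static uint64_t mask_ptr_old(uint64_t ptr)
	{
		return (untag(ptr) & ~(EX_TASK_SIZE_MAX - 1)) ? 0 : ptr;
	}

Either way no TTBR1 address can reach the access: the old sequence forced
a guaranteed-faulting NULL, while the new one yields either a TTBR0
address or one in the faulting gap.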

Reviewed-by: Robin Murphy <robin.murphy at arm.com>

> Signed-off-by: Mark Rutland <mark.rutland at arm.com>
> Cc: Catalin Marinas <catalin.marinas at arm.com>
> Cc: James Morse <james.morse at arm.com>
> Cc: Robin Murphy <robin.murphy at arm.com>
> Cc: Will Deacon <will at kernel.org>
> ---
>   arch/arm64/include/asm/uaccess.h | 19 ++++++++++---------
>   1 file changed, 10 insertions(+), 9 deletions(-)
> 
> diff --git a/arch/arm64/include/asm/uaccess.h b/arch/arm64/include/asm/uaccess.h
> index 2fc9f0861769a..e69559826cb8c 100644
> --- a/arch/arm64/include/asm/uaccess.h
> +++ b/arch/arm64/include/asm/uaccess.h
> @@ -203,9 +203,11 @@ static inline void uaccess_enable_privileged(void)
>   }
>   
>   /*
> - * Sanitise a uaccess pointer such that it becomes NULL if above the maximum
> - * user address. In case the pointer is tagged (has the top byte set), untag
> - * the pointer before checking.
> + * Sanitize a uaccess pointer such that it cannot reach any kernel address.
> + *
> + * Clearing bit 55 ensures the pointer cannot address any portion of the TTBR1
> + * address range (i.e. any kernel address), and either the pointer falls within
> + * the TTBR0 address range or must cause a fault.
>    */
>   #define uaccess_mask_ptr(ptr) (__typeof__(ptr))__uaccess_mask_ptr(ptr)
>   static inline void __user *__uaccess_mask_ptr(const void __user *ptr)
> @@ -213,12 +215,11 @@ static inline void __user *__uaccess_mask_ptr(const void __user *ptr)
>   	void __user *safe_ptr;
>   
>   	asm volatile(
> -	"	bics	xzr, %3, %2\n"
> -	"	csel	%0, %1, xzr, eq\n"
> -	: "=&r" (safe_ptr)
> -	: "r" (ptr), "r" (TASK_SIZE_MAX - 1),
> -	  "r" (untagged_addr(ptr))
> -	: "cc");
> +	"	bic	%0, %1, %2\n"
> +	: "=r" (safe_ptr)
> +	: "r" (ptr),
> +	  "i" (BIT(55))
> +	);
>   
>   	csdb();
>   	return safe_ptr;
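
Worth noting from the diff itself: the new sequence drops the "cc"
clobber and the two extra inputs (the (TASK_SIZE_MAX - 1) register and
the untagged pointer), which is where the codegen savings described in
the commit message come from, while the csdb() speculation barrier after
the masking is retained unchanged.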


