[PATCH] arm64: uaccess: avoid blocking within critical sections
Mark Rutland
mark.rutland at arm.com
Mon Nov 22 04:58:20 PST 2021
As Vincent reports in:
https://lore.kernel.org/r/20211118163417.21617-1-vincent.whitchurch@axis.com
The put_user() in schedule_tail() can get stuck in a livelock, similar
to a problem recently fixed on riscv in commit:
285a76bb2cf51b0c ("riscv: evaluate put_user() arg before enabling user access")
In __raw_put_user() we have a critical section between
uaccess_ttbr0_enable() and uaccess_ttbr0_disable() where we cannot
safely call into the scheduler without having taken an exception, as
schedule() and other scheduling functions will not save/restore the
TTBR0 state. If either of the `x` or `ptr` arguments to __raw_put_user()
contains a blocking call, we may call into the scheduler within the
critical section (see the sketch after the two cases below). This can
result in two problems:
1) The access within the critical section will occur without the
required TTBR0 tables installed. This will fault, and where the
required tables permit access, the access will be retried without the
required tables, resulting in a livelock.
2) When TTBR0 SW PAN is in use, check_and_switch_context() does not
modify TTBR0, leaving a stale value installed. The mappings of the
blocked task will erroneously be accessible to regular accesses in
the context of the new task. Additionally, if the tables are
subsequently freed, local TLB maintenance required to reuse the ASID
may be lost, potentially resulting in TLB corruption (e.g. in the
presence of CnP).
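To make the first case concrete, here is a hypothetical caller (the helper
name is invented for illustration and is not the code from Vincent's
report). With the old macro, the value expression is expanded inside the
TTBR0 critical section, so any blocking call it contains runs with the
uaccess tables live:

  #include <linux/types.h>
  #include <linux/uaccess.h>

  /* Hypothetical helper: anything that may end up calling schedule(). */
  extern pid_t lookup_pid_may_sleep(void);

  static int set_child_tid(pid_t __user *uptr)
  {
  	/*
  	 * Before this patch, lookup_pid_may_sleep() was evaluated between
  	 * uaccess_ttbr0_enable() and uaccess_ttbr0_disable() inside
  	 * __raw_put_user(), hitting problems 1) and 2) above.
  	 */
  	return put_user(lookup_pid_may_sleep(), uptr);
  }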
The same issue exists for __raw_get_user() in the critical section
between uaccess_ttbr0_enable() and uaccess_ttbr0_disable().
A similar issue exists for __get_kernel_nofault() and
__put_kernel_nofault() for the critical section between
__uaccess_enable_tco_async() and __uaccess_disable_tco_async(), as the
TCO state is not context-switched by direct calls into the scheduler.
Here the TCO state may be lost from the context of the current task,
resulting in unexpected asynchronous tag check faults. It may also be
leaked to another task, suppressing expected tag check faults.
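An analogous hypothetical example for the nofault helpers (the
address-lookup helper is invented for illustration). Before this patch,
the `dst` expression was expanded between __uaccess_enable_tco_async() and
__uaccess_disable_tco_async(), so a blocking call within it could lose or
leak the TCO state as described above:

  #include <linux/uaccess.h>

  /* Hypothetical helper: computes a destination address and may sleep. */
  extern long *lookup_dst_may_sleep(void);

  static int poke_kernel_long(long val)
  {
  	/*
  	 * Before this patch, lookup_dst_may_sleep() was evaluated inside
  	 * the TCO-async critical section of __put_kernel_nofault().
  	 */
  	__put_kernel_nofault(lookup_dst_may_sleep(), &val, long, efault);
  	return 0;
  efault:
  	return -EFAULT;
  }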
To fix all of these cases, we must ensure that we do not directly call
into the scheduler in their respective critical sections. This patch
reworks __raw_put_user(), __raw_get_user(), __get_kernel_nofault(), and
__put_kernel_nofault(), ensuring that parameters are evaluated outside
of the critical sections. To make this requirement clear, comments are
added describing the problem, and blank lines are added to separate the
critical sections from the rest of the macros.
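For reference, the reworked __raw_put_user() (condensed from the diff
below) evaluates both arguments into locals before entering the critical
section, and only the locals are used between uaccess_ttbr0_enable() and
uaccess_ttbr0_disable():

  #define __raw_put_user(x, ptr, err)					\
  do {									\
  	__typeof__(*(ptr)) __user *__rpu_ptr = (ptr);	/* may block */	\
  	__typeof__(*(ptr)) __rpu_val = (x);		/* may block */	\
  	__chk_user_ptr(__rpu_ptr);					\
  									\
  	uaccess_ttbr0_enable();						\
  	__raw_put_mem("sttr", __rpu_val, __rpu_ptr, err);		\
  	uaccess_ttbr0_disable();					\
  } while (0)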
For __raw_get_user() and __raw_put_user() the `err` parameter is
conditionally assigned to, and we must currently evaluate this in the
critical section. This behaviour is relied upon by the signal code, which
uses chains of __put_user_error() and __get_user_error(), checking the
return value at the end (see the sketch below). In all cases, the `err`
parameter is a plain
int rather than a more complex expression with a blocking call, so this
is safe.
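For illustration, the pattern relied upon looks roughly like this (the
field and variable names are invented here, not copied from signal.c): a
single plain `int` is threaded through a chain of accessors and only
checked once at the end.

  int err = 0;

  __put_user_error(val0, &frame->field0, err);	/* may set err */
  __put_user_error(val1, &frame->field1, err);	/* may set err */
  __get_user_error(val2, &frame->field2, err);	/* may set err */

  if (err)
  	return -EFAULT;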
In future we should try to clean up the `err` usage to remove the
potential for this to be a problem.
Aside from the changes to time of evaluation, there should be no
functional change as a result of this patch.
Reported-by: Vincent Whitchurch <vincent.whitchurch at axis.com>
Link: https://lore.kernel.org/r/20211118163417.21617-1-vincent.whitchurch@axis.com
Fixes: f253d827f33cb5a5990 ("arm64: uaccess: refactor __{get,put}_user")
Signed-off-by: Mark Rutland <mark.rutland at arm.com>
Cc: Will Deacon <will at kernel.org>
Cc: Catalin Marinas <catalin.marinas at arm.com>
---
arch/arm64/include/asm/uaccess.h | 48 ++++++++++++++++++++++++++++++++++------
1 file changed, 41 insertions(+), 7 deletions(-)
This supersedes Vincent's patch, and my replies:
https://lore.kernel.org/r/20211118163417.21617-1-vincent.whitchurch@axis.com
https://lore.kernel.org/r/20211119130419.GA25912@lakrids.cambridge.arm.com
https://lore.kernel.org/r/YZtqluBcA1mjzb17@FVFF77S0Q05N
Other than the case of `err` as described in the commit message, I've tried to
ensure this covers all arguments evaluated within the critical sections, as
Will suggested in:
https://lore.kernel.org/r/20211119113306.GA13292@willie-the-truck
This has seen some light build+boot testing so far.
Thanks,
Mark.
diff --git a/arch/arm64/include/asm/uaccess.h b/arch/arm64/include/asm/uaccess.h
index 6e2e0b7031ab..3a5ff5e20586 100644
--- a/arch/arm64/include/asm/uaccess.h
+++ b/arch/arm64/include/asm/uaccess.h
@@ -281,12 +281,22 @@ do { \
(x) = (__force __typeof__(*(ptr)))__gu_val; \
} while (0)
+/*
+ * We must not call into the scheduler between uaccess_ttbr0_enable() and
+ * uaccess_ttbr0_disable(). As `x` and `ptr` could contain blocking functions,
+ * we must evaluate these outside of the critical section.
+ */
#define __raw_get_user(x, ptr, err) \
do { \
+ __typeof__(*(ptr)) __user *__rgu_ptr = (ptr); \
+ __typeof__(x) __rgu_val; \
__chk_user_ptr(ptr); \
+ \
uaccess_ttbr0_enable(); \
- __raw_get_mem("ldtr", x, ptr, err); \
+ __raw_get_mem("ldtr", __rgu_val, __rgu_ptr, err); \
uaccess_ttbr0_disable(); \
+ \
+ (x) = __rgu_val; \
} while (0)
#define __get_user_error(x, ptr, err) \
@@ -310,14 +320,22 @@ do { \
#define get_user __get_user
+/*
+ * We must not call into the scheduler between __uaccess_enable_tco_async() and
+ * __uaccess_disable_tco_async(). As `dst` and `src` may contain blocking
+ * functions, we must evaluate these outside of the critical section.
+ */
#define __get_kernel_nofault(dst, src, type, err_label) \
do { \
+ __typeof__(dst) __gkn_dst = (dst); \
+ __typeof__(src) __gkn_src = (src); \
int __gkn_err = 0; \
\
__uaccess_enable_tco_async(); \
- __raw_get_mem("ldr", *((type *)(dst)), \
- (__force type *)(src), __gkn_err); \
+ __raw_get_mem("ldr", *((type *)(__gkn_dst)), \
+ (__force type *)(__gkn_src), __gkn_err); \
__uaccess_disable_tco_async(); \
+ \
if (unlikely(__gkn_err)) \
goto err_label; \
} while (0)
@@ -351,11 +369,19 @@ do { \
} \
} while (0)
+/*
+ * We must not call into the scheduler between uaccess_ttbr0_enable() and
+ * uaccess_ttbr0_disable(). As `x` and `ptr` could contain blocking functions,
+ * we must evaluate these outside of the critical section.
+ */
#define __raw_put_user(x, ptr, err) \
do { \
- __chk_user_ptr(ptr); \
+ __typeof__(*(ptr)) __user *__rpu_ptr = (ptr); \
+ __typeof__(*(ptr)) __rpu_val = (x); \
+ __chk_user_ptr(__rpu_ptr); \
+ \
uaccess_ttbr0_enable(); \
- __raw_put_mem("sttr", x, ptr, err); \
+ __raw_put_mem("sttr", __rpu_val, __rpu_ptr, err); \
uaccess_ttbr0_disable(); \
} while (0)
@@ -380,14 +406,22 @@ do { \
#define put_user __put_user
+/*
+ * We must not call into the scheduler between __uaccess_enable_tco_async() and
+ * __uaccess_disable_tco_async(). As `dst` and `src` may contain blocking
+ * functions, we must evaluate these outside of the critical section.
+ */
#define __put_kernel_nofault(dst, src, type, err_label) \
do { \
+ __typeof__(dst) __pkn_dst = (dst); \
+ __typeof__(src) __pkn_src = (src); \
int __pkn_err = 0; \
\
__uaccess_enable_tco_async(); \
- __raw_put_mem("str", *((type *)(src)), \
- (__force type *)(dst), __pkn_err); \
+ __raw_put_mem("str", *((type *)(__pkn_src)), \
+ (__force type *)(__pkn_dst), __pkn_err); \
__uaccess_disable_tco_async(); \
+ \
if (unlikely(__pkn_err)) \
goto err_label; \
} while(0)
--
2.11.0