[PATCH 1/3] arm64/fpsimd: Ensure SME storage is allocated after SVE VL changes
David Spickett
David.Spickett at arm.com
Mon Jul 17 04:19:15 PDT 2023
I've confirmed on QEMU and Arm's FVP that this fixes the issue I was seeing.
From: Mark Brown <broonie at kernel.org>
Sent: 13 July 2023 21:06
To: Catalin Marinas <Catalin.Marinas at arm.com>; Will Deacon <will at kernel.org>; Shuah Khan <shuah at kernel.org>
Cc: David Spickett <David.Spickett at arm.com>; linux-arm-kernel at lists.infradead.org <linux-arm-kernel at lists.infradead.org>; linux-kernel at vger.kernel.org <linux-kernel at vger.kernel.org>; linux-kselftest at vger.kernel.org <linux-kselftest at vger.kernel.org>; Mark Brown <broonie at kernel.org>; stable at vger.kernel.org <stable at vger.kernel.org>
Subject: [PATCH 1/3] arm64/fpsimd: Ensure SME storage is allocated after SVE VL changes
When we reconfigure the SVE vector length we discard the backing storage
for the SVE vectors and then reallocate on next SVE use, leaving the SME
specific state alone. This means that we do not enable SME traps if they
were already disabled, so userspace code can enter streaming mode without
trapping, leaving the task in a state where any attempt to save its state
will fault.
Since the ABI does not specify that changing the SVE vector length disturbs
SME state, and since SVE code may not be aware of SME code in the process,
we shouldn't simply discard any ZA state. Instead, immediately reallocate
the storage for SVE if SME is active, and disable SME if we change the SVE
vector length while there is no SME state active.
Re-enabling SME traps on SVE vector length changes, while leaving the SME
state stored, would make the overall code more complex since we would then
have a state where valid SME state is stored but the task can still take
an SME trap.
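
As an illustration of the failing sequence, here is a hypothetical userspace
sketch (not the original reproducer). It assumes an SME-capable CPU and
kernel plus headers that define PR_SVE_SET_VL, and it omits feature
detection for brevity; the SMSTART/SMSTOP instructions are written as
.inst encodings so no SME-aware assembler is required.

#include <stdio.h>
#include <sys/prctl.h>

int main(void)
{
	/* Start using SME: set PSTATE.ZA so the task has live SME state. */
	asm volatile(".inst 0xd503457f" /* smstart za */ ::: "memory");

	/* Change the SVE vector length; the kernel frees the SVE storage here. */
	if (prctl(PR_SVE_SET_VL, 32) < 0)
		perror("PR_SVE_SET_VL");

	/*
	 * Enter streaming mode. Without this patch no trap is taken, so
	 * nothing reallocates the SVE backing storage the kernel needs
	 * the next time it saves this task's state.
	 */
	asm volatile(".inst 0xd503437f" /* smstart sm */ ::: "memory");
	asm volatile(".inst 0xd503427f" /* smstop sm */ ::: "memory");

	return 0;
}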
Fixes: 9e4ab6c89109 ("arm64/sme: Implement vector length configuration prctl()s")
Reported-by: David Spickett <David.Spickett at arm.com>
Signed-off-by: Mark Brown <broonie at kernel.org>
Cc: stable at vger.kernel.org
---
arch/arm64/kernel/fpsimd.c | 32 +++++++++++++++++++++++++-------
1 file changed, 25 insertions(+), 7 deletions(-)
diff --git a/arch/arm64/kernel/fpsimd.c b/arch/arm64/kernel/fpsimd.c
index 7a1aeb95d7c3..a527b95c06e7 100644
--- a/arch/arm64/kernel/fpsimd.c
+++ b/arch/arm64/kernel/fpsimd.c
@@ -847,6 +847,9 @@ void sve_sync_from_fpsimd_zeropad(struct task_struct *task)
 int vec_set_vector_length(struct task_struct *task, enum vec_type type,
 			  unsigned long vl, unsigned long flags)
 {
+	bool free_sme = false;
+	bool alloc_sve = false;
+
 	if (flags & ~(unsigned long)(PR_SVE_VL_INHERIT |
 				     PR_SVE_SET_VL_ONEXEC))
 		return -EINVAL;
@@ -897,22 +900,37 @@ int vec_set_vector_length(struct task_struct *task, enum vec_type type,
 		task->thread.fp_type = FP_STATE_FPSIMD;
 	}
 
-	if (system_supports_sme() && type == ARM64_VEC_SME) {
-		task->thread.svcr &= ~(SVCR_SM_MASK |
-				       SVCR_ZA_MASK);
-		clear_thread_flag(TIF_SME);
+	if (system_supports_sme()) {
+		if (type == ARM64_VEC_SME ||
+		    !(task->thread.svcr & (SVCR_SM_MASK | SVCR_ZA_MASK))) {
+			/*
+			 * We are changing the SME VL or weren't using
+			 * SME anyway, discard the state and force a
+			 * reallocation.
+			 */
+			task->thread.svcr &= ~(SVCR_SM_MASK |
+					       SVCR_ZA_MASK);
+			clear_thread_flag(TIF_SME);
+			free_sme = true;
+		} else {
+			alloc_sve = true;
+		}
 	}
 
 	if (task == current)
 		put_cpu_fpsimd_context();
 
 	/*
-	 * Force reallocation of task SVE and SME state to the correct
-	 * size on next use:
+	 * Free the changed states if they are not in use, they will
+	 * be reallocated to the correct size on next use. If we need
+	 * SVE state due to having untouched SME state then reallocate
+	 * it immediately.
 	 */
 	sve_free(task);
-	if (system_supports_sme() && type == ARM64_VEC_SME)
+	if (free_sme)
 		sme_free(task);
+	if (alloc_sve)
+		sve_alloc(task, true);
 
 	task_set_vl(task, type, vl);
 
--
2.30.2