[PATCH] arm64/mm: Add memory barrier for mm_cid
levi.yun
yeoreum.yun at arm.com
Tue Mar 5 06:53:35 PST 2024
Currently, arm64's switch_mm() does not always provide the smp_mb()
ordering that the core scheduler code has relied on since commit
223baf9d17f25 ("sched: Fix performance regression introduced by mm_cid").

If switch_mm() does not imply an smp_mb(), sched_mm_cid_remote_clear()
can unset an actively used cid: after it successfully sets the lazy-put
flag, it may fail to observe the task that is still actively using the
mm and clear its cid anyway.

Add an smp_mb() in arm64's check_and_switch_context() to guarantee that
once sched_mm_cid_remote_clear() has successfully set lazy-put, it
observes the active task.
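
To illustrate the ordering involved, here is a minimal userspace sketch
(not kernel code: the remote_clear/switcher threads and the lazy_put /
mm_active variables are illustrative stand-ins for the per-cpu cid
lazy-put flag and the rq->curr check, and
atomic_thread_fence(memory_order_seq_cst) stands in for smp_mb()):

  /* Userspace analogue of the ordering switch_mm() must provide. */
  #include <pthread.h>
  #include <stdatomic.h>
  #include <stdio.h>

  static atomic_int lazy_put;   /* stand-in for the cid lazy-put flag   */
  static atomic_int mm_active;  /* stand-in for "a task is using the mm" */

  static int remote_saw_active;
  static int switcher_saw_lazy_put;

  /* Analogue of sched_mm_cid_remote_clear(): flag lazy-put, then check
   * whether the mm is still actively used before clearing the cid. */
  static void *remote_clear(void *arg)
  {
  	(void)arg;
  	atomic_store_explicit(&lazy_put, 1, memory_order_relaxed);
  	atomic_thread_fence(memory_order_seq_cst);      /* smp_mb() */
  	remote_saw_active = atomic_load_explicit(&mm_active,
  						 memory_order_relaxed);
  	return NULL;
  }

  /* Analogue of switch_mm() + switch_mm_cid(): publish the new active
   * mm, then look at the cid flags.  The fence here is the one arm64
   * currently lacks. */
  static void *switcher(void *arg)
  {
  	(void)arg;
  	atomic_store_explicit(&mm_active, 1, memory_order_relaxed);
  	atomic_thread_fence(memory_order_seq_cst);      /* smp_mb() */
  	switcher_saw_lazy_put = atomic_load_explicit(&lazy_put,
  						     memory_order_relaxed);
  	return NULL;
  }

  int main(void)
  {
  	pthread_t a, b;

  	pthread_create(&a, NULL, remote_clear, NULL);
  	pthread_create(&b, NULL, switcher, NULL);
  	pthread_join(a, NULL);
  	pthread_join(b, NULL);

  	printf("remote saw active=%d, switcher saw lazy_put=%d\n",
  	       remote_saw_active, switcher_saw_lazy_put);
  	return 0;
  }

With full barriers on both sides (the classic store-buffering / Dekker
pattern), the two reads can never both miss the other side's store:
either the remote clear observes the active user and backs off, or the
switching side observes the lazy-put flag and can handle it.  Without
the fence on the switcher side, both reads can return 0, which is the
window in which an actively used cid gets unset.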
Signed-off-by: levi.yun <yeoreum.yun at arm.com>
Fixes: 223baf9d17f2 ("sched: Fix performance regression introduced by mm_cid")
Cc: <stable at vger.kernel.org> # 6.4.x
Cc: Mathieu Desnoyers <mathieu.desnoyers at efficios.com>
Cc: Catalin Marinas <catalin.marinas at arm.com>
Cc: Mark Rutland <mark.rutland at arm.com>
Cc: Will Deacon <will at kernel.org>
Cc: Peter Zijlstra <peterz at infradead.org>
Cc: Aaron Lu <aaron.lu at intel.com>
---
I'm really sorry if you got this multiple times.
I had some problems with the SMTP server...
arch/arm64/mm/context.c | 5 +++++
1 file changed, 5 insertions(+)
diff --git a/arch/arm64/mm/context.c b/arch/arm64/mm/context.c
index 188197590fc9..7a9e8e6647a0 100644
--- a/arch/arm64/mm/context.c
+++ b/arch/arm64/mm/context.c
@@ -268,6 +268,11 @@ void check_and_switch_context(struct mm_struct *mm)
 	 */
 	if (!system_uses_ttbr0_pan())
 		cpu_switch_mm(mm->pgd, mm);
+
+	/*
+	 * See the comments on switch_mm_cid describing user -> user transition.
+	 */
+	smp_mb();
 }
 
 unsigned long arm64_mm_context_get(struct mm_struct *mm)
--
LEVI:{C3F47F37-75D8-414A-A8BA-3980EC8A46D7}