[PATCH v3] riscv: Ensure only ASIDLEN is used for sfence.vma
Palmer Dabbelt
palmer at dabbelt.com
Thu Mar 31 17:06:29 PDT 2022
On Wed, 30 Mar 2022 22:59:06 PDT (-0700), alistair.francis at opensource.wdc.com wrote:
> From: Alistair Francis <alistair.francis at wdc.com>
>
> When we set the value of context.id in __new_context(), we combine both
> the asid and the current_version in the return statement:
>
> return asid | ver;
>
> This means that when local_flush_tlb_all_asid() is called with an asid
> taken from context.id, we can pass a value that still contains the
> version bits.
>
> We get away with this because hardware ignores the extra bits, as the
> RISC-V specification states:
>
> "bits SXLEN-1:ASIDMAX of the value held in rs2 are reserved for future
> standard use. Until their use is defined by a standard extension, they
> should be zeroed by software and ignored by current implementations."
>
> but it is still a bug worth addressing, as we are setting bits that
> should be zero.
>
> This patch uses asid_mask when calling sfence.vma to ensure the asid is
> always the correct length (ASIDLEN). This is similar to what we already
> do in arch/riscv/mm/context.c.
>
> Fixes: 3f1e782998cd ("riscv: add ASID-based tlbflushing methods")
> Signed-off-by: Alistair Francis <alistair.francis at wdc.com>
> ---
> v3:
> - Use helper function
> v2:
> - Pass in pre-masked value
>
> arch/riscv/include/asm/mmu_context.h | 2 ++
> arch/riscv/mm/context.c | 5 +++++
> arch/riscv/mm/tlbflush.c | 2 +-
> 3 files changed, 8 insertions(+), 1 deletion(-)
>
> diff --git a/arch/riscv/include/asm/mmu_context.h b/arch/riscv/include/asm/mmu_context.h
> index 7030837adc1a..94e82c9e17eb 100644
> --- a/arch/riscv/include/asm/mmu_context.h
> +++ b/arch/riscv/include/asm/mmu_context.h
> @@ -16,6 +16,8 @@
> void switch_mm(struct mm_struct *prev, struct mm_struct *next,
> struct task_struct *task);
>
> +unsigned long get_mm_asid(struct mm_struct *mm);
> +
> #define activate_mm activate_mm
> static inline void activate_mm(struct mm_struct *prev,
> struct mm_struct *next)
> diff --git a/arch/riscv/mm/context.c b/arch/riscv/mm/context.c
> index 7acbfbd14557..14aec5bacbc1 100644
> --- a/arch/riscv/mm/context.c
> +++ b/arch/riscv/mm/context.c
> @@ -302,6 +302,11 @@ static inline void flush_icache_deferred(struct mm_struct *mm, unsigned int cpu)
> #endif
> }
>
> +unsigned long get_mm_asid(struct mm_struct *mm)
> +{
> + return atomic_long_read(&mm->context.id) & asid_mask;
> +}
> +
> void switch_mm(struct mm_struct *prev, struct mm_struct *next,
> struct task_struct *task)
> {
> diff --git a/arch/riscv/mm/tlbflush.c b/arch/riscv/mm/tlbflush.c
> index 37ed760d007c..9c89c4951bee 100644
> --- a/arch/riscv/mm/tlbflush.c
> +++ b/arch/riscv/mm/tlbflush.c
> @@ -42,7 +42,7 @@ static void __sbi_tlb_flush_range(struct mm_struct *mm, unsigned long start,
> /* check if the tlbflush needs to be sent to other CPUs */
> broadcast = cpumask_any_but(cmask, cpuid) < nr_cpu_ids;
> if (static_branch_unlikely(&use_asid_allocator)) {
> - unsigned long asid = atomic_long_read(&mm->context.id);
> + unsigned long asid = get_mm_asid(mm);
>
> if (broadcast) {
> sbi_remote_sfence_vma_asid(cmask, start, size, asid);
> --
> 2.35.1
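To make the bug concrete, here is a rough sketch. The constants and
helpers below are illustrative only (a made-up 16-bit ASIDLEN, simplified
types, not the actual allocator code); they just show how the version
ends up above the ASID in context.id and why the value has to be masked
before it is used as the sfence.vma / SBI asid operand:

/* Illustrative only: the real ASIDLEN is probed from hardware. */
#define EX_ASID_BITS    16
#define EX_ASID_MASK    ((1UL << EX_ASID_BITS) - 1)

/* __new_context()-style packing: the generation lives above the ASID. */
static unsigned long pack_context_id(unsigned long asid, unsigned long ver)
{
        return asid | (ver << EX_ASID_BITS);    /* cf. "return asid | ver;" */
}

/* What get_mm_asid() does: keep only the low ASIDLEN bits. */
static unsigned long context_id_to_asid(unsigned long cntx)
{
        return cntx & EX_ASID_MASK;
}

With ver == 2 and asid == 0x5, pack_context_id() yields 0x20005; passing
that straight to sfence.vma only works because hardware ignores the bits
above ASIDMAX, whereas context_id_to_asid() returns the intended 0x5.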
The autobuilders are finding some failures. I think this
diff --git a/arch/riscv/mm/context.c b/arch/riscv/mm/context.c
index 6df4f22f0a3c..8a9c7bc2297a 100644
--- a/arch/riscv/mm/context.c
+++ b/arch/riscv/mm/context.c
@@ -12,6 +12,7 @@
 #include <linux/slab.h>
 #include <linux/spinlock.h>
 #include <linux/static_key.h>
+#include <asm/atomic.h>
 #include <asm/tlbflush.h>
 #include <asm/cacheflush.h>
 #include <asm/mmu_context.h>
should do it. I've squashed that in and put it on palmer/riscv-asidlen
so the autobuilders can find it. It's going to be too late for rc1, but
this seems fine for fixes, so it's no big deal.
Feel free to send a v4 if there's anything else, but if that's enough to
fix it then no need to. If there's no v4 and nothing goes wrong, I'll
cherry-pick it past rc1 when I re-fork my fixes tree.
Thanks!