[RFC PATCH v1 0/4] riscv: mm: Defer tlb flush to context_switch

Guo Ren guoren at kernel.org
Sun Nov 2 19:44:25 PST 2025


On Thu, Oct 30, 2025 at 9:57 PM Xu Lu <luxu.kernel at bytedance.com> wrote:
>
> When we need to flush the tlb of a remote cpu, there is no need to send
> an IPI if the target cpu is not using the asid we want to flush. Instead,
> we can cache the tlb flush info in a percpu buffer and defer the tlb
> flush to the next context_switch.
>
> This reduces the number of IPIs due to tlb flush:
>
> * ltp - mmapstress01
> Before: ~108k
> After: ~46k
Great result!

I have some questions:
1. Do we need an accurate per-address flush via a new queue of
flush_tlb_range_data entries? Why not simply flush the whole asid?
2. If we reused the context_tlb_flush_pending mechanism, could
mmapstress01 improve beyond ~46k?
3. For kernel address space flushes we must use an IPI flush
immediately, but I did not see this patch series handle that case, or am
I wrong?
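To make questions 1 and 3 concrete, here is a rough user-space sketch of the deferral decision as I understand it. All names (loaded_asid, flush_req, remote_flush, the RV64 kernel-address split) are illustrative assumptions, not taken from the actual patch:

```c
/* Sketch of the deferred-flush decision, modeled in plain user-space C.
 * All identifiers here are hypothetical, not from the patch series. */
#include <stdbool.h>

#define NR_CPUS     4
#define QUEUE_DEPTH 8

struct flush_req { unsigned long start, size, asid; };

struct cpu_state {
    unsigned long loaded_asid;           /* ASID currently active on this cpu */
    struct flush_req queue[QUEUE_DEPTH]; /* cached tlb flush info */
    int queued;
    int ipis;                            /* immediate IPI flushes sent */
};

static struct cpu_state cpus[NR_CPUS];

/* Kernel mappings are shared by all cpus, so a kernel-range flush
 * cannot be deferred (this is what question 3 is about). */
static bool is_kernel_range(unsigned long start)
{
    return start >= 0xffffffc000000000UL; /* illustrative RV64 split */
}

static void remote_flush(int cpu, unsigned long start, unsigned long size,
                         unsigned long asid)
{
    struct cpu_state *c = &cpus[cpu];

    if (is_kernel_range(start) || c->loaded_asid == asid) {
        /* Target cpu is using this ASID, or the range is kernel
         * address space: an immediate IPI flush is unavoidable. */
        c->ipis++;
        return;
    }
    /* Otherwise just record the flush info; it is replayed on the
     * target cpu's next context_switch. */
    if (c->queued < QUEUE_DEPTH) {
        c->queue[c->queued++] = (struct flush_req){ start, size, asid };
    } else {
        /* Queue overflow: fall back to flushing the whole ASID (this
         * fallback is what question 1 asks about making the default). */
        c->queued = 0;
        c->ipis++;
    }
}

/* Replay any deferred flushes when the cpu switches context. */
static void context_switch(int cpu, unsigned long next_asid)
{
    struct cpu_state *c = &cpus[cpu];
    /* a local sfence.vma per queued request would go here */
    c->queued = 0;
    c->loaded_asid = next_asid;
}
```

The point of question 1 is that the per-address queue only pays off if replaying N precise sfence.vma instructions at context_switch is cheaper than one whole-ASID flush.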

>
> Future plan in the next version:
>
> This patch series reduces IPIs by deferring tlb flush to
> context_switch. It does not clear the mm_cpumask of the target mm_struct.
> In the next version, I will apply a threshold to the number of ASIDs
> maintained by each cpu's tlb. Once the threshold is exceeded, the ASID
> that has not been used for the longest time will be flushed out, and the
> current cpu will be cleared in that mm's mm_cpumask.
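If I follow the plan above, the per-cpu tracking would behave roughly like the sketch below. This is only my reading of the description; the structure, names, and the value-0 "no eviction" sentinel are all assumptions:

```c
/* Hypothetical sketch of the per-cpu ASID threshold with LRU eviction
 * described in the cover letter; none of these names are from the patch. */
#define ASID_THRESHOLD 4

struct asid_lru {
    unsigned long asid[ASID_THRESHOLD];
    unsigned long last_used[ASID_THRESHOLD];
    int nr;
    unsigned long clock;
};

/* Record that @asid is resident in this cpu's tlb. Returns the evicted
 * ASID (whose entries would be flushed and whose mm_cpumask bit would be
 * cleared for this cpu), or 0 when there was still room. */
static unsigned long asid_touch(struct asid_lru *l, unsigned long asid)
{
    int i, victim = 0;
    unsigned long evicted = 0;

    l->clock++;
    for (i = 0; i < l->nr; i++) {
        if (l->asid[i] == asid) {   /* already tracked: refresh LRU stamp */
            l->last_used[i] = l->clock;
            return 0;
        }
    }
    if (l->nr < ASID_THRESHOLD) {   /* below threshold: just insert */
        i = l->nr++;
    } else {                        /* evict the least recently used ASID */
        for (i = 1; i < ASID_THRESHOLD; i++)
            if (l->last_used[i] < l->last_used[victim])
                victim = i;
        evicted = l->asid[victim];
        i = victim;
    }
    l->asid[i] = asid;
    l->last_used[i] = l->clock;
    return evicted;
}
```

One design question this raises: once an ASID is evicted and the cpu is cleared from mm_cpumask, any still-queued deferred flushes for that ASID become unnecessary and could be dropped.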
>
> Thanks in advance for your comments.
>
> Xu Lu (4):
>   riscv: mm: Introduce percpu loaded_asid
>   riscv: mm: Introduce percpu tlb flush queue
>   riscv: mm: Enqueue tlbflush info if task is not running on target cpu
>   riscv: mm: Perform tlb flush during context_switch
>
>  arch/riscv/include/asm/mmu_context.h |  1 +
>  arch/riscv/include/asm/tlbflush.h    |  4 ++
>  arch/riscv/mm/context.c              | 10 ++++
>  arch/riscv/mm/tlbflush.c             | 76 +++++++++++++++++++++++++++-
>  4 files changed, 90 insertions(+), 1 deletion(-)
>
> --
> 2.20.1
>


-- 
Best Regards
 Guo Ren


