[PATCH v11 3/4] mm/tlbbatch: Introduce arch_flush_tlb_batched_pending()

Catalin Marinas catalin.marinas at arm.com
Fri Jul 21 11:24:34 PDT 2023


On Mon, Jul 17, 2023 at 09:10:03PM +0800, Yicong Yang wrote:
> From: Yicong Yang <yangyicong at hisilicon.com>
> 
> Currently we flush the whole mm in flush_tlb_batched_pending() to
> avoid a race between reclaim, which unmaps pages via a batched TLB
> flush, and mprotect/munmap/etc. Other architectures such as arm64
> may only need a synchronization barrier (DSB) here rather than a
> full mm flush. So add arch_flush_tlb_batched_pending() to allow an
> arch-specific implementation. This intends no functional change on
> x86, which still does a full mm flush.
> 
> Signed-off-by: Yicong Yang <yangyicong at hisilicon.com>

Reviewed-by: Catalin Marinas <catalin.marinas at arm.com>
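
For archive readers, here is a rough sketch of the hook pattern the
commit message describes. The bodies below are a paraphrase of the
idea rather than the exact hunks of the series, and
pending_batched_flush() is only a placeholder for the existing check
in flush_tlb_batched_pending():

    /* mm/rmap.c (schematic): instead of hard-coding flush_tlb_mm(),
     * let the architecture decide how to synchronise against a
     * pending batched reclaim flush. */
    void flush_tlb_batched_pending(struct mm_struct *mm)
    {
            if (pending_batched_flush(mm))  /* placeholder for the real check */
                    arch_flush_tlb_batched_pending(mm);
    }

    /* x86: behaviour unchanged, still a full mm flush. */
    static inline void arch_flush_tlb_batched_pending(struct mm_struct *mm)
    {
            flush_tlb_mm(mm);
    }

    /* arm64 (as I read the rest of the series): the TLBIs for the
     * batched unmap have already been issued and broadcast, so
     * waiting for them to complete with a DSB is sufficient. */
    static inline void arch_flush_tlb_batched_pending(struct mm_struct *mm)
    {
            dsb(ish);
    }

The point is that the generic code no longer forces a full mm flush;
an architecture whose batched flushes are already broadcast only
needs to wait for them to complete.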


