[PATCH v3 4/4] arm64: support batched/deferred tlb shootdown during page reclamation
Anshuman Khandual
anshuman.khandual at arm.com
Tue Sep 20 23:53:40 PDT 2022
On 8/22/22 13:51, Yicong Yang wrote:
> +static inline void arch_tlbbatch_add_mm(struct arch_tlbflush_unmap_batch *batch,
> +					struct mm_struct *mm,
> +					unsigned long uaddr)
> +{
> +	__flush_tlb_page_nosync(mm, uaddr);
> +}
> +
> +static inline void arch_tlbbatch_flush(struct arch_tlbflush_unmap_batch *batch)
> +{
> + dsb(ish);
> +}
Just wondering if arch_tlbbatch_add_mm() could also detect TLB invalidation
requests for contiguous mappings on a given mm, and generate a range-based
TLB invalidation such as flush_tlb_range() instead?
struct arch_tlbflush_unmap_batch, reachable via task->tlb_ubc.arch, could
track contiguous ranges as requests are queued up via arch_tlbbatch_add_mm(),
and any range formed could then be flushed in the subsequent
arch_tlbbatch_flush()? A rough sketch of the idea follows below.
OR

It might not be worth the effort and complexity, compared to the performance
improvement a range-based TLB flush would bring in?
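Something like the completely untested sketch below, perhaps? The mm/start/end
fields and the __flush_tlb_range_nosync() helper are made-up names purely to
illustrate the idea, not existing interfaces:

/*
 * Rough, untested sketch only. The fields below and the
 * __flush_tlb_range_nosync() helper are invented for illustration
 * and do not exist in the current tree.
 */
struct arch_tlbflush_unmap_batch {
	struct mm_struct *mm;		/* mm of the pending range */
	unsigned long start;		/* first address in the range */
	unsigned long end;		/* one past the last address */
};

static inline void arch_tlbbatch_add_mm(struct arch_tlbflush_unmap_batch *batch,
					struct mm_struct *mm,
					unsigned long uaddr)
{
	uaddr &= PAGE_MASK;

	/* Grow the pending range if the new request extends it. */
	if (batch->mm == mm && uaddr == batch->end) {
		batch->end += PAGE_SIZE;
		return;
	}

	/* Otherwise flush whatever has accumulated so far ... */
	if (batch->mm)
		__flush_tlb_range_nosync(batch->mm, batch->start, batch->end);

	/* ... and start tracking a new range. */
	batch->mm = mm;
	batch->start = uaddr;
	batch->end = uaddr + PAGE_SIZE;
}

static inline void arch_tlbbatch_flush(struct arch_tlbflush_unmap_batch *batch)
{
	/* Flush the last pending range before the final barrier. */
	if (batch->mm) {
		__flush_tlb_range_nosync(batch->mm, batch->start, batch->end);
		batch->mm = NULL;
	}
	dsb(ish);
}

Whether reclaim actually queues contiguous addresses from the same mm often
enough to amortise the extra bookkeeping on every arch_tlbbatch_add_mm() call
is the open question, of course.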