[PATCH v2] arm64: optimize flush tlb kernel range
Anshuman Khandual
anshuman.khandual at arm.com
Thu Sep 19 23:10:14 PDT 2024
On 9/20/24 09:25, Kefeng Wang wrote:
> Currently the kernel TLB is flushed page by page if the target
> VA range is less than MAX_DVM_OPS * PAGE_SIZE; otherwise we
> brutally issue a TLBI ALL.
>
> But we can optimize this when the CPU supports TLB range
> operations: convert to __flush_tlb_range_op(), as the other TLB
> range flushes do, to improve performance.
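The performance argument is easy to quantify. Below is a small
userspace model (my own sketch; it mirrors the scale/num walk in
__flush_tlb_range_op() in simplified form and ignores the LPA2
alignment handling) that counts the broadcast TLBIs each approach
issues. Page by page costs one TLBI per page (and the old code gives
up and nukes the whole TLB beyond MAX_DVM_OPS pages anyway), while
the range encoding collapses large spans into a handful of ops:

#include <stdio.h>

#define MAX_TLBI_RANGE_PAGES	(32UL << 16)	/* (31 + 1) << (5*3 + 1) */

/* Pages covered by one range TLBI with a given (num, scale) encoding. */
static unsigned long range_pages(unsigned long num, int scale)
{
	return (num + 1) << (5 * scale + 1);
}

/*
 * Count the TLBIs the range-based algorithm issues. Precondition:
 * pages <= MAX_TLBI_RANGE_PAGES, which the new check guarantees.
 */
static unsigned long count_range_ops(unsigned long pages)
{
	unsigned long ops = 0;
	int scale = 3;

	while (pages > 0) {
		if (pages == 1) {	/* leftover page: plain TLBI */
			pages = 0;
			ops++;
			continue;
		}
		long num = (long)(pages >> (5 * scale + 1)) - 1;
		if (num >= 0) {		/* one range op at this scale */
			pages -= range_pages(num, scale);
			ops++;
		}
		scale--;
	}
	return ops;
}

int main(void)
{
	unsigned long sizes[] = { 1, 2, 64, 512, 513, 4096,
				  MAX_TLBI_RANGE_PAGES };

	for (size_t i = 0; i < sizeof(sizes) / sizeof(sizes[0]); i++)
		printf("%7lu pages: %7lu TLBIs page-by-page, %lu with range ops\n",
		       sizes[i], sizes[i], count_range_ops(sizes[i]));
	return 0;
}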
>
> Co-developed-by: Yicong Yang <yangyicong at hisilicon.com>
> Signed-off-by: Yicong Yang <yangyicong at hisilicon.com>
> Signed-off-by: Kefeng Wang <wangkefeng.wang at huawei.com>
> ---
> v2:
> - address Catalin's comments and use __flush_tlb_range_op() directly
>
> arch/arm64/include/asm/tlbflush.h | 24 +++++++++++++++++-------
> 1 file changed, 17 insertions(+), 7 deletions(-)
>
> diff --git a/arch/arm64/include/asm/tlbflush.h b/arch/arm64/include/asm/tlbflush.h
> index 95fbc8c05607..42f0ec14fb2c 100644
> --- a/arch/arm64/include/asm/tlbflush.h
> +++ b/arch/arm64/include/asm/tlbflush.h
> @@ -492,19 +492,29 @@ static inline void flush_tlb_range(struct vm_area_struct *vma,
>
> static inline void flush_tlb_kernel_range(unsigned long start, unsigned long end)
> {
> - unsigned long addr;
> + const unsigned long stride = PAGE_SIZE;
> + unsigned long pages;
> +
> + start = round_down(start, stride);
> + end = round_up(end, stride);
> + pages = (end - start) >> PAGE_SHIFT;
>
> - if ((end - start) > (MAX_DVM_OPS * PAGE_SIZE)) {
> + /*
> + * When not using TLB range ops, we can handle up to
> + * (MAX_DVM_OPS - 1) pages;
> + * when using TLB range ops, we can handle up to
> + * MAX_TLBI_RANGE_PAGES pages.
> + */
> + if ((!system_supports_tlb_range() &&
> + (end - start) >= (MAX_DVM_OPS * stride)) ||
> + pages > MAX_TLBI_RANGE_PAGES) {
> flush_tlb_all();
> return;
> }
Could the above conditional check for flush_tlb_all() be factored out
into a helper, which could also be used in __flush_tlb_range_nosync()?
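Something like the sketch below (completely untested; the helper name
is just a placeholder, and the condition is lifted verbatim from this
patch). __flush_tlb_range_nosync() could then call it in place of its
own size checks:

static inline bool __flush_tlb_range_limit_excess(unsigned long start,
		unsigned long end, unsigned long pages, unsigned long stride)
{
	/*
	 * Without TLB range ops we flush one stride at a time, so at
	 * most (MAX_DVM_OPS - 1) pages; with range ops, at most
	 * MAX_TLBI_RANGE_PAGES pages.
	 */
	if ((!system_supports_tlb_range() &&
	     (end - start) >= (MAX_DVM_OPS * stride)) ||
	    pages > MAX_TLBI_RANGE_PAGES)
		return true;

	return false;
}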
>
> - start = __TLBI_VADDR(start, 0);
> - end = __TLBI_VADDR(end, 0);
> -
> dsb(ishst);
> - for (addr = start; addr < end; addr += 1 << (PAGE_SHIFT - 12))
> - __tlbi(vaale1is, addr);
> + __flush_tlb_range_op(vaale1is, start, pages, stride, 0,
> + TLBI_TTL_UNKNOWN, false, lpa2_is_enabled());
> dsb(ish);
> isb();
> }
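One nice property worth noting: the interface is unchanged, so existing
callers are untouched, e.g.:

	/* after modifying kernel page tables for [start, end) */
	flush_tlb_kernel_range(start, end);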