[PATCH v2 2/2] arm64: hugetlb: enable __HAVE_ARCH_FLUSH_HUGETLB_TLB_RANGE
Kefeng Wang
wangkefeng.wang at huawei.com
Mon Jul 31 19:31:45 PDT 2023
It is better to use the huge page size instead of PAGE_SIZE
as the stride when flushing hugepage mappings: the larger
stride reduces the number of loop iterations in
__flush_tlb_range().
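__flush_tlb_range() walks the range one stride at a time, so a
larger stride means fewer TLBI operations. As a simplified sketch
of the loop (the real code prepares the address with
__TLBI_VADDR() and may use range-based TLBI instead):

	/* simplified sketch of the per-stride loop, not the actual source */
	for (addr = start; addr < end; addr += stride)
		__tlbi(vae1is, __TLBI_VADDR(addr, asid));

With 4K pages, flushing a 2M hugepage with a 2M stride issues one
invalidation instead of 512 page-sized ones.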
Let's provide an arm64 implementation of flush_hugetlb_tlb_range(),
which is used by hugetlb_unshare_all_pmds(),
move_hugetlb_page_tables() and hugetlb_change_protection() for now.
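Without an arch-specific helper, those callers take the generic
fallback in mm/hugetlb.c, which is roughly:

	/* fallback when the arch provides no helper */
	#ifndef __HAVE_ARCH_FLUSH_HUGETLB_TLB_RANGE
	#define flush_hugetlb_tlb_range(vma, addr, end)	flush_tlb_range(vma, addr, end)
	#endif

and on arm64 flush_tlb_range() ends up in __flush_tlb_range()
with a PAGE_SIZE stride.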
Note that hugepages based on the contiguous bit have to be
invalidated individually, since the contiguous PTE bit is just
a hint and the hardware may or may not take it into account.
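For a 4K granule this means, roughly:

	huge_page_size()	stride used
	2M  (PMD)		2M
	1G  (PUD)		1G
	64K (CONT_PTE)		PAGE_SIZE
	32M (CONT_PMD)		PAGE_SIZE

i.e. only the block-mapped sizes get the large stride; the
contiguous-bit sizes keep the conservative PAGE_SIZE stride.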
Signed-off-by: Kefeng Wang <wangkefeng.wang at huawei.com>
---
arch/arm64/include/asm/hugetlb.h | 12 ++++++++++++
1 file changed, 12 insertions(+)
diff --git a/arch/arm64/include/asm/hugetlb.h b/arch/arm64/include/asm/hugetlb.h
index 6a4a1ab8eb23..e5c2e3dd9cf0 100644
--- a/arch/arm64/include/asm/hugetlb.h
+++ b/arch/arm64/include/asm/hugetlb.h
@@ -60,4 +60,16 @@ extern void huge_ptep_modify_prot_commit(struct vm_area_struct *vma,
#include <asm-generic/hugetlb.h>
+#define __HAVE_ARCH_FLUSH_HUGETLB_TLB_RANGE
+static inline void flush_hugetlb_tlb_range(struct vm_area_struct *vma,
+ unsigned long start,
+ unsigned long end)
+{
+ unsigned long stride = huge_page_size(hstate_vma(vma));
+
+ if (stride != PMD_SIZE && stride != PUD_SIZE)
+ stride = PAGE_SIZE;
+ __flush_tlb_range(vma, start, end, stride, false, 0);
+}
+
#endif /* __ASM_HUGETLB_H */
--
2.41.0