[PATCH 1/4] hugetlb: skip to end of PT page mapping when pte not present
Muchun Song
songmuchun at bytedance.com
Fri Jun 17 01:13:28 PDT 2022
On Thu, Jun 16, 2022 at 02:05:15PM -0700, Mike Kravetz wrote:
> HugeTLB address ranges are linearly scanned during fork, unmap and
> remap operations. If a non-present entry is encountered, the code
> currently continues to the next huge page aligned address. However,
> a non-present entry implies that the page table page for that entry
> is not present. Therefore, the linear scan can skip to the end of
> range mapped by the page table page. This can speed up operations
> on large, sparsely populated hugetlb mappings.
>
> Create a new routine hugetlb_mask_last_page() that will return an
> address mask. When the mask is ORed with an address, the result
> will be the address of the last huge page mapped by the associated
> page table page. Use this mask to update addresses in routines which
> linearly scan hugetlb address ranges when a non-present pte is
> encountered.
>
> hugetlb_mask_last_page is tied to the implementation of
> huge_pte_offset, as it is called when huge_pte_offset
> returns NULL. This patch only provides a complete hugetlb_mask_last_page
> implementation when CONFIG_ARCH_WANT_GENERAL_HUGETLB is defined.
> Architectures which provide their own versions of huge_pte_offset can also
> provide their own version of hugetlb_mask_last_page.
>
> Signed-off-by: Mike Kravetz <mike.kravetz at oracle.com>
> Tested-by: Baolin Wang <baolin.wang at linux.alibaba.com>
> Reviewed-by: Baolin Wang <baolin.wang at linux.alibaba.com>
It'll be more efficient. Thanks.
Acked-by: Muchun Song <songmuchun at bytedance.com>