[PATCH v3 14/25] huge_memory: Allow mappings of PUD sized pages
David Hildenbrand
david at redhat.com
Sat Dec 14 07:32:00 PST 2024
On 22.11.24 02:40, Alistair Popple wrote:
> Currently DAX folio/page reference counts are managed differently
> from those of normal pages. To allow them to be managed the same way,
> introduce vmf_insert_folio_pud(). This will map the entire PUD-sized
> folio and take references as it would for a normally mapped page.
>
> This is distinct from the current mechanism, vmf_insert_pfn_pud, which
> simply inserts a special devmap PUD entry into the page table without
> holding a reference to the page for the mapping.
>
> Signed-off-by: Alistair Popple <apopple at nvidia.com>
> ---
Hi,
The subject of this patch (and especially of the next one) is
misleading. You likely meant something like:
"mm/huge_memory: add vmf_insert_folio_pud() for mapping PUD sized pages"
> @@ -1523,6 +1531,26 @@ void folio_add_file_rmap_pmd(struct folio *folio, struct page *page,
> #endif
> }
>
> +/**
> + * folio_add_file_rmap_pud - add a PUD mapping to a page range of a folio
> + * @folio: The folio to add the mapping to
> + * @page: The first page to add
> + * @vma: The vm area in which the mapping is added
> + *
> + * The page range of the folio is defined by [page, page + HPAGE_PUD_NR)
> + *
> + * The caller needs to hold the page table lock.
> + */
> +void folio_add_file_rmap_pud(struct folio *folio, struct page *page,
> + struct vm_area_struct *vma)
> +{
> +#ifdef CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD
> + __folio_add_file_rmap(folio, page, HPAGE_PUD_NR, vma, RMAP_LEVEL_PUD);
> +#else
> + WARN_ON_ONCE(true);
> +#endif
> +}
> +
> static __always_inline void __folio_remove_rmap(struct folio *folio,
> struct page *page, int nr_pages, struct vm_area_struct *vma,
> enum rmap_level level)
> @@ -1552,6 +1580,7 @@ static __always_inline void __folio_remove_rmap(struct folio *folio,
> partially_mapped = nr && atomic_read(mapped);
> break;
> case RMAP_LEVEL_PMD:
> + case RMAP_LEVEL_PUD:
> atomic_dec(&folio->_large_mapcount);
> last = atomic_add_negative(-1, &folio->_entire_mapcount);
> if (last) {
If you simply reuse that code (here and on the adding path), you will
end up effectively setting nr_pmdmapped to the full number of pages of
a PUD-sized folio (a very large value) and passing that into
__folio_mod_stat(). There, we will adjust
NR_SHMEM_PMDMAPPED/NR_FILE_PMDMAPPED, which is wrong (it's PUD
mapped ;) ).
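Something like the following (completely untested, and leaving aside
whether we'd want separate NR_*_PUDMAPPED counters) would at least
avoid the misaccounting; "nr_pages" is a new local, and I dropped the
raced-remap handling for brevity:

	case RMAP_LEVEL_PMD:
	case RMAP_LEVEL_PUD:
		atomic_dec(&folio->_large_mapcount);
		last = atomic_add_negative(-1, &folio->_entire_mapcount);
		if (last) {
			nr = atomic_sub_return_relaxed(ENTIRELY_MAPPED, mapped);
			if (likely(nr < ENTIRELY_MAPPED)) {
				nr_pages = folio_nr_pages(folio);
				/* Only PMD mappings feed NR_*_PMDMAPPED. */
				if (level == RMAP_LEVEL_PMD)
					nr_pmdmapped = nr_pages;
				nr = nr_pages - (nr & FOLIO_PAGES_MAPPED);
			}
		}
		break;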
It's probably best to split out the rmap changes from the other things
in this patch.
--
Cheers,
David / dhildenb