[PATCH v8 04/10] mm: thp: Support allocation of anonymous multi-size THP

David Hildenbrand david at redhat.com
Tue Dec 5 08:32:30 PST 2023


On 04.12.23 11:20, Ryan Roberts wrote:
> Introduce the logic to allow THP to be configured (through the new sysfs
> interface we just added) to allocate large folios to back anonymous
> memory, which are larger than the base page size but smaller than
> PMD-size. We call this new THP extension "multi-size THP" (mTHP).
> 
> mTHP continues to be PTE-mapped, but in many cases can still provide
> similar benefits to traditional PMD-sized THP: Page faults are
> significantly reduced (by a factor of e.g. 4, 8, 16, etc. depending on
> the configured order), but latency spikes are much less prominent
> because the size of each page isn't as huge as the PMD-sized variant and
> there is less memory to clear in each page fault. The number of per-page
> operations (e.g. ref counting, rmap management, lru list management) are
> also significantly reduced since those ops now become per-folio.
> 
> Some architectures also employ TLB compression mechanisms to squeeze
> more entries in when a set of PTEs are virtually and physically
> contiguous and appropriately aligned. In this case, TLB misses will
> occur less often.
> 
> The new behaviour is disabled by default, but can be enabled at runtime
> by writing to /sys/kernel/mm/transparent_hugepage/hugepage-XXkb/enabled
> (see documentation in previous commit). The long term aim is to change
> the default to include suitable lower orders, but there are some risks
> around internal fragmentation that need to be better understood first.
> 
> Signed-off-by: Ryan Roberts <ryan.roberts at arm.com>

In general, looks good to me, some comments/nits. And the usual "let's 
make sure we don't degrade order-0 and keep that as fast as possible" 
comment.

> ---
>   include/linux/huge_mm.h |   6 ++-
>   mm/memory.c             | 106 ++++++++++++++++++++++++++++++++++++----
>   2 files changed, 101 insertions(+), 11 deletions(-)
> 
> diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
> index bd0eadd3befb..91a53b9835a4 100644
> --- a/include/linux/huge_mm.h
> +++ b/include/linux/huge_mm.h
> @@ -68,9 +68,11 @@ extern struct kobj_attribute shmem_enabled_attr;
>   #define HPAGE_PMD_NR (1<<HPAGE_PMD_ORDER)
>   
>   /*
> - * Mask of all large folio orders supported for anonymous THP.
> + * Mask of all large folio orders supported for anonymous THP; all orders up to
> + * and including PMD_ORDER, except order-0 (which is not "huge") and order-1
> + * (which is a limitation of the THP implementation).
>    */
> -#define THP_ORDERS_ALL_ANON	BIT(PMD_ORDER)
> +#define THP_ORDERS_ALL_ANON	((BIT(PMD_ORDER + 1) - 1) & ~(BIT(0) | BIT(1)))
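
Worked example for readers: with 4K base pages (PMD_ORDER == 9) this 
expands to (BIT(10) - 1) & ~0x3 == 0x3fc, i.e. orders 2 through 9 
inclusive.
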
>   
>   /*
>    * Mask of all large folio orders supported for file THP.
> diff --git a/mm/memory.c b/mm/memory.c
> index 3ceeb0f45bf5..bf7e93813018 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -4125,6 +4125,84 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
>   	return ret;
>   }
>   
> +static bool pte_range_none(pte_t *pte, int nr_pages)
> +{
> +	int i;
> +
> +	for (i = 0; i < nr_pages; i++) {
> +		if (!pte_none(ptep_get_lockless(pte + i)))
> +			return false;
> +	}
> +
> +	return true;
> +}
> +
> +#ifdef CONFIG_TRANSPARENT_HUGEPAGE
> +static struct folio *alloc_anon_folio(struct vm_fault *vmf)
> +{
> +	gfp_t gfp;
> +	pte_t *pte;
> +	unsigned long addr;
> +	struct folio *folio;
> +	struct vm_area_struct *vma = vmf->vma;
> +	unsigned long orders;
> +	int order;

Nit: reverse Christmas tree encouraged ;)

> +
> +	/*
> +	 * If uffd is active for the vma we need per-page fault fidelity to
> +	 * maintain the uffd semantics.
> +	 */
> +	if (userfaultfd_armed(vma))

Nit: unlikely()

> +		goto fallback;
> +
> +	/*
> +	 * Get a list of all the (large) orders below PMD_ORDER that are enabled
> +	 * for this vma. Then filter out the orders that can't be allocated over
> +	 * the faulting address and still be fully contained in the vma.
> +	 */
> +	orders = thp_vma_allowable_orders(vma, vma->vm_flags, false, true, true,
> +					  BIT(PMD_ORDER) - 1);
> +	orders = thp_vma_suitable_orders(vma, vmf->address, orders);

Comment: Both will eventually loop over all orders, correct? That 
could be sped up in the future.

Nit: the orders = ... order = ... sequence looks like it deserves a 
helper function that would make this easier to read.

Nit: Why call thp_vma_suitable_orders() at all if orders is already 
0? Again, some helper might be reasonable where that is handled 
internally.
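
Something like this (completely untested, just reusing the calls from 
this patch) is what I have in mind; it also avoids calling 
thp_vma_suitable_orders() when the mask is already 0:

static unsigned long anon_fault_orders(struct vm_fault *vmf)
{
        struct vm_area_struct *vma = vmf->vma;
        unsigned long orders;

        /* All enabled anon orders below PMD_ORDER for this VMA ... */
        orders = thp_vma_allowable_orders(vma, vma->vm_flags, false, true,
                                          true, BIT(PMD_ORDER) - 1);
        if (!orders)
                return 0;

        /* ... filtered to those that fit around the faulting address. */
        return thp_vma_suitable_orders(vma, vmf->address, orders);
}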

Comment: For order-0 we'll always perform a function call to both 
thp_vma_allowable_orders() / thp_vma_suitable_orders(). We should 
perform some fast and efficient check for whether any <PMD THPs are 
even enabled in the system / for this VMA, and if none are, just fall 
back before doing the more expensive checks.
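
As a rough idea (the names here are invented, they are not in this 
patch): the hugepage-XXkb/enabled store handlers could maintain a 
global mask of enabled orders that we can test before doing any 
per-VMA work:

/* Hypothetical: kept up to date by the sysfs store handlers. */
extern unsigned long huge_anon_orders_enabled;

static inline bool anon_mthp_maybe_enabled(void)
{
        /* Is any order below PMD_ORDER enabled at all? */
        return READ_ONCE(huge_anon_orders_enabled) & (BIT(PMD_ORDER) - 1);
}

If that returns false we can fall back to order-0 immediately.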

> +
> +	if (!orders)
> +		goto fallback;
> +
> +	pte = pte_offset_map(vmf->pmd, vmf->address & PMD_MASK);
> +	if (!pte)
> +		return ERR_PTR(-EAGAIN);
> +
> +	order = first_order(orders);
> +	while (orders) {
> +		addr = ALIGN_DOWN(vmf->address, PAGE_SIZE << order);
> +		vmf->pte = pte + pte_index(addr);
> +		if (pte_range_none(vmf->pte, 1 << order))
> +			break;

Comment: Likely it would make sense to scan only once and determine 
the "largest none range" around that address, keeping the largest 
suitable order in mind.
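
Completely untested sketch of what I mean, assuming pte points at the 
first PTE of the naturally-aligned window of the largest candidate 
order, and using first_order()/next_order() from this series:

static int largest_none_order(pte_t *pte, unsigned long addr,
                              unsigned long orders)
{
        int max_order = first_order(orders);
        int nr = 1 << max_order;
        int fault_idx = (addr & ((PAGE_SIZE << max_order) - 1)) >> PAGE_SHIFT;
        int lo, hi, order;

        /*
         * Grow the pte_none() run outwards from the faulting PTE
         * (which is known to be none, we are in do_anonymous_page()).
         */
        for (lo = fault_idx; lo > 0; lo--)
                if (!pte_none(ptep_get_lockless(pte + lo - 1)))
                        break;
        for (hi = fault_idx + 1; hi < nr; hi++)
                if (!pte_none(ptep_get_lockless(pte + hi)))
                        break;

        /* Largest enabled order whose aligned block fits in [lo, hi). */
        order = max_order;
        while (orders) {
                int first = round_down(fault_idx, 1 << order);

                if (first >= lo && first + (1 << order) <= hi)
                        return order;
                order = next_order(&orders, order);
        }
        return -1;
}

That scans each PTE at most once instead of once per order.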

> +		order = next_order(&orders, order);
> +	}
> +
> +	vmf->pte = NULL;

Nit: Can you elaborate on why you are messing with vmf->pte here? A 
simple helper variable would make this code look less magical. Unless 
I am missing something important :)
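
I.e. something like (untested):

        pte_t *first_pte;

        order = first_order(orders);
        while (orders) {
                addr = ALIGN_DOWN(vmf->address, PAGE_SIZE << order);
                first_pte = pte + pte_index(addr);
                if (pte_range_none(first_pte, 1 << order))
                        break;
                order = next_order(&orders, order);
        }

        pte_unmap(pte);

and the vmf->pte = NULL reset goes away entirely.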

> +	pte_unmap(pte);
> +
> +	gfp = vma_thp_gfp_mask(vma);
> +
> +	while (orders) {
> +		addr = ALIGN_DOWN(vmf->address, PAGE_SIZE << order);
> +		folio = vma_alloc_folio(gfp, order, vma, addr, true);
> +		if (folio) {
> +			clear_huge_page(&folio->page, addr, 1 << order);
> +			return folio;
> +		}
> +		order = next_order(&orders, order);
> +	}
> +

Question: would it make sense to combine both loops? I suspect memory 
allocations under pte_offset_map()/kmap are problematic.

> +fallback:
> +	return vma_alloc_zeroed_movable_folio(vma, vmf->address);
> +}
> +#else
> +#define alloc_anon_folio(vmf) \
> +		vma_alloc_zeroed_movable_folio((vmf)->vma, (vmf)->address)
> +#endif
> +
>   /*
>    * We enter with non-exclusive mmap_lock (to exclude vma changes,
>    * but allow concurrent faults), and pte mapped but not yet locked.
> @@ -4132,6 +4210,9 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
>    */
>   static vm_fault_t do_anonymous_page(struct vm_fault *vmf)
>   {
> +	int i;
> +	int nr_pages = 1;
> +	unsigned long addr = vmf->address;
>   	bool uffd_wp = vmf_orig_pte_uffd_wp(vmf);
>   	struct vm_area_struct *vma = vmf->vma;
>   	struct folio *folio;

Nit: reverse Christmas tree :)

> @@ -4176,10 +4257,15 @@ static vm_fault_t do_anonymous_page(struct vm_fault *vmf)
>   	/* Allocate our own private page. */
>   	if (unlikely(anon_vma_prepare(vma)))
>   		goto oom;
> -	folio = vma_alloc_zeroed_movable_folio(vma, vmf->address);
> +	folio = alloc_anon_folio(vmf);
> +	if (IS_ERR(folio))
> +		return 0;
>   	if (!folio)
>   		goto oom;
>   
> +	nr_pages = folio_nr_pages(folio);
> +	addr = ALIGN_DOWN(vmf->address, nr_pages * PAGE_SIZE);
> +
>   	if (mem_cgroup_charge(folio, vma->vm_mm, GFP_KERNEL))
>   		goto oom_free_page;
>   	folio_throttle_swaprate(folio, GFP_KERNEL);
> @@ -4196,12 +4282,13 @@ static vm_fault_t do_anonymous_page(struct vm_fault *vmf)
>   	if (vma->vm_flags & VM_WRITE)
>   		entry = pte_mkwrite(pte_mkdirty(entry), vma);
>   
> -	vmf->pte = pte_offset_map_lock(vma->vm_mm, vmf->pmd, vmf->address,
> -			&vmf->ptl);
> +	vmf->pte = pte_offset_map_lock(vma->vm_mm, vmf->pmd, addr, &vmf->ptl);
>   	if (!vmf->pte)
>   		goto release;
> -	if (vmf_pte_changed(vmf)) {
> -		update_mmu_tlb(vma, vmf->address, vmf->pte);
> +	if ((nr_pages == 1 && vmf_pte_changed(vmf)) ||
> +	    (nr_pages  > 1 && !pte_range_none(vmf->pte, nr_pages))) {
> +		for (i = 0; i < nr_pages; i++)
> +			update_mmu_tlb(vma, addr + PAGE_SIZE * i, vmf->pte + i);

Comment: separating the order-0 case from the large-folio case might 
make this easier to read.
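
E.g. (untested):

        if (nr_pages == 1) {
                if (vmf_pte_changed(vmf)) {
                        update_mmu_tlb(vma, vmf->address, vmf->pte);
                        goto release;
                }
        } else if (!pte_range_none(vmf->pte, nr_pages)) {
                for (i = 0; i < nr_pages; i++)
                        update_mmu_tlb(vma, addr + PAGE_SIZE * i,
                                       vmf->pte + i);
                goto release;
        }

That keeps the order-0 path identical to the old code.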

>   		goto release;
>   	}
>   
> @@ -4216,16 +4303,17 @@ static vm_fault_t do_anonymous_page(struct vm_fault *vmf)
>   		return handle_userfault(vmf, VM_UFFD_MISSING);
>   	}
>   
> -	inc_mm_counter(vma->vm_mm, MM_ANONPAGES);
> -	folio_add_new_anon_rmap(folio, vma, vmf->address);
> +	folio_ref_add(folio, nr_pages - 1);
> +	add_mm_counter(vma->vm_mm, MM_ANONPAGES, nr_pages);
> +	folio_add_new_anon_rmap(folio, vma, addr);
>   	folio_add_lru_vma(folio, vma);
>   setpte:
>   	if (uffd_wp)
>   		entry = pte_mkuffd_wp(entry);
> -	set_pte_at(vma->vm_mm, vmf->address, vmf->pte, entry);
> +	set_ptes(vma->vm_mm, addr, vmf->pte, entry, nr_pages);
>   
>   	/* No need to invalidate - it was non-present before */
> -	update_mmu_cache_range(vmf, vma, vmf->address, vmf->pte, 1);
> +	update_mmu_cache_range(vmf, vma, addr, vmf->pte, nr_pages);
>   unlock:
>   	if (vmf->pte)
>   		pte_unmap_unlock(vmf->pte, vmf->ptl);

Benchmarking order-0 allocations might be interesting. There will be 
some added checks + multiple loops/conditionals for order-0 that could 
be avoided by having two separate code paths. If we can't measure a 
difference, all good.

-- 
Cheers,

David / dhildenb
