[PATCH v4 1/1] mm/rmap: fix potential out-of-bounds page table access during batched unmap

Lance Yang lance.yang at linux.dev
Mon Jul 7 02:13:24 PDT 2025
On 2025/7/7 13:40, Harry Yoo wrote:
> On Tue, Jul 01, 2025 at 10:31:00PM +0800, Lance Yang wrote:
>> From: Lance Yang <lance.yang at linux.dev>
>>
>> As pointed out by David[1], the batched unmap logic in try_to_unmap_one()
>> may read past the end of a PTE table when a large folio's PTE mappings
>> are not fully contained within a single page table.
>>
>> While this scenario might be rare, an issue triggerable from userspace must
>> be fixed regardless of its likelihood. This patch fixes the out-of-bounds
>> access by refactoring the logic into a new helper, folio_unmap_pte_batch().
>>
>> The new helper correctly calculates the safe batch size by capping the scan
>> at both the VMA and PMD boundaries. To simplify the code, it also supports
>> partial batching (i.e., any number of pages from 1 up to the calculated
>> safe maximum), as there is no strong reason to special-case for fully
>> mapped folios.
>>
>> [1] https://lore.kernel.org/linux-mm/a694398c-9f03-4737-81b9-7e49c857fcbe@redhat.com
>>
>> Cc: <stable at vger.kernel.org>
>> Reported-by: David Hildenbrand <david at redhat.com>
>> Closes: https://lore.kernel.org/linux-mm/a694398c-9f03-4737-81b9-7e49c857fcbe@redhat.com
>> Fixes: 354dffd29575 ("mm: support batched unmap for lazyfree large folios during reclamation")
>> Suggested-by: Barry Song <baohua at kernel.org>
>> Acked-by: Barry Song <baohua at kernel.org>
>> Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes at oracle.com>
>> Acked-by: David Hildenbrand <david at redhat.com>
>> Signed-off-by: Lance Yang <lance.yang at linux.dev>
>> ---
> 
> LGTM,
> Reviewed-by: Harry Yoo <harry.yoo at oracle.com>

Hi Harry,

Thanks for taking the time to review!

> 
> With a minor comment below.
> 
>> diff --git a/mm/rmap.c b/mm/rmap.c
>> index fb63d9256f09..1320b88fab74 100644
>> --- a/mm/rmap.c
>> +++ b/mm/rmap.c
>> @@ -2206,13 +2213,16 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
>>   			hugetlb_remove_rmap(folio);
>>   		} else {
>>   			folio_remove_rmap_ptes(folio, subpage, nr_pages, vma);
>> -			folio_ref_sub(folio, nr_pages - 1);
>>   		}
>>   		if (vma->vm_flags & VM_LOCKED)
>>   			mlock_drain_local();
>> -		folio_put(folio);
>> -		/* We have already batched the entire folio */
>> -		if (nr_pages > 1)
>> +		folio_put_refs(folio, nr_pages);
>> +
>> +		/*
>> +		 * If we are sure that we batched the entire folio and cleared
>> +		 * all PTEs, we can just optimize and stop right here.
>> +		 */
>> +		if (nr_pages == folio_nr_pages(folio))
>>   			goto walk_done;
> 
> Just a minor comment.
> 
> We should probably teach page_vma_mapped_walk() to skip nr_pages pages,
> or just rely on next_pte: do { ... } while (pte_none(ptep_get(pvmw->pte)))
> loop in page_vma_mapped_walk() to skip those ptes?

Good point. For partially-mapped folios, we rely on the "next_pte" loop to
skip the PTEs that were already cleared; the common case we expect to handle
is fully-mapped folios.
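
For reference, the core of the fix is simply capping the batch before calling
folio_pte_batch(), so the scan never crosses the VMA end or the current PTE
table. A simplified sketch (not the exact hunk from this patch):

	unsigned long addr = pvmw->address;
	/* pmd_addr_end() picks whichever is closer: next PMD boundary or vm_end */
	unsigned long end_addr = pmd_addr_end(addr, vma->vm_end);
	unsigned int max_nr = (end_addr - addr) >> PAGE_SHIFT;

	/* folio_pte_batch() is then allowed to batch at most max_nr PTEs */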

> 
> Taking different paths depending on (nr_pages == folio_nr_pages(folio))
> doesn't seem sensible.

Adding more logic to page_vma_mapped_walk() for the rare partial-folio
case seems like an over-optimization that would complicate the walker.

So, I'd prefer to keep it as is for now ;)
