[PATCH v2 1/5] mm: Non-pmd-mappable, large folios for folio_add_new_anon_rmap()
Ryan Roberts
ryan.roberts at arm.com
Tue Jul 4 04:19:50 PDT 2023
On 04/07/2023 03:13, Yin, Fengwei wrote:
>
>
> On 7/4/2023 3:05 AM, Yu Zhao wrote:
>> On Mon, Jul 3, 2023 at 7:53 AM Ryan Roberts <ryan.roberts at arm.com> wrote:
>>>
>>> In preparation for FLEXIBLE_THP support, improve
>>> folio_add_new_anon_rmap() to allow a non-pmd-mappable, large folio to be
>>> passed to it. In this case, all contained pages are accounted using the
>>> "small" pages scheme.
>>
>> Nit: In this case, all *subpages* are accounted using the *order-0
>> folio* (or base page) scheme.
> Matthew suggested not using "subpage" with folio; use "page" with folio
> instead:
> https://lore.kernel.org/linux-mm/Y9qiS%2FIxZOMx62t6@casper.infradead.org/
OK, I'll change this to "In this case, all contained pages are accounted using
the *order-0 folio* (or base page) scheme."
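As an aside, since the naming keeps coming up: the three cases end up
touching different counters. A simplified sketch of the accounting below
(not the patch itself - the real code is in the hunk further down;
COMPOUND_MAPPED is the marker bit from mm/internal.h):

	if (!folio_test_large(folio)) {
		/* order-0 (base page): one per-page _mapcount, starting at -1 */
		atomic_set(&folio->_mapcount, 0);
	} else if (!folio_test_pmd_mappable(folio)) {
		/* large but not pmd-mappable: a per-page _mapcount for every
		 * contained page, plus _nr_pages_mapped = nr mapped pages */
	} else {
		/* pmd-mappable: a single _entire_mapcount for the folio, with
		 * _nr_pages_mapped carrying the COMPOUND_MAPPED marker */
	}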
>
>>
>>> Signed-off-by: Ryan Roberts <ryan.roberts at arm.com>
>>
>> Reviewed-by: Yu Zhao <yuzhao at google.com>
Thanks!
>>
>>> mm/rmap.c | 26 +++++++++++++++++++-------
>>> 1 file changed, 19 insertions(+), 7 deletions(-)
>>>
>>> diff --git a/mm/rmap.c b/mm/rmap.c
>>> index 1d8369549424..82ef5ba363d1 100644
>>> --- a/mm/rmap.c
>>> +++ b/mm/rmap.c
>>> @@ -1278,31 +1278,43 @@ void page_add_anon_rmap(struct page *page, struct vm_area_struct *vma,
>>>   * This means the inc-and-test can be bypassed.
>>>   * The folio does not have to be locked.
>>>   *
>>> - * If the folio is large, it is accounted as a THP. As the folio
>>> + * If the folio is pmd-mappable, it is accounted as a THP. As the folio
>>>   * is new, it's assumed to be mapped exclusively by a single process.
>>>   */
>>>  void folio_add_new_anon_rmap(struct folio *folio, struct vm_area_struct *vma,
>>>  		unsigned long address)
>>>  {
>>> -	int nr;
>>> +	int nr = folio_nr_pages(folio);
>>> +	int i;
>>> +	struct page *page;
>>>  
>>> -	VM_BUG_ON_VMA(address < vma->vm_start || address >= vma->vm_end, vma);
>>> +	VM_BUG_ON_VMA(address < vma->vm_start ||
>>> +			address + (nr << PAGE_SHIFT) > vma->vm_end, vma);
>>>  	__folio_set_swapbacked(folio);
>>>  
>>> -	if (likely(!folio_test_pmd_mappable(folio))) {
>>> +	if (!folio_test_large(folio)) {
>>>  		/* increment count (starts at -1) */
>>>  		atomic_set(&folio->_mapcount, 0);
>>> -		nr = 1;
>>> +		__page_set_anon_rmap(folio, &folio->page, vma, address, 1);
>>> +	} else if (!folio_test_pmd_mappable(folio)) {
>>> +		/* increment count (starts at 0) */
>>> +		atomic_set(&folio->_nr_pages_mapped, nr);
>>> +
>>> +		page = &folio->page;
>>> +		for (i = 0; i < nr; i++, page++, address += PAGE_SIZE) {
>>> +			/* increment count (starts at -1) */
>>> +			atomic_set(&page->_mapcount, 0);
>>> +			__page_set_anon_rmap(folio, page, vma, address, 1);
>>> +		}
>>
>> Nit: use folio_page(), e.g.,
Yep, will change for v3.
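For anyone wondering about the nit: folio_page() is defined in the headers
roughly as below (paraphrased from memory), so it remains correct even when
the memmap is not virtually contiguous, whereas bare page++ assumes it is:

	#define folio_page(folio, n)	nth_page(&(folio)->page, n)

	#if defined(CONFIG_SPARSEMEM) && !defined(CONFIG_SPARSEMEM_VMEMMAP)
	#define nth_page(page, n)	pfn_to_page(page_to_pfn((page)) + (n))
	#else
	#define nth_page(page, n)	((page) + (n))
	#endif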
>>
>> 	} else if (!folio_test_pmd_mappable(folio)) {
>> 		int i;
>> 
>> 		for (i = 0; i < nr; i++) {
>> 			struct page *page = folio_page(folio, i);
>> 
>> 			/* increment count (starts at -1) */
>> 			atomic_set(&page->_mapcount, 0);
>> 			__page_set_anon_rmap(folio, page, vma, address + PAGE_SIZE * i, 1);
>> 		}
>> 		/* increment count (starts at 0) */
>> 		atomic_set(&folio->_nr_pages_mapped, nr);
>> 	} else {
>>
>>>  	} else {
>>>  		/* increment count (starts at -1) */
>>>  		atomic_set(&folio->_entire_mapcount, 0);
>>>  		atomic_set(&folio->_nr_pages_mapped, COMPOUND_MAPPED);
>>> -		nr = folio_nr_pages(folio);
>>>  		__lruvec_stat_mod_folio(folio, NR_ANON_THPS, nr);
>>> +		__page_set_anon_rmap(folio, &folio->page, vma, address, 1);
>>>  	}
>>>  
>>>  	__lruvec_stat_mod_folio(folio, NR_ANON_MAPPED, nr);
>>> -	__page_set_anon_rmap(folio, &folio->page, vma, address, 1);
>>>  }
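For completeness, a hypothetical caller shape once FLEXIBLE_THP can allocate
these folios (the function name and gfp choice are illustrative only, not
from this series; PTE setup, folio zeroing and error paths are omitted):

	static int map_new_anon_large_folio(struct vm_area_struct *vma,
					    unsigned long addr, int order)
	{
		struct folio *folio = vma_alloc_folio(GFP_HIGHUSER_MOVABLE,
						      order, vma, addr, true);

		if (!folio)
			return -ENOMEM;

		/* addr must be the first page of the nr-page range */
		folio_add_new_anon_rmap(folio, vma, addr);
		folio_add_lru_vma(folio, vma);
		return 0;
	}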