[RFC PATCH v1 1/4] mm: Introduce ptep_get_lockless_norecency()

David Hildenbrand david at redhat.com
Wed Mar 27 10:02:29 PDT 2024


>>>>
>>>>> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
>>>>> index 68283e54c899..41dc44eb8454 100644
>>>>> --- a/mm/hugetlb.c
>>>>> +++ b/mm/hugetlb.c
>>>>> @@ -7517,7 +7517,7 @@ pte_t *huge_pte_alloc(struct mm_struct *mm, struct vm_area_struct *vma,
>>>>>         }
>>>>>
>>>>>         if (pte) {
>>>>> -        pte_t pteval = ptep_get_lockless(pte);
>>>>> +        pte_t pteval = ptep_get_lockless_norecency(pte);
>>>>>
>>>>>             BUG_ON(pte_present(pteval) && !pte_huge(pteval));
>>>>>         }
>>>>> diff --git a/mm/khugepaged.c b/mm/khugepaged.c
>>>>> index 2771fc043b3b..1a6c9ed8237a 100644
>>>>> --- a/mm/khugepaged.c
>>>>> +++ b/mm/khugepaged.c
>>>>> @@ -1019,7 +1019,7 @@ static int __collapse_huge_page_swapin(struct mm_struct *mm,
>>>>>                 }
>>>>>             }
>>>>>
>>>>> -        vmf.orig_pte = ptep_get_lockless(pte);
>>>>> +        vmf.orig_pte = ptep_get_lockless_norecency(pte);
>>>>>             if (!is_swap_pte(vmf.orig_pte))
>>>>>                 continue;
>>>>
>>>>
>>>> Hm, I think you mentioned that we want to be careful with vmf.orig_pte.
>>>
>>> Yeah good point. So I guess this should move to patch 3 (which may be dropped -
>>> tbd)?
>>>
>>
>> Yes. Or a separate one where you explain in detail why do_swap_page() can handle
>> it just fine.
> 
> Ahh no wait - I remember now; the reason I believe this is a "trivial" case is
> that we only leak vmf.orig_pte to the rest of the world if it's a swap entry.

Ugh, yes!
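
For anyone following along in the archive, here is a minimal userspace sketch
of that argument. Everything in it (toy_pte_t, the bit layout, the toy_*
helpers) is invented purely for illustration and is not the kernel's real
pte_t, is_swap_pte() or ptep_get_lockless_norecency(); it only models the
shape of the reasoning: the loop throws away anything that is not a swap
entry, so a possibly stale accessed/dirty view from the norecency read never
reaches do_swap_page().

/*
 * Toy model only: toy_pte_t, the bit layout and all toy_* helpers are made
 * up for illustration; they are not the kernel's pte_t, is_swap_pte() or
 * ptep_get_lockless_norecency().
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

typedef uint64_t toy_pte_t;

#define TOY_PRESENT	(1ull << 0)	/* pretend "present" bit  */
#define TOY_ACCESSED	(1ull << 1)	/* pretend "accessed" bit */
#define TOY_DIRTY	(1ull << 2)	/* pretend "dirty" bit    */

/* In this model, a swap pte is any non-zero, non-present value. */
static bool toy_is_swap_pte(toy_pte_t pte)
{
	return pte && !(pte & TOY_PRESENT);
}

/* Toy "norecency" read: the accessed/dirty bits are simply dropped. */
static toy_pte_t toy_get_lockless_norecency(const toy_pte_t *ptep)
{
	return *ptep & ~(TOY_ACCESSED | TOY_DIRTY);
}

int main(void)
{
	toy_pte_t table[] = {
		0,					/* pte_none() case */
		TOY_PRESENT | TOY_ACCESSED | TOY_DIRTY,	/* mapped page     */
		42ull << 3,				/* fake swap entry */
	};

	for (int i = 0; i < 3; i++) {
		toy_pte_t orig_pte = toy_get_lockless_norecency(&table[i]);

		/*
		 * Mirrors "if (!is_swap_pte(vmf.orig_pte)) continue;":
		 * anything that is not a swap entry is dropped right here,
		 * so the (possibly stale) accessed/dirty bits never leave
		 * the loop.
		 */
		if (!toy_is_swap_pte(orig_pte))
			continue;

		printf("slot %d: swap entry 0x%llx is handed on\n",
		       i, (unsigned long long)orig_pte);
	}
	return 0;
}

Same shape as the hunk quoted above: the only value that escapes
__collapse_huge_page_swapin() via vmf.orig_pte is a swap entry, and for a
swap entry there is no access/dirty information to lose in the first place.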

-- 
Cheers,

David / dhildenb
