[PATCH v3 1/2] mm: Abstract THP allocation
David Hildenbrand
david at redhat.com
Wed Sep 11 05:35:25 PDT 2024
[...]
>>
>>> +
>>> +static void map_pmd_thp(struct folio *folio, struct vm_fault *vmf,
>>> +                        struct vm_area_struct *vma, unsigned long haddr)
>>> +{
>>> +        pmd_t entry;
>>> +
>>> +        entry = mk_huge_pmd(&folio->page, vma->vm_page_prot);
>>> +        entry = maybe_pmd_mkwrite(pmd_mkdirty(entry), vma);
>>> +        folio_add_new_anon_rmap(folio, vma, haddr, RMAP_EXCLUSIVE);
>>> +        folio_add_lru_vma(folio, vma);
>>> +        set_pmd_at(vma->vm_mm, haddr, vmf->pmd, entry);
>>> +        update_mmu_cache_pmd(vma, vmf->address, vmf->pmd);
>>
>> It's quite weird to see a mixture of haddr and vmf->address, and
>> likely this mixture is wrong or not required.
>>
>> Looking at arc's update_mmu_cache_pmd() implementation, I cannot see
>> how passing in the unaligned address would do the right thing. But
>> maybe arc also doesn't trigger that code path ... who knows :)
>>
>>
>> Staring at some other update_mmu_cache_pmd() users, it's quite
>> inconsistent. Primarily only do_huge_pmd_numa_page() and
>> __do_huge_pmd_anonymous_page() use the unaligned address. The others
>> seem to use the aligned address ... as one would expect when modifying
>> a PMD.
>>
>>
>> I suggest to change this function to *not* pass in the vmf, and rename
>> it to something like:
>>
>> static void folio_map_anon_pmd(struct folio *folio,
>>                 struct vm_area_struct *vma, pmd_t *pmd, unsigned long haddr)
>>
>> Then use haddr also to do the update_mmu_cache_pmd().
>
> The code I changed was already passing vmf->address to
> update_mmu_cache_pmd(). I did not change any of the haddr and
> vmf->address semantics, so I really can't comment on this.
I'm not saying that you changed it. I'm saying: "we touch it, we fix it" :).
We could do that in a separate cleanup patch upfront.
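To make that concrete, here is a rough (completely untested) sketch: it is just
the body from above with vmf dropped and haddr used consistently, including for
update_mmu_cache_pmd(). The name is only a suggestion, and the caller would
hand in vmf->pmd and the aligned haddr:

static void folio_map_anon_pmd(struct folio *folio, struct vm_area_struct *vma,
                               pmd_t *pmd, unsigned long haddr)
{
        pmd_t entry;

        entry = mk_huge_pmd(&folio->page, vma->vm_page_prot);
        entry = maybe_pmd_mkwrite(pmd_mkdirty(entry), vma);
        folio_add_new_anon_rmap(folio, vma, haddr, RMAP_EXCLUSIVE);
        folio_add_lru_vma(folio, vma);
        set_pmd_at(vma->vm_mm, haddr, pmd, entry);
        /* Use the PMD-aligned address here as well. */
        update_mmu_cache_pmd(vma, haddr, pmd);
        add_mm_counter(vma->vm_mm, MM_ANONPAGES, HPAGE_PMD_NR);
        mm_inc_nr_ptes(vma->vm_mm);
}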
> I agree with the name change; vmf will be required for
> set_pmd_at(vmf->pmd), so should I just pass pmd?
>>
>>> +        add_mm_counter(vma->vm_mm, MM_ANONPAGES, HPAGE_PMD_NR);
>>> +        mm_inc_nr_ptes(vma->vm_mm);
>>> +}
>>> +
>>> +static vm_fault_t __do_huge_pmd_anonymous_page(struct vm_fault *vmf)
>>> +{
>>> +        struct vm_area_struct *vma = vmf->vma;
>>> +        struct folio *folio;
>>> +        pgtable_t pgtable;
>>> +        unsigned long haddr = vmf->address & HPAGE_PMD_MASK;
>>> +        vm_fault_t ret = 0;
>>> +        gfp_t gfp = vma_thp_gfp_mask(vma);
>>
>> Nit: While at it, try to use reverse christmas-tree ordering where
>> possible; it makes things more readable. You could make haddr const.
>>
>> struct vm_area_struct *vma = vmf->vma;
>> unsigned long haddr = vmf->address & HPAGE_PMD_MASK;
>> gfp_t gfp = vma_thp_gfp_mask(vma);
>> struct folio *folio;
>> vm_fault_t ret = 0;
>
> Okay.
>> ...
>>
>>> +
>>> +        folio = pmd_thp_fault_alloc(gfp, vma, haddr, vmf->address);
>>> +        if (unlikely(!folio)) {
>>> +                ret = VM_FAULT_FALLBACK;
>>> +                goto release;
>>> +        }
>>> +
>>> +        pgtable = pte_alloc_one(vma->vm_mm);
>>> +        if (unlikely(!pgtable)) {
>>> +                ret = VM_FAULT_OOM;
>>> +                goto release;
>>> +        }
>>>          vmf->ptl = pmd_lock(vma->vm_mm, vmf->pmd);
>>> +
>>
>> Nit: Unrelated change.
>
> Are you asking me to align this line with the line below?
No, just pointing out that you're adding a newline in a code section you
don't really touch. Sometimes that's deliberate, sometimes not (e.g.,
going back and forth while reworking the code and accidentally leaving
it in). We try to avoid such unrelated changes because they add noise to
the patches.
If it was deliberate, feel free to leave it in.
--
Cheers,
David / dhildenb