[PATCH v1 01/11] arm/pgtable: define PFN_PTE_SHIFT on arm and arm64

Ryan Roberts ryan.roberts at arm.com
Tue Jan 23 03:38:30 PST 2024


On 23/01/2024 11:31, David Hildenbrand wrote:
>>>
>>>> If high bits are used for
>>>> something else, then we might produce a garbage PTE on overflow, but I
>>>> concluded that shouldn't really matter for folio_pte_batch() purposes:
>>>> we'd not detect "belongs to this folio batch" either way.
>>>
>>> Exactly.
>>>
>>>>
>>>> Maybe it's cleaner to also have a custom pte_next_pfn() on ppc; I just hope
>>>> that we don't lose any other arbitrary PTE bits by doing the pte_pgprot().
>>>
>>> I don't see the need for ppc to implement pte_next_pfn().
>>
>> Agreed.
> 
> So likely we should then do on top for powerpc (whitespace damage):
> 
> diff --git a/arch/powerpc/mm/pgtable.c b/arch/powerpc/mm/pgtable.c
> index a04ae4449a025..549a440ed7f65 100644
> --- a/arch/powerpc/mm/pgtable.c
> +++ b/arch/powerpc/mm/pgtable.c
> @@ -220,10 +220,7 @@ void set_ptes(struct mm_struct *mm, unsigned long addr, pte_t *ptep,
>                         break;
>                 ptep++;
>                 addr += PAGE_SIZE;
> -               /*
> -                * increment the pfn.
> -                */
> -               pte = pfn_pte(pte_pfn(pte) + 1, pte_pgprot((pte)));
> +               pte = pte_next_pfn(pte);
>         }
>  }
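
For reference, the generic fallback that change would pick up looks roughly
like the below (my paraphrase of the helper added earlier in the series,
assuming powerpc also gains a PFN_PTE_SHIFT definition so that the common
version applies):

  #ifndef pte_next_pfn
  static inline pte_t pte_next_pfn(pte_t pte)
  {
          /* Advance the raw PTE value by one PFN; assumes the PFN occupies a
           * contiguous field starting at PFN_PTE_SHIFT. */
          return __pte(pte_val(pte) + (1UL << PFN_PTE_SHIFT));
  }
  #endif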

Looks like commit 47b8def9358c ("powerpc/mm: Avoid calling
arch_enter/leave_lazy_mmu() in set_ptes") switched from the simple increment
to this more complex approach, but the commit log doesn't say why.
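
If it turns out that the pfn_pte()/pte_pgprot() round-trip really is needed on
powerpc (e.g. because the raw increment in the generic helper could lose
non-PFN bits, as discussed above), a ppc-specific override along these lines
would preserve today's behaviour. This is purely a hypothetical sketch to
illustrate the escape hatch, not something I'm proposing:

  /* Hypothetical powerpc override; only worth adding if the raw increment in
   * the generic pte_next_pfn() is shown to lose non-PFN bits. */
  #define pte_next_pfn pte_next_pfn
  static inline pte_t pte_next_pfn(pte_t pte)
  {
          /* Rebuild the PTE from pfn + pgprot, as set_ptes() does today. */
          return pfn_pte(pte_pfn(pte) + 1, pte_pgprot(pte));
  }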
