[PATCH v1 01/11] arm/pgtable: define PFN_PTE_SHIFT on arm and arm64

David Hildenbrand david at redhat.com
Tue Jan 23 02:48:31 PST 2024


On 23.01.24 11:34, Ryan Roberts wrote:
> On 22/01/2024 19:41, David Hildenbrand wrote:
>> We want to make use of pte_next_pfn() outside of set_ptes(). Let's
>> simply define PFN_PTE_SHIFT, required by pte_next_pfn().
>>
>> Signed-off-by: David Hildenbrand <david at redhat.com>
>> ---
>>   arch/arm/include/asm/pgtable.h   | 2 ++
>>   arch/arm64/include/asm/pgtable.h | 2 ++
>>   2 files changed, 4 insertions(+)
>>
>> diff --git a/arch/arm/include/asm/pgtable.h b/arch/arm/include/asm/pgtable.h
>> index d657b84b6bf70..be91e376df79e 100644
>> --- a/arch/arm/include/asm/pgtable.h
>> +++ b/arch/arm/include/asm/pgtable.h
>> @@ -209,6 +209,8 @@ static inline void __sync_icache_dcache(pte_t pteval)
>>   extern void __sync_icache_dcache(pte_t pteval);
>>   #endif
>>   
>> +#define PFN_PTE_SHIFT		PAGE_SHIFT
>> +
>>   void set_ptes(struct mm_struct *mm, unsigned long addr,
>>   		      pte_t *ptep, pte_t pteval, unsigned int nr);
>>   #define set_ptes set_ptes
>> diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
>> index 79ce70fbb751c..d4b3bd96e3304 100644
>> --- a/arch/arm64/include/asm/pgtable.h
>> +++ b/arch/arm64/include/asm/pgtable.h
>> @@ -341,6 +341,8 @@ static inline void __sync_cache_and_tags(pte_t pte, unsigned int nr_pages)
>>   		mte_sync_tags(pte, nr_pages);
>>   }
>>   
>> +#define PFN_PTE_SHIFT		PAGE_SHIFT
> 
> I think this is buggy. And so is the arm64 implementation of set_ptes(). It
> works fine for 48-bit output addresses, but for 52-bit OAs, the high bits are
> not kept contiguously, so if you happen to be setting a mapping for which the
> physical memory block straddles bit 48, this won't work.

Right, as soon as the PFN bits in the PTE are not contiguous, this stops 
working, just like set_ptes() does, which I used as a reference.
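
For reference, the generic fallback this series relies on is 
essentially just a shift of the raw PTE value, so it can only be 
correct while the PFN bits are contiguous:

#ifndef pte_next_pfn
static inline pte_t pte_next_pfn(pte_t pte)
{
	/* Only correct if the PFN occupies contiguous bits in the PTE. */
	return __pte(pte_val(pte) + (1UL << PFN_PTE_SHIFT));
}
#endif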

> 
> Today, only the 64K base page config can support 52 bits, and for this,
> OA[51:48] are stored in PTE[15:12]. But 52 bits for 4K and 16K base pages is
> coming (hopefully v6.9) and in this case OA[51:50] are stored in PTE[9:8].
> Fortunately we already have helpers in arm64 to abstract this.
> 
> So I think arm64 will want to define its own pte_next_pfn():
> 
> #define pte_next_pfn pte_next_pfn
> static inline pte_t pte_next_pfn(pte_t pte)
> {
> 	return pfn_pte(pte_pfn(pte) + 1, pte_pgprot(pte));
> }
> 
> I'll do a separate patch to fix the already broken arm64 set_ptes() implementation.

Makes sense.

> 
> I'm not sure if this type of problem might also apply to other arches?

I saw similar handling in the PPC implementation of set_ptes(), but was 
not able to convince myself that it is actually required there.

pte_pfn() on ppc does:

static inline unsigned long pte_pfn(pte_t pte)
{
	return (pte_val(pte) & PTE_RPN_MASK) >> PTE_RPN_SHIFT;
}

But that means that the PFN bits *are* contiguous. If the high bits are 
used for something else, then we might produce a garbage PTE on 
overflow; but I concluded that shouldn't really matter for 
folio_pte_batch() purposes, as we'd not detect "belongs to this folio 
batch" either way.

Maybe it's cleaner to also have a custom pte_next_pfn() on ppc; I just 
hope that we don't lose any other arbitrary PTE bits by doing the 
pte_pgprot(). Roughly what I have in mind is sketched below.
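
A hypothetical, untested sketch, mirroring your arm64 proposal:

#define pte_next_pfn pte_next_pfn
static inline pte_t pte_next_pfn(pte_t pte)
{
	/*
	 * Goes through pfn_pte()/pte_pgprot(), so this only behaves if
	 * pte_pgprot() preserves every non-PFN bit we care about.
	 */
	return pfn_pte(pte_pfn(pte) + 1, pte_pgprot(pte));
}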


I guess pte_pfn() implementations should tell us if anything special 
needs to happen.

-- 
Cheers,

David / dhildenb



