[PATCH v1] arm64/mm: Make set_ptes() robust when OAs cross 48-bit boundary

Ryan Roberts <ryan.roberts at arm.com>
Thu Jan 25 09:11:27 PST 2024


On 25/01/2024 17:07, Catalin Marinas wrote:
> On Tue, Jan 23, 2024 at 04:17:18PM +0000, Ryan Roberts wrote:
>> diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
>> index 79ce70fbb751..734b39401a05 100644
>> --- a/arch/arm64/include/asm/pgtable.h
>> +++ b/arch/arm64/include/asm/pgtable.h
>> @@ -92,6 +92,14 @@ static inline pteval_t __phys_to_pte_val(phys_addr_t phys)
>>  #define pfn_pte(pfn,prot)	\
>>  	__pte(__phys_to_pte_val((phys_addr_t)(pfn) << PAGE_SHIFT) | pgprot_val(prot))
>>
>> +/*
>> + * Select all bits except the pfn
>> + */
>> +static inline pgprot_t pte_pgprot(pte_t pte)
>> +{
>> +	return __pgprot(pte_val(pte) & ~PTE_ADDR_MASK);
>> +}
>> +
>>  #define pte_none(pte)		(!pte_val(pte))
>>  #define pte_clear(mm,addr,ptep)	set_pte(ptep, __pte(0))
>>  #define pte_page(pte)		(pfn_to_page(pte_pfn(pte)))
>> @@ -341,6 +349,12 @@ static inline void __sync_cache_and_tags(pte_t pte, unsigned int nr_pages)
>>  		mte_sync_tags(pte, nr_pages);
>>  }
>>
>> +#define pte_next_pfn pte_next_pfn
>> +static inline pte_t pte_next_pfn(pte_t pte)
>> +{
>> +	return pfn_pte(pte_pfn(pte) + 1, pte_pgprot(pte));
>> +}
> 
> While I see why you wanted to optimise this, I'd rather keep the
> pte_pgprot() change separate and at a later time. This will conflict
> (fail to build) with Ard's patch removing PTE_ADDR_MASK:

OK, fair enough. I'll respin it without the pte_pgprot() change.

Thanks for the review.

> 
> https://lore.kernel.org/all/20240123145258.1462979-89-ardb+git@google.com/
> 
> This masking out is no longer straightforward with support for LPA2
> (especially the 52-bit physical addresses with 4K pages): bits 8 and 9
> of the PTE either contain bits 50, 51 of the PA or the shareability
> attribute if FEAT_LPA2 is not present. In the latter case, we need them
> preserved.
> 
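Makes sense. For anyone following along: with LPA2 there is no single fixed
mask that selects "everything except the PA", because PTE bits 8-9 carry
PA[51:50] when FEAT_LPA2 is implemented but the shareability field otherwise.
If/when pte_pgprot() gets added back, one possible way to avoid open-coding a
mask (an untested sketch, not part of this patch) would be to round-trip
through the existing pte_pfn()/pfn_pte() helpers, which already know the
active address encoding:

/*
 * Untested sketch: select all bits except the pfn without relying on a
 * fixed PTE_ADDR_MASK. pfn_pte(pfn, __pgprot(0)) reconstructs just the
 * address bits in whatever encoding is in use, so XOR-ing that with the
 * original pte value leaves only the attribute bits - including SH[1:0]
 * when bits 8-9 are not carrying PA[51:50].
 */
static inline pgprot_t pte_pgprot(pte_t pte)
{
	unsigned long pfn = pte_pfn(pte);

	return __pgprot(pte_val(pfn_pte(pfn, __pgprot(0))) ^ pte_val(pte));
}

Deriving the prot bits from pte_pfn()/pfn_pte() rather than from a hard-coded
mask means the helper would automatically follow whichever PA layout the
kernel is using, so it shouldn't conflict with Ard's PTE_ADDR_MASK removal
either. But that's for a later series, as you suggest.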