[v2 PATCH] arm64: mm: make linear mapping permission update more robust for partial range

Yang Shi yang at os.amperecomputing.com
Tue Nov 18 15:34:46 PST 2025



On 11/18/25 3:07 PM, Nathan Chancellor wrote:
> On Tue, Nov 18, 2025 at 09:35:08AM -0800, Yang Shi wrote:
>> Thanks for reporting this problem. It looks like I forgot to use the
>> untagged address when calculating idx.
>>
>> Can you please try the below patch?
>>
>> diff --git a/arch/arm64/mm/pageattr.c b/arch/arm64/mm/pageattr.c
>> index 08ac96b9f846..0f6417e3f9f1 100644
>> --- a/arch/arm64/mm/pageattr.c
>> +++ b/arch/arm64/mm/pageattr.c
>> @@ -183,7 +183,7 @@ static int change_memory_common(unsigned long addr, int numpages,
>>           */
>>          if (rodata_full && (pgprot_val(set_mask) == PTE_RDONLY ||
>>                              pgprot_val(clear_mask) == PTE_RDONLY)) {
>> -                unsigned long idx = (start - (unsigned long)area->addr) >> PAGE_SHIFT;
>> +                unsigned long idx = (start - (unsigned long)kasan_reset_tag(area->addr)) >> PAGE_SHIFT;
>>                  for (; numpages; idx++, numpages--) {
>>                          __change_memory_common((u64)page_address(area->pages[idx]),
>>                                                 PAGE_SIZE, set_mask, clear_mask);
> Yes, that appears to resolve the issue for me, thanks for the quick fix!
>
> If a formal tag helps:
>
> Tested-by: Nathan Chancellor <nathan at kernel.org>

Thank you. I will prepare a formal patch soon.

Yang

>
> Cheers,
> Nathan
