[PATCH] arm64: pageattr: Explicitly bail out when changing permissions for vmalloc_huge mappings
Yang Shi
yang at os.amperecomputing.com
Fri Oct 10 08:52:14 PDT 2025
On 10/10/25 2:52 AM, Ryan Roberts wrote:
> Hi Yang,
>
>
> On 09/10/2025 21:26, Yang Shi wrote:
>>
>> On 3/27/25 11:21 PM, Dev Jain wrote:
>>> arm64 uses apply_to_page_range to change permissions for kernel VA mappings,
>>> which does not support changing permissions for block (huge) mappings. This
>>> function will change permissions until it encounters a block mapping, and
>>> will then bail out, leaving the range partially updated. To avoid this
>>> partial change, explicitly disallow changing permissions for
>>> VM_ALLOW_HUGE_VMAP mappings.
>>>
>>> Signed-off-by: Dev Jain <dev.jain at arm.com>
>>> ---
>>> arch/arm64/mm/pageattr.c | 4 ++--
>>> 1 file changed, 2 insertions(+), 2 deletions(-)
>>>
>>> diff --git a/arch/arm64/mm/pageattr.c b/arch/arm64/mm/pageattr.c
>>> index 39fd1f7ff02a..8337c88eec69 100644
>>> --- a/arch/arm64/mm/pageattr.c
>>> +++ b/arch/arm64/mm/pageattr.c
>>> @@ -96,7 +96,7 @@ static int change_memory_common(unsigned long addr, int numpages,
>>>  	 * we are operating on does not result in such splitting.
>>>  	 *
>>>  	 * Let's restrict ourselves to mappings created by vmalloc (or vmap).
>>> -	 * Those are guaranteed to consist entirely of page mappings, and
>>> +	 * Disallow VM_ALLOW_HUGE_VMAP vmalloc mappings so that
>>>  	 * splitting is never needed.
>>>  	 *
>>>  	 * So check whether the [addr, addr + size) interval is entirely
>>> @@ -105,7 +105,7 @@ static int change_memory_common(unsigned long addr, int numpages,
>>>  	area = find_vm_area((void *)addr);
>>>  	if (!area ||
>>>  	    end > (unsigned long)kasan_reset_tag(area->addr) + area->size ||
>>> -	    !(area->flags & VM_ALLOC))
>>> +	    ((area->flags & (VM_ALLOC | VM_ALLOW_HUGE_VMAP)) != VM_ALLOC))
>>>  		return -EINVAL;
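For illustration, the new check accepts an area only when VM_ALLOC is set and
VM_ALLOW_HUGE_VMAP is clear. A minimal userspace sketch of just that flag test
(the flag values below are made up for the demo; the real definitions live in
include/linux/vmalloc.h):

	#include <assert.h>

	/* illustrative values only, not the kernel's actual bit positions */
	#define VM_ALLOC		0x1UL
	#define VM_ALLOW_HUGE_VMAP	0x2UL

	static int change_permitted(unsigned long flags)
	{
		/* accept plain vmalloc areas; reject areas that may contain huge mappings */
		return (flags & (VM_ALLOC | VM_ALLOW_HUGE_VMAP)) == VM_ALLOC;
	}

	int main(void)
	{
		assert(change_permitted(VM_ALLOC));                        /* plain vmalloc: ok */
		assert(!change_permitted(VM_ALLOC | VM_ALLOW_HUGE_VMAP));  /* huge vmalloc: rejected */
		assert(!change_permitted(0));                              /* not vmalloc at all: rejected */
		return 0;
	}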
>> I happened to find this patch while looking into fixing the "splitting is
>> never needed" comment to reflect the latest BBML2_NOABORT changes, and I
>> tried to relax this restriction. I agree with the justification for this
>> patch: it makes the code more robust for permission updates on a partial
>> range. But the following linear mapping permission update code still seems
>> to assume that a partial range update never happens:
>>
>> for (i = 0; i < area->nr_pages; i++) {
>>
>> It iterates over all of the vm area's pages, starting from the first page,
>> and updates their permissions. So I think we should do the below to make it
>> robust against partial range updates, just like this patch did:
> Ahh so the issue is that [addr, addr + numpages * PAGE_SIZE) may only cover a
> portion of the vm area? But the current code updates the permissions for the
> whole vm area? Ouch...
Yes. As the earlier discussion said, I didn't see anyone actually do a
partial range update, but this is another "footgun waiting to go off" too.
We'd better get aligned with this patch.
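To make the off-by-range concrete, here is a small userspace sketch of the
index arithmetic (all addresses and sizes below are hypothetical): with an
8-page vm area and a request for 2 pages starting at page 4, the old loop
visits pages 0..7, while the fixed loop visits only pages 4..5:

	#include <stdio.h>

	#define PAGE_SHIFT	12
	#define PAGE_SIZE	(1UL << PAGE_SHIFT)

	int main(void)
	{
		unsigned long area_addr = 0x100000;              /* hypothetical area->addr */
		unsigned long nr_pages = 8;                      /* hypothetical area->nr_pages */
		unsigned long start = area_addr + 4 * PAGE_SIZE; /* requested range start */
		unsigned long numpages = 2;                      /* requested range length */
		unsigned long idx;

		/* old loop: walks every page of the vm area */
		printf("old loop touches pages 0..%lu\n", nr_pages - 1);

		/* fixed loop: starts at the page index of 'start', walks numpages pages */
		idx = (start - area_addr) >> PAGE_SHIFT;
		printf("fixed loop touches pages %lu..%lu\n", idx, idx + numpages - 1);

		return 0;
	}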
>
>> --- a/arch/arm64/mm/pageattr.c
>> +++ b/arch/arm64/mm/pageattr.c
>> @@ -185,8 +185,9 @@ static int change_memory_common(unsigned long addr, int numpages,
>>  	 */
>>  	if (rodata_full && (pgprot_val(set_mask) == PTE_RDONLY ||
>>  			    pgprot_val(clear_mask) == PTE_RDONLY)) {
>> -		for (i = 0; i < area->nr_pages; i++) {
>> -			__change_memory_common((u64)page_address(area->pages[i]),
>> +		unsigned long idx = (start - (unsigned long)area->addr) >> PAGE_SHIFT;
>> +		for (i = 0; i < numpages; i++) {
>> +			__change_memory_common((u64)page_address(area->pages[idx++]),
>>  					       PAGE_SIZE, set_mask, clear_mask);
>>  		}
>>  	}
>>
>> Just build tested. Does it look reasonable?
> Yes that looks correct to me! Will you submit a patch?
Yes. I will prepare the patches once -rc1 is available.
Thanks,
Yang
>
> Thanks,
> Ryan
>
>> Thanks,
>> Yang
>>
>>