[PATCH v4 1/3] arm64: Improve kernel address detection of __is_lm_address()

Vincenzo Frascino vincenzo.frascino at arm.com
Mon Jan 25 09:36:34 EST 2021


Hi Mark,

On 1/25/21 1:02 PM, Mark Rutland wrote:
> Hi Vincenzo,
> 
> On Fri, Jan 22, 2021 at 03:56:40PM +0000, Vincenzo Frascino wrote:
>> Currently, the __is_lm_address() check just masks out the top 12 bits
>> of the address, but if they are 0, it still yields a true result.
>> This has as a side effect that virt_addr_valid() returns true even for
>> invalid virtual addresses (e.g. 0x0).
>>
>> Improve the detection by checking that the address is actually a
>> kernel address starting at PAGE_OFFSET.
>>
>> Cc: Catalin Marinas <catalin.marinas at arm.com>
>> Cc: Will Deacon <will at kernel.org>
>> Suggested-by: Catalin Marinas <catalin.marinas at arm.com>
>> Reviewed-by: Catalin Marinas <catalin.marinas at arm.com>
>> Signed-off-by: Vincenzo Frascino <vincenzo.frascino at arm.com>
> 
> Looking around, it seems that there are some existing uses of
> virt_addr_valid() that expect it to reject addresses outside of the
> TTBR1 range. For example, check_mem_type() in drivers/tee/optee/call.c.
> 
> Given that, I think we need something that's easy to backport to stable.
> 

I agree. I started looking at it this morning and found cases even in the
main allocators (slub and page_alloc), besides the one you mentioned.
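
For reference, the failure mode those callers are exposed to is easy to
see by walking a NULL pointer through the two checks quoted below:

	/* current check: masking only clears the top bits */
	(u64)0 & ~PAGE_OFFSET == 0
		-> 0 < (PAGE_END - PAGE_OFFSET), so NULL passes as a
		   linear map address

	/* check in this patch */
	(u64)0 ^ PAGE_OFFSET == PAGE_OFFSET
		-> never below (PAGE_END - PAGE_OFFSET), so NULL is
		   rejected

which is how virt_addr_valid(0x0) can come back true today.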

> This patch itself looks fine, but it's not going to backport very far,
> so I suspect we might need to write a preparatory patch that adds an
> explicit range check to virt_addr_valid() which can be trivially
> backported.
> 

I checked the older releases and I agree that this is not backportable as it
stands. I therefore propose to add a preparatory patch with the check below:

#define __is_ttbr1_address(addr)	((u64)(addr) >= PAGE_OFFSET && \
					 (u64)(addr) < PAGE_END)
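
Wired into virt_addr_valid() the preparatory patch would then look roughly
like the sketch below; purely illustrative on my side, and it assumes
virt_addr_valid() keeps its current __tag_reset()/pfn_valid() shape, which
the older stable trees may not have:

#define virt_addr_valid(addr)	({					\
	__typeof__(addr) __addr = __tag_reset(addr);			\
	/* reject anything outside the TTBR1 range up front */		\
	__is_ttbr1_address(__addr) && __is_lm_address(__addr) &&	\
		pfn_valid(virt_to_pfn(__addr));				\
})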

If this works for you, I am happy to take care of it and post a new version
of my patches.

Thanks!

> For this patch:
> 
> Acked-by: Mark Rutland <mark.rutland at arm.com>
> 
> Thanks,
> Mark.
> 
>> ---
>>  arch/arm64/include/asm/memory.h | 6 ++++--
>>  1 file changed, 4 insertions(+), 2 deletions(-)
>>
>> diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
>> index 18fce223b67b..99d7e1494aaa 100644
>> --- a/arch/arm64/include/asm/memory.h
>> +++ b/arch/arm64/include/asm/memory.h
>> @@ -247,9 +247,11 @@ static inline const void *__tag_set(const void *addr, u8 tag)
>>  
>>  
>>  /*
>> - * The linear kernel range starts at the bottom of the virtual address space.
>> + * Check whether an arbitrary address is within the linear map, which
>> + * lives in the [PAGE_OFFSET, PAGE_END) interval at the bottom of the
>> + * kernel's TTBR1 address range.
>>   */
>> -#define __is_lm_address(addr)	(((u64)(addr) & ~PAGE_OFFSET) < (PAGE_END - PAGE_OFFSET))
>> +#define __is_lm_address(addr)	(((u64)(addr) ^ PAGE_OFFSET) < (PAGE_END - PAGE_OFFSET))
>>  
>>  #define __lm_to_phys(addr)	(((addr) & ~PAGE_OFFSET) + PHYS_OFFSET)
>>  #define __kimg_to_phys(addr)	((addr) - kimage_voffset)
>> -- 
>> 2.30.0
>>

-- 
Regards,
Vincenzo


