[PATCH v2 2/4] arm64: mm: extend linear region for 52-bit VA configurations

Steve Capper <Steve.Capper at arm.com>
Tue Oct 13 12:51:39 EDT 2020


Hi Ard,

A couple of comments below...

On 08/10/2020 16:36, Ard Biesheuvel wrote:
> For historical reasons, the arm64 kernel VA space is configured as two
> equally sized halves, i.e., on a 48-bit VA build, the VA space is split
> into a 47-bit vmalloc region and a 47-bit linear region.
> 
> When support for 52-bit virtual addressing was added, this equal split
> was kept, resulting in a substantial waste of virtual address space in
> the linear region:
> 
>                             48-bit VA                     52-bit VA
>    0xffff_ffff_ffff_ffff +-------------+               +-------------+
>                          |   vmalloc   |               |   vmalloc   |
>    0xffff_8000_0000_0000 +-------------+ _PAGE_END(48) +-------------+
>                          |   linear    |               :             :
>    0xffff_0000_0000_0000 +-------------+               :             :
>                          :             :               :             :
>                          :             :               :             :
>                          :             :               :             :
>                          :             :               :  currently  :
>                          :  unusable   :               :             :
>                          :             :               :   unused    :
>                          :     by      :               :             :
>                          :             :               :             :
>                          :  hardware   :               :             :
>                          :             :               :             :
>    0xfff8_0000_0000_0000 :             : _PAGE_END(52) +-------------+
>                          :             :               |             |
>                          :             :               |             |
>                          :             :               |             |
>                          :             :               |             |
>                          :             :               |             |
>                          :  unusable   :               |             |
>                          :             :               |   linear    |
>                          :     by      :               |             |
>                          :             :               |   region    |
>                          :  hardware   :               |             |
>                          :             :               |             |
>                          :             :               |             |
>                          :             :               |             |
>                          :             :               |             |
>                          :             :               |             |
>                          :             :               |             |
>    0xfff0_0000_0000_0000 +-------------+  PAGE_OFFSET  +-------------+
> 
> As illustrated above, the 52-bit VA kernel uses 47 bits for the vmalloc
> space (as before), to ensure that a single 64k granule kernel image can
> support any 64k granule capable system, regardless of whether it supports
> the 52-bit virtual addressing extension. However, because the VA space
> is still split into equal halves, the linear region is only 2^51 bytes
> in size, wasting almost half of the 52-bit VA space.
> 
> Let's fix this, by abandoning the equal split, and simply assigning all
> VA space outside of the vmalloc region to the linear region.
> 
> The KASAN shadow region is reconfigured so that it ends at the start of
> the vmalloc region, and grows downwards. That way, the arrangement of
> the vmalloc space (which contains kernel mappings, modules, BPF region,
> the vmemmap array etc) is identical between non-KASAN and KASAN builds,
> which aids debugging.
> 
> Signed-off-by: Ard Biesheuvel <ardb at kernel.org>
> ---
>   Documentation/arm64/kasan-offsets.sh |  3 +--
>   Documentation/arm64/memory.rst       | 19 +++++++++----------
>   arch/arm64/Kconfig                   | 20 ++++++++++----------
>   arch/arm64/include/asm/memory.h      | 12 +++++-------
>   arch/arm64/mm/init.c                 |  2 +-
>   5 files changed, 26 insertions(+), 30 deletions(-)
> 

[...]
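
Just to check I'm reading the new arithmetic right, here is a quick
userspace sketch of what the diagram above implies for the 52-bit case.
The _PAGE_OFFSET()/_PAGE_END() definitions below reflect my reading of
asm/memory.h, so treat them as illustrative rather than authoritative:

#include <stdio.h>

#define _PAGE_OFFSET(va)	(-(1UL << (va)))	/* bottom of linear region  */
#define _PAGE_END(va)		(-(1UL << ((va) - 1)))	/* old top of linear region */

int main(void)
{
	/* 52-bit VA: vmalloc still starts at _PAGE_END(48) after this patch */
	unsigned long linear_old = _PAGE_END(52) - _PAGE_OFFSET(52);	/* 2^51        */
	unsigned long linear_new = _PAGE_END(48) - _PAGE_OFFSET(52);	/* 2^52 - 2^47 */

	printf("PAGE_OFFSET(52) = %#018lx\n", _PAGE_OFFSET(52));
	printf("_PAGE_END(48)   = %#018lx\n", _PAGE_END(48));
	printf("linear region: %lu TiB -> %lu TiB\n",
	       linear_old >> 40, linear_new >> 40);
	return 0;
}

That prints 0xfff0000000000000 and 0xffff800000000000, i.e. the linear
region grows from 2048 TiB to 3968 TiB, which matches the changelog as
far as I can tell.
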

> diff --git a/Documentation/arm64/memory.rst b/Documentation/arm64/memory.rst
> index cf03b3290800..ee51eb66a578 100644
> --- a/Documentation/arm64/memory.rst
> +++ b/Documentation/arm64/memory.rst
> @@ -32,10 +32,10 @@ AArch64 Linux memory layout with 4KB pages + 4 levels (48-bit)::
>     -----------------------------------------------------------------------
>     0000000000000000	0000ffffffffffff	 256TB		user
>     ffff000000000000	ffff7fffffffffff	 128TB		kernel logical memory map
> -  ffff800000000000	ffff9fffffffffff	  32TB		kasan shadow region
> -  ffffa00000000000	ffffa00007ffffff	 128MB		bpf jit region
> -  ffffa00008000000	ffffa0000fffffff	 128MB		modules
> -  ffffa00010000000	fffffdffbffeffff	 ~93TB		vmalloc
> +[ ffff600000000000	ffff7fffffffffff ]	  32TB		[ kasan shadow region ]

The KASAN shadow region now intersects the kernel logical memory map.
Could this present a problem if the KASAN shadow mapping and a
phys_to_virt() translation land on the same VA, or is that not possible?
(Also, what about smaller VA sizes?)
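
To be concrete about the overlap I mean, schematically (userspace sketch
only, ignoring PHYS_OFFSET/memstart_addr details; the addresses are taken
from the hunk above):

#include <stdio.h>

#define PAGE_OFFSET		0xffff000000000000UL	/* start of linear map */
#define KASAN_SHADOW_START	0xffff600000000000UL	/* [ kasan shadow ]    */

/* roughly what the linear mapping does: va = PAGE_OFFSET + (pa - memstart) */
static unsigned long linear_va(unsigned long pa, unsigned long memstart)
{
	return PAGE_OFFSET + (pa - memstart);
}

int main(void)
{
	unsigned long memstart = 0x80000000UL;		/* made-up DRAM base */
	unsigned long pa = memstart + 0x600000000000UL;	/* 96 TiB above it   */

	printf("linear alias       = %#lx\n", linear_va(pa, memstart));
	printf("kasan shadow start = %#lx\n", KASAN_SHADOW_START);
	return 0;
}

i.e. any RAM sitting 96 TiB or more above the start of DRAM would get a
linear alias inside the shadow window, if I have understood the new
layout correctly.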

If it is a problem, could carving out the appropriate memblocks (and 
warning the user) be a way forward?
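
Something along these lines, maybe, in arm64_memblock_init() (untested,
and the use of KASAN_SHADOW_START/memstart_addr here is only approximate):

	if (IS_ENABLED(CONFIG_KASAN)) {
		/* highest PA whose linear alias stays below the shadow region */
		phys_addr_t limit = memstart_addr +
				    (KASAN_SHADOW_START - PAGE_OFFSET);

		if (memblock_end_of_DRAM() > limit) {
			pr_warn("memory above %pa clipped to avoid the KASAN shadow region\n",
				&limit);
			memblock_remove(limit, ULLONG_MAX - limit);
		}
	}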

Cheers,
--
Steve


