[PATCH v2] arm64: mm: increase VA range of identity map
Catalin Marinas
catalin.marinas at arm.com
Wed Feb 25 04:38:14 PST 2015
On Tue, Feb 24, 2015 at 05:08:23PM +0000, Ard Biesheuvel wrote:
> --- a/arch/arm64/kernel/head.S
> +++ b/arch/arm64/kernel/head.S
> @@ -387,6 +387,28 @@ __create_page_tables:
> mov x0, x25 // idmap_pg_dir
> ldr x3, =KERNEL_START
> add x3, x3, x28 // __pa(KERNEL_START)
> +
> +#ifndef CONFIG_ARM64_VA_BITS_48
> +#define EXTRA_SHIFT (PGDIR_SHIFT + PAGE_SHIFT - 3)
> + /*
> + * If VA_BITS < 48, it may be too small to allow for an ID mapping to be
> + * created that covers system RAM if that is located sufficiently high
> + * in the physical address space. So for the ID map, use an extended
> + * virtual range in that case, by configuring an additional translation
> + * level.
> + */
> + adrp x5, KERNEL_END
> + clz x5, x5 // # of leading 0's in __pa(KERNEL_END)
> + cmp x5, #TCR_T0SZ(VA_BITS) // VA_BITS sufficiently large?
I think we need a better comment here for people looking at this code
again in the future (myself included). The number of VA bits needed to
cover KERNEL_END is (64 - clz(__pa(KERNEL_END))), and T0SZ is defined
as (64 - VA_BITS), so the minimum T0SZ required to map KERNEL_END is
exactly clz(__pa(KERNEL_END)) - hence the comparison above.
> + b.ge 1f // .. then skip additional level
> +
> + adrp x6, idmap_t0sz
> + str x5, [x6, #:lo12:idmap_t0sz]
> +
> + create_table_entry x0, x3, EXTRA_SHIFT, PTRS_PER_PGD, x5, x6
EXTRA_SHIFT is correctly calculated (one level above PGDIR_SHIFT), but I
don't think PTRS_PER_PGD is appropriate here. For example, we had in the
past a 39-bit VA configuration with 64KB pages, which made PTRS_PER_PGD
pretty small (it may cover the 40-bit case you are trying to fix, but
not the full 48 bits). So maybe define something like:
#define EXTRA_PTRS (1 << (48 - EXTRA_SHIFT))
Otherwise the patch looks fine to me. We have to look into fixing KVM as
well, ideally by sharing the same idmap with the kernel.
--
Catalin