[PATCH 0/2] arm64: permit KASLR in linear region even if VArange == PArange

Kefeng Wang wangkefeng.wang at huawei.com
Mon Feb 14 18:09:40 PST 2022


On 2021/12/16 16:56, Ard Biesheuvel wrote:
> (+ Marc)
>
> On Thu, 16 Dec 2021 at 08:37, Kefeng Wang <wangkefeng.wang at huawei.com> wrote:
>>
>> On 2021/12/15 22:52, Ard Biesheuvel wrote:
>>> Kefeng reports in [0] that using PArange to size the randomized linear
>>> region offset leads to cases where randomization is no longer possible
>>> even if the actual placement of DRAM in memory would otherwise have
>>> permitted it.
>>>
>>> Instead of using CONFIG_MEMORY_HOTPLUG to choose at build time between
>>> two different behaviors in this regard, let's try addressing this by
>>> reducing the minimum relative alignment between VA and PA in the linear
>>> region, and taking advantage of the space at the base of physical memory
>>> below the first memblock to permit some randomization of the placement
>>> of physical DRAM in the virtual address map.
>> VArange == PArange is OK, but our case is VA=39/PA=48, and this still
>> does not work :(
>>
>> Could we add a way (maybe a cmdline option) to set the max parange, so
>> that we could make randomization work? Or is there some other way?
>>
> We could, but it is not a very elegant way to recover this
> randomization range. You would need to reduce the PArange to 36 bits
> (which is the next valid option below 40) in order to ensure that a
> 39-bit VA kernel has some room for randomization, but this would not
> work on many systems because they require 40-bit physical addressing,
> due to the placement of DRAM in the PA space, not the DRAM size.
>
> Android 5.10 is in the same boat (and needs CONFIG_MEMORY_HOTPLUG=y)
> so I agree we need something better here.
>
>
Could we reuse the "linux,usable-memory-range" property?

For now, this property is only used to determine the available memory for
a crash dump kernel.

For the first kernel, we could use this property to describe all available
physical memory, including hotplug memory ranges. We would also have to make
sure that the range given in the FDT is reasonable.
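
For example, the chosen node might then look roughly like this (hypothetical
addresses, assuming two address cells and two size cells in the root node),
for a board whose DRAM, including hotpluggable memory, can only ever sit
between 2 GiB and 64 GiB:

/ {
	chosen {
		/* base 0x80000000 (2 GiB), size 0xf80000000 (62 GiB) */
		linux,usable-memory-range = <0x0 0x80000000 0xf 0x80000000>;
	};
};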


Here is the draft, how about this way?


diff --git a/Documentation/devicetree/bindings/chosen.txt b/Documentation/devicetree/bindings/chosen.txt
index 1cc3aa10dcb1..18ab9046dcd0 100644
--- a/Documentation/devicetree/bindings/chosen.txt
+++ b/Documentation/devicetree/bindings/chosen.txt
@@ -99,8 +99,12 @@ The main usage is for crash dump kernel to identify its own usable
  memory and exclude, at its boot time, any other memory areas that are
  part of the panicked kernel's memory.

-While this property does not represent a real hardware, the address
-and the size are expressed in #address-cells and #size-cells,
+When it is used for the first kernel (arm64 only, optional), it must contain
+the whole physical memory range, including hotplug memory. The range will be
+used to calculate the maximum randomization range of the linear region if
+CONFIG_RANDOMIZE_BASE is enabled on arm64.
+
+The address and the size are expressed in #address-cells and #size-cells,
  respectively, of the root node.

  linux,elfcorehdr
diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
index db63cc885771..a8f7d619550b 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -189,6 +189,40 @@ static int __init early_mem(char *p)
  }
  early_param("mem", early_mem);

+static void __init arm64_randomize_linear_region_setup(s64 linear_region_size)
+{
+    extern u16 memstart_offset_seed;
+    phys_addr_t usable_start, usable_size;
+    u64 mmfr0, max_phys;
+    s64 range;
+    int parange;
+
+    if (!IS_ENABLED(CONFIG_RANDOMIZE_BASE) || memstart_offset_seed == 0)
+        return;
+
+    mmfr0 = read_cpuid(ID_AA64MMFR0_EL1);
+    parange = cpuid_feature_extract_unsigned_field(
+                mmfr0, ID_AA64MMFR0_PARANGE_SHIFT);
+    max_phys = BIT(id_aa64mmfr0_parange_to_phys_shift(parange));
+
+    /* Cap the physical span with the usable memory range from the FDT. */
+    of_get_usable_mem_range(&usable_start, &usable_size);
+    if (!usable_size || usable_start + usable_size > max_phys)
+        usable_size = max_phys;
+
+    range = linear_region_size - usable_size;
+
+    /*
+     * If the size of the linear region exceeds, by a sufficient
+     * margin, the size of the region that the physical memory can
+     * span, randomize the linear region as well.
+     */
+    if (range >= (s64)ARM64_MEMSTART_ALIGN) {
+        range /= ARM64_MEMSTART_ALIGN;
+        memstart_addr -= ARM64_MEMSTART_ALIGN *
+                 ((range * memstart_offset_seed) >> 16);
+    } else {
+        pr_warn("linear mapping size is too small for KASLR\n");
+    }
+}
+
  void __init arm64_memblock_init(void)
  {
      s64 linear_region_size = PAGE_END - _PAGE_OFFSET(vabits_actual);
@@ -282,25 +316,7 @@ void __init arm64_memblock_init(void)
          }
      }

-    if (IS_ENABLED(CONFIG_RANDOMIZE_BASE)) {
-        extern u16 memstart_offset_seed;
-        u64 mmfr0 = read_cpuid(ID_AA64MMFR0_EL1);
-        int parange = cpuid_feature_extract_unsigned_field(
-                    mmfr0, ID_AA64MMFR0_PARANGE_SHIFT);
-        s64 range = linear_region_size -
-                BIT(id_aa64mmfr0_parange_to_phys_shift(parange));
-
-        /*
-         * If the size of the linear region exceeds, by a sufficient
-         * margin, the size of the region that the physical memory can
-         * span, randomize the linear region as well.
-         */
-        if (memstart_offset_seed > 0 && range >= (s64)ARM64_MEMSTART_ALIGN) {
-            range /= ARM64_MEMSTART_ALIGN;
-            memstart_addr -= ARM64_MEMSTART_ALIGN *
-                     ((range * memstart_offset_seed) >> 16);
-        }
-    }
+    arm64_randomize_linear_region_setup(linear_region_size);

      /*
       * Register the kernel text, kernel data, initrd, and initial
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index acfae9b41cc8..0a53ff9d5766 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -1450,6 +1450,11 @@ struct range arch_get_mappable_range(void)
      u64 end_linear_pa = __pa(PAGE_END - 1);

      if (IS_ENABLED(CONFIG_RANDOMIZE_BASE)) {
+        phys_addr_t usable_start, usable_size;
+        of_get_usable_mem_range(&usable_start, &usable_size);
+        if (usable_size)
+            end_linear_pa = min(end_linear_pa, usable_start + usable_size - 1);
+
          /*
           * Check for a wrap, it is possible because of randomized linear
           * mapping the start physical address is actually bigger than
diff --git a/drivers/of/fdt.c b/drivers/of/fdt.c
index ad85ff6474ff..191011912ced 100644
--- a/drivers/of/fdt.c
+++ b/drivers/of/fdt.c
@@ -972,6 +972,14 @@ static void __init early_init_dt_check_for_elfcorehdr(unsigned long node)
  }

  static unsigned long chosen_node_offset = -FDT_ERR_NOTFOUND;
+static phys_addr_t cap_mem_addr __ro_after_init;
+static phys_addr_t cap_mem_size __ro_after_init;
+
+void of_get_usable_mem_range(phys_addr_t *usable_start, phys_addr_t *usable_size)
+{
+    *usable_start = cap_mem_addr;
+    *usable_size = cap_mem_size;
+}

  /**
   * early_init_dt_check_for_usable_mem_range - Decode usable memory range
@@ -981,8 +989,6 @@ void __init early_init_dt_check_for_usable_mem_range(void)
  {
      const __be32 *prop;
      int len;
-    phys_addr_t cap_mem_addr;
-    phys_addr_t cap_mem_size;
      unsigned long node = chosen_node_offset;

      if ((long)node < 0)
diff --git a/include/linux/of_fdt.h b/include/linux/of_fdt.h
index d69ad5bb1eb1..be9b9e5a693f 100644
--- a/include/linux/of_fdt.h
+++ b/include/linux/of_fdt.h
@@ -83,6 +83,7 @@ extern void unflatten_device_tree(void);
  extern void unflatten_and_copy_device_tree(void);
  extern void early_init_devtree(void *);
  extern void early_get_first_memblock_info(void *, phys_addr_t *);
+extern void of_get_usable_mem_range(phys_addr_t *usable_start, phys_addr_t *usable_size);
  #else /* CONFIG_OF_EARLY_FLATTREE */
  static inline void early_init_dt_check_for_usable_mem_range(void) {}
  static inline int early_init_dt_scan_chosen_stdout(void) { return -ENODEV; }
@@ -91,6 +92,7 @@ static inline void early_init_fdt_reserve_self(void) {}
  static inline const char *of_flat_dt_get_machine_name(void) { return NULL; }
  static inline void unflatten_device_tree(void) {}
  static inline void unflatten_and_copy_device_tree(void) {}
+static inline void of_get_usable_mem_range(phys_addr_t *usable_start,
+        phys_addr_t *usable_size) { *usable_start = *usable_size = 0; }
  #endif /* CONFIG_OF_EARLY_FLATTREE */

  #endif /* __ASSEMBLY__ */
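
To illustrate with made-up numbers (the exact value of ARM64_MEMSTART_ALIGN
depends on page size and Kconfig): on a 39-bit VA / 48-bit PA kernel, the
slack computed by the draft above would change roughly as follows:

    linear_region_size = 2^(39 - 1)               = 256 GiB
    span assumed today = 2^PArange = 2^48         = 256 TiB  -> range < 0, no randomization
    span with a 64 GiB linux,usable-memory-range  =  64 GiB  -> range = 192 GiB

The 192 GiB of slack would then be handed out in ARM64_MEMSTART_ALIGN-sized
steps scaled by the 16-bit memstart_offset_seed, just as the existing code
does today.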


>>> Cc: Kefeng Wang <wangkefeng.wang at huawei.com>
>>>
>>> [0] https://lore.kernel.org/linux-arm-kernel/20211104062747.55206-1-wangkefeng.wang@huawei.com/
>>>
>>> Ard Biesheuvel (2):
>>>     arm64: simplify rules for defining ARM64_MEMSTART_ALIGN
>>>     arm64: kaslr: take free space at start of DRAM into account
>>>
>>>    arch/arm64/include/asm/kernel-pgtable.h | 27 +++-----------------
>>>    arch/arm64/mm/init.c                    |  3 ++-
>>>    2 files changed, 6 insertions(+), 24 deletions(-)
>>>
> .


