[PATCH v14 01/11] x86: kdump: replace the hard-coded alignment with macro CRASH_ALIGN
Chen Zhou
chenzhou10 at huawei.com
Sat Jan 30 02:10:15 EST 2021
Move CRASH_ALIGN to the header asm/kexec.h for later use. Besides, the
alignment of crash kernel regions on x86 is 16M (CRASH_ALIGN), but the
function reserve_crashkernel() still uses a hard-coded 1M alignment in
one place. So replace the hard-coded 1M alignment with the macro
CRASH_ALIGN.
Suggested-by: Dave Young <dyoung at redhat.com>
Suggested-by: Baoquan He <bhe at redhat.com>
Signed-off-by: Chen Zhou <chenzhou10 at huawei.com>
Tested-by: John Donnelly <John.p.donnelly at oracle.com>
---
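Note for readers not familiar with the constants involved: below is a
minimal, self-contained sketch of the round-up arithmetic (illustration
only, not part of the patch or of kernel code; align_up() here stands in
for the kernel's ALIGN() semantics, and the example base address is made
up):

#include <stdio.h>

#define SZ_1M		0x00100000UL
#define SZ_16M		0x01000000UL
#define CRASH_ALIGN	SZ_16M	/* same value the patch uses */

/* Round addr up to the next multiple of align (a power of two). */
static unsigned long align_up(unsigned long addr, unsigned long align)
{
	return (addr + align - 1) & ~(align - 1);
}

int main(void)
{
	unsigned long base = 0x3f00000UL;	/* 63M, arbitrary example */

	printf("1M-aligned:  %#lx\n", align_up(base, SZ_1M));		/* 0x3f00000 */
	printf("16M-aligned: %#lx\n", align_up(base, CRASH_ALIGN));	/* 0x4000000 */
	return 0;
}

With the change, the user-specified crash_base path uses the same 16M
alignment as the rest of the crash kernel reservation code, so all
allocation paths place the region consistently.
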
 arch/x86/include/asm/kexec.h | 3 +++
 arch/x86/kernel/setup.c      | 5 +----
 2 files changed, 4 insertions(+), 4 deletions(-)
diff --git a/arch/x86/include/asm/kexec.h b/arch/x86/include/asm/kexec.h
index 6802c59e8252..be18dc7ae51f 100644
--- a/arch/x86/include/asm/kexec.h
+++ b/arch/x86/include/asm/kexec.h
@@ -18,6 +18,9 @@
 
 # define KEXEC_CONTROL_CODE_MAX_SIZE	2048
 
+/* 16M alignment for crash kernel regions */
+#define CRASH_ALIGN		SZ_16M
+
 #ifndef __ASSEMBLY__
 
 #include <linux/string.h>
diff --git a/arch/x86/kernel/setup.c b/arch/x86/kernel/setup.c
index 3412c4595efd..da769845597d 100644
--- a/arch/x86/kernel/setup.c
+++ b/arch/x86/kernel/setup.c
@@ -390,9 +390,6 @@ static void __init memblock_x86_reserve_range_setup_data(void)
 
 #ifdef CONFIG_KEXEC_CORE
 
-/* 16M alignment for crash kernel regions */
-#define CRASH_ALIGN		SZ_16M
-
 /*
  * Keep the crash kernel below this limit.
  *
@@ -510,7 +507,7 @@ static void __init reserve_crashkernel(void)
 	} else {
 		unsigned long long start;
 
-		start = memblock_phys_alloc_range(crash_size, SZ_1M, crash_base,
+		start = memblock_phys_alloc_range(crash_size, CRASH_ALIGN, crash_base,
 						  crash_base + crash_size);
 		if (start != crash_base) {
 			pr_info("crashkernel reservation failed - memory is in use.\n");
--
2.20.1