[PATCH v3] riscv: Add support to allocate gigantic hugepages using CMA
Palmer Dabbelt
palmer at dabbelt.com
Fri Aug 13 22:00:19 PDT 2021
On Fri, 30 Jul 2021 05:48:41 PDT (-0700), wangkefeng.wang at huawei.com wrote:
> Commit 9e953cda5cdf ("riscv: Introduce huge page support for 32/64bit
> kernel") added gigantic hugepage support for RV64.
>
> This patch adds support to allocate gigantic hugepages using CMA by
> specifying the hugetlb_cma= kernel parameter on RV64.
>
> Cc: Alexandre Ghiti <alex at ghiti.fr>
> Reviewed-by: Alexandre Ghiti <alex at ghiti.fr>
> Signed-off-by: Kefeng Wang <wangkefeng.wang at huawei.com>
> ---
> v3: Update changelog and add Reviewed-by.
> arch/riscv/mm/init.c | 3 +++
> 1 file changed, 3 insertions(+)
>
> diff --git a/arch/riscv/mm/init.c b/arch/riscv/mm/init.c
> index a14bf3910eec..e547e53cddd2 100644
> --- a/arch/riscv/mm/init.c
> +++ b/arch/riscv/mm/init.c
> @@ -19,6 +19,7 @@
> #include <linux/set_memory.h>
> #include <linux/dma-map-ops.h>
> #include <linux/crash_dump.h>
> +#include <linux/hugetlb.h>
>
> #include <asm/fixmap.h>
> #include <asm/tlbflush.h>
> @@ -216,6 +217,8 @@ static void __init setup_bootmem(void)
>
> early_init_fdt_scan_reserved_mem();
> dma_contiguous_reserve(dma32_phys_limit);
> + if (IS_ENABLED(CONFIG_64BIT))
> + hugetlb_cma_reserve(PUD_SHIFT - PAGE_SHIFT);
> memblock_allow_resize();
> }
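For reference, a minimal usage sketch of the feature this enables (assuming an RV64 kernel built with this patch and CONFIG_CMA; the 2G reservation size and the 1 GiB page size are illustrative — on RV64 the PUD-sized gigantic page is 1 GiB):

```shell
# Reserve 2 GiB of CMA for gigantic hugepages at boot by appending to the
# kernel command line (e.g. in the bootloader config):
#     hugetlb_cma=2G
#
# After boot, gigantic (1 GiB) hugepages can be allocated at runtime from
# the CMA area via sysfs (requires root):
echo 1 > /sys/kernel/mm/hugepages/hugepages-1048576kB/nr_hugepages
# Verify the allocation succeeded:
cat /sys/kernel/mm/hugepages/hugepages-1048576kB/nr_hugepages
```

Without the CMA reservation, gigantic pages can generally only be allocated early at boot, since finding 1 GiB of physically contiguous memory later is rarely possible.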
Thanks. This is on for-next. I cleaned up the commit text a bit.