[PATCH] arm64/mm: Drop ARM64_KERNEL_USES_PMD_MAPS

Anshuman Khandual anshuman.khandual at arm.com
Sun Sep 25 20:18:22 PDT 2022



On 9/23/22 19:08, Joey Gouly wrote:
> Hi Anshuman,
> 
> On Fri, Sep 23, 2022 at 06:38:41PM +0530, Anshuman Khandual wrote:
>> Currently ARM64_KERNEL_USES_PMD_MAPS is an unnecessary abstraction. Kernel
>> mapping at the PMD (aka huge page aka block) level is only applicable with a
>> 4K base page, which makes it 2MB aligned, a necessary requirement for the
>> linear mapping and the physical memory start address. This can be easily
>> achieved by checking directly against the base page size itself. Drop the
>> now redundant macro ARM64_KERNEL_USES_PMD_MAPS.
>>
>> Cc: Catalin Marinas <catalin.marinas at arm.com>
>> Cc: Will Deacon <will at kernel.org>
>> Cc: linux-arm-kernel at lists.infradead.org
>> Signed-off-by: Anshuman Khandual <anshuman.khandual at arm.com>
>> ---
>> This applies on v6.0-rc6 after the following patch.
>>
>> https://lore.kernel.org/all/20220920014951.196191-1-wangkefeng.wang@huawei.com/
>>
>>  arch/arm64/include/asm/kernel-pgtable.h | 33 +++++++++----------------
>>  arch/arm64/mm/mmu.c                     |  2 +-
>>  2 files changed, 12 insertions(+), 23 deletions(-)
>>
>> diff --git a/arch/arm64/include/asm/kernel-pgtable.h b/arch/arm64/include/asm/kernel-pgtable.h
>> index 32d14f481f0c..5c2f72bae2ca 100644
>> --- a/arch/arm64/include/asm/kernel-pgtable.h
>> +++ b/arch/arm64/include/asm/kernel-pgtable.h
>> @@ -18,11 +18,6 @@
>>   * with 4K (section size = 2M) but not with 16K (section size = 32M) or
>>   * 64K (section size = 512M).
>>   */
>> -#ifdef CONFIG_ARM64_4K_PAGES
>> -#define ARM64_KERNEL_USES_PMD_MAPS 1
>> -#else
>> -#define ARM64_KERNEL_USES_PMD_MAPS 0
>> -#endif
> 
> There is now a dangling comment above this. I think it's quite a useful comment,
> so could be moved elsewhere if possible.

I have collected both of these relevant comment paragraphs before the 4K page switch.

> 
> Or maybe just keep ARM64_KERNEL_USES_PMD_MAPS because it's not a big abstraction
> and it's more obvious to why there's differences in SWAPPER_BLOCK_SIZE etc. 

The decision about kernel mapping granularity is static, i.e. it depends only
on the base page size. If that decision needs to be remembered at all in the
form of an abstraction, it can be captured in a new config option such as the
following, rather than in a macro.

config ARM64_KERNEL_USES_PMD_MAPS
	bool
	default y
	depends on ARM64_4K_PAGES

> 
>>  
>>  /*
>>   * The idmap and swapper page tables need some space reserved in the kernel
>> @@ -34,10 +29,20 @@
>>   * VA range, so pages required to map highest possible PA are reserved in all
>>   * cases.
>>   */
>> -#if ARM64_KERNEL_USES_PMD_MAPS
>> +#ifdef CONFIG_ARM64_4K_PAGES
>>  #define SWAPPER_PGTABLE_LEVELS	(CONFIG_PGTABLE_LEVELS - 1)
>> +#define SWAPPER_BLOCK_SHIFT	PMD_SHIFT
>> +#define SWAPPER_BLOCK_SIZE	PMD_SIZE
>> +#define SWAPPER_TABLE_SHIFT	PUD_SHIFT
>> +#define SWAPPER_RW_MMUFLAGS	(PMD_ATTRINDX(MT_NORMAL) | SWAPPER_PMD_FLAGS)
>> +#define SWAPPER_RX_MMUFLAGS	(SWAPPER_RW_MMUFLAGS | PMD_SECT_RDONLY)
>>  #else
>>  #define SWAPPER_PGTABLE_LEVELS	(CONFIG_PGTABLE_LEVELS)
>> +#define SWAPPER_BLOCK_SHIFT	PAGE_SHIFT
>> +#define SWAPPER_BLOCK_SIZE	PAGE_SIZE
>> +#define SWAPPER_TABLE_SHIFT	PMD_SHIFT
>> +#define SWAPPER_RW_MMUFLAGS	(PTE_ATTRINDX(MT_NORMAL) | SWAPPER_PTE_FLAGS)
>> +#define SWAPPER_RX_MMUFLAGS	(SWAPPER_RW_MMUFLAGS | PTE_RDONLY)
>>  #endif
>>  
>>  
>> @@ -96,15 +101,6 @@
>>  #define INIT_IDMAP_DIR_PAGES	EARLY_PAGES(KIMAGE_VADDR, _end + MAX_FDT_SIZE + SWAPPER_BLOCK_SIZE, 1)
>>  
>>  /* Initial memory map size */
>> -#if ARM64_KERNEL_USES_PMD_MAPS
>> -#define SWAPPER_BLOCK_SHIFT	PMD_SHIFT
>> -#define SWAPPER_BLOCK_SIZE	PMD_SIZE
>> -#define SWAPPER_TABLE_SHIFT	PUD_SHIFT
>> -#else
>> -#define SWAPPER_BLOCK_SHIFT	PAGE_SHIFT
>> -#define SWAPPER_BLOCK_SIZE	PAGE_SIZE
>> -#define SWAPPER_TABLE_SHIFT	PMD_SHIFT
>> -#endif
> 
> Also a dangling comment here.

These two comments can be dropped without much of a problem.

/* Initial memory map size */
/*
 * Initial memory map attributes.
 */

Will try to re-arrange these comments next time around.

- Anshuman

> 
> Thanks,
> Joey
> 
>>  
>>  /*
>>   * Initial memory map attributes.
>> @@ -112,13 +108,6 @@
>>  #define SWAPPER_PTE_FLAGS	(PTE_TYPE_PAGE | PTE_AF | PTE_SHARED)
>>  #define SWAPPER_PMD_FLAGS	(PMD_TYPE_SECT | PMD_SECT_AF | PMD_SECT_S)
>>  
>> -#if ARM64_KERNEL_USES_PMD_MAPS
>> -#define SWAPPER_RW_MMUFLAGS	(PMD_ATTRINDX(MT_NORMAL) | SWAPPER_PMD_FLAGS)
>> -#define SWAPPER_RX_MMUFLAGS	(SWAPPER_RW_MMUFLAGS | PMD_SECT_RDONLY)
>> -#else
>> -#define SWAPPER_RW_MMUFLAGS	(PTE_ATTRINDX(MT_NORMAL) | SWAPPER_PTE_FLAGS)
>> -#define SWAPPER_RX_MMUFLAGS	(SWAPPER_RW_MMUFLAGS | PTE_RDONLY)
>> -#endif
>>  
>>  /*
>>   * To make optimal use of block mappings when laying out the linear
>> diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
>> index 69deed27dec8..df1eac788c33 100644
>> --- a/arch/arm64/mm/mmu.c
>> +++ b/arch/arm64/mm/mmu.c
>> @@ -1192,7 +1192,7 @@ int __meminit vmemmap_populate(unsigned long start, unsigned long end, int node,
>>  
>>  	WARN_ON((start < VMEMMAP_START) || (end > VMEMMAP_END));
>>  
>> -	if (!ARM64_KERNEL_USES_PMD_MAPS)
>> +	if (!IS_ENABLED(CONFIG_ARM64_4K_PAGES))
>>  		return vmemmap_populate_basepages(start, end, node, altmap);
>>  
>>  	do {


