[PATCH 1/5] KVM: arm64: Drop direct PAGE_[SHIFT|SIZE] usage as page size
Anshuman Khandual
anshuman.khandual at arm.com
Tue Aug 10 22:34:46 PDT 2021
On 8/10/21 7:03 PM, Marc Zyngier wrote:
> On 2021-08-10 08:02, Anshuman Khandual wrote:
>> All instances here could just directly test against CONFIG_ARM64_XXK_PAGES
>> instead of evaluating via PAGE_SHIFT or PAGE_SIZE. With this change, there
>> will be no such usage left.
>>
>> Cc: Marc Zyngier <maz at kernel.org>
>> Cc: James Morse <james.morse at arm.com>
>> Cc: Alexandru Elisei <alexandru.elisei at arm.com>
>> Cc: Suzuki K Poulose <suzuki.poulose at arm.com>
>> Cc: Catalin Marinas <catalin.marinas at arm.com>
>> Cc: Will Deacon <will at kernel.org>
>> Cc: linux-arm-kernel at lists.infradead.org
>> Cc: kvmarm at lists.cs.columbia.edu
>> Cc: linux-kernel at vger.kernel.org
>> Signed-off-by: Anshuman Khandual <anshuman.khandual at arm.com>
>> ---
>> arch/arm64/kvm/hyp/pgtable.c | 6 +++---
>> arch/arm64/mm/mmu.c | 2 +-
>> 2 files changed, 4 insertions(+), 4 deletions(-)
>>
>> diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
>> index 05321f4165e3..a6112b6d6ef6 100644
>> --- a/arch/arm64/kvm/hyp/pgtable.c
>> +++ b/arch/arm64/kvm/hyp/pgtable.c
>> @@ -85,7 +85,7 @@ static bool kvm_level_supports_block_mapping(u32 level)
>> * Reject invalid block mappings and don't bother with 4TB mappings for
>> * 52-bit PAs.
>> */
>> - return !(level == 0 || (PAGE_SIZE != SZ_4K && level == 1));
>> + return !(level == 0 || (!IS_ENABLED(CONFIG_ARM64_4K_PAGES) && level == 1));
>> }
>>
>> static bool kvm_block_mapping_supported(u64 addr, u64 end, u64 phys, u32 level)
>> @@ -155,7 +155,7 @@ static u64 kvm_pte_to_phys(kvm_pte_t pte)
>> {
>> u64 pa = pte & KVM_PTE_ADDR_MASK;
>>
>> - if (PAGE_SHIFT == 16)
>> + if (IS_ENABLED(CONFIG_ARM64_64K_PAGES))
>> pa |= FIELD_GET(KVM_PTE_ADDR_51_48, pte) << 48;
>>
>> return pa;
>> @@ -165,7 +165,7 @@ static kvm_pte_t kvm_phys_to_pte(u64 pa)
>> {
>> kvm_pte_t pte = pa & KVM_PTE_ADDR_MASK;
>>
>> - if (PAGE_SHIFT == 16)
>> + if (IS_ENABLED(CONFIG_ARM64_64K_PAGES))
>> pte |= FIELD_PREP(KVM_PTE_ADDR_51_48, pa >> 48);
>>
>> return pte;
>> diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
>> index 9ff0de1b2b93..8fdfca179815 100644
>> --- a/arch/arm64/mm/mmu.c
>> +++ b/arch/arm64/mm/mmu.c
>> @@ -296,7 +296,7 @@ static void alloc_init_cont_pmd(pud_t *pudp,
>> unsigned long addr,
>> static inline bool use_1G_block(unsigned long addr, unsigned long next,
>> unsigned long phys)
>> {
>> - if (PAGE_SHIFT != 12)
>> + if (!IS_ENABLED(CONFIG_ARM64_4K_PAGES))
>> return false;
>>
>> if (((addr | next | phys) & ~PUD_MASK) != 0)
>
> I personally find it a lot less readable.
>
> Also, there is no evaluation whatsoever. All the code guarded
> by a PAGE_SIZE/PAGE_SHIFT that doesn't match the configuration
> is dropped at compile time.
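Right, there is no runtime evaluation either way. As a minimal standalone
sketch (not kernel code; PAGE_SHIFT and the address mask below are assumed
values for a 64K-page build), the old-style guard already folds away at
compile time:

#include <stdint.h>
#include <stdio.h>

/* Assumed values for a CONFIG_ARM64_64K_PAGES build, not the real headers. */
#define PAGE_SHIFT	16
#define ADDR_MASK	(((1ULL << 48) - 1) & ~((1ULL << PAGE_SHIFT) - 1))

static uint64_t pte_to_phys(uint64_t pte)
{
	uint64_t pa = pte & ADDR_MASK;

	/*
	 * PAGE_SHIFT is a compile-time constant for a given configuration,
	 * so this condition is constant-folded: the branch either always
	 * runs or is dropped entirely, with no runtime test.
	 */
	if (PAGE_SHIFT == 16)
		pa |= ((pte >> 12) & 0xfULL) << 48;	/* stand-in for bits [51:48] */

	return pa;
}

int main(void)
{
	printf("pa = %#llx\n", (unsigned long long)pte_to_phys(0x1234f000ULL));
	return 0;
}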
The primary idea here is to unify around IS_ENABLED(CONFIG_ARM64_XXK_PAGES)
usage in arm64, rather than having multiple methods to test the page size
whenever required.
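As an illustration of the unified form, here is a simplified standalone
stand-in for IS_ENABLED() from include/linux/kconfig.h (the real macro also
handles =m options; the configuration below is assumed for a 4K-page build).
The IS_ENABLED() test is just as much a compile-time constant, but it names
the configuration directly:

#include <stdio.h>

/* Assumed for illustration: a 4K-page configuration. */
#define CONFIG_ARM64_4K_PAGES	1
#define PAGE_SHIFT		12
#define PAGE_SIZE		(1UL << PAGE_SHIFT)
#define SZ_4K			0x1000UL

/*
 * Simplified stand-in for the kernel's IS_ENABLED(): expands to 1 when
 * the config symbol is defined to 1, and to 0 when it is not defined.
 */
#define __ARG_PLACEHOLDER_1			0,
#define __take_second_arg(__ignored, val, ...)	val
#define ____is_defined(arg1_or_junk)		__take_second_arg(arg1_or_junk 1, 0)
#define ___is_defined(val)			____is_defined(__ARG_PLACEHOLDER_##val)
#define __is_defined(x)				___is_defined(x)
#define IS_ENABLED(option)			__is_defined(option)

int main(void)
{
	/* Both predicates reduce to the same compile-time constant ... */
	printf("PAGE_SIZE == SZ_4K                 : %d\n", PAGE_SIZE == SZ_4K);
	printf("IS_ENABLED(CONFIG_ARM64_4K_PAGES)  : %d\n", IS_ENABLED(CONFIG_ARM64_4K_PAGES));
	/* ... but only the second names the configuration being tested. */
	printf("IS_ENABLED(CONFIG_ARM64_64K_PAGES) : %d\n", IS_ENABLED(CONFIG_ARM64_64K_PAGES));
	return 0;
}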