[PATCH 2/2] arm64/mm: Reorganize pfn_valid()
David Hildenbrand
david at redhat.com
Fri Jan 29 05:07:04 EST 2021
On 29.01.21 08:39, Anshuman Khandual wrote:
> There are multiple instances of pfn_to_section_nr() and __pfn_to_section()
> when CONFIG_SPARSEMEM is enabled. This can be just optimized if the memory
> section is fetched earlier. Hence bifurcate pfn_valid() into two different
> definitions depending on whether CONFIG_SPARSEMEM is enabled. Also replace
> the open coded pfn <--> addr conversion with __[pfn|phys]_to_[phys|pfn]().
> This does not cause any functional change.
>
> Cc: Catalin Marinas <catalin.marinas at arm.com>
> Cc: Will Deacon <will at kernel.org>
> Cc: Ard Biesheuvel <ardb at kernel.org>
> Cc: linux-arm-kernel at lists.infradead.org
> Cc: linux-kernel at vger.kernel.org
> Signed-off-by: Anshuman Khandual <anshuman.khandual at arm.com>
> ---
> arch/arm64/mm/init.c | 38 +++++++++++++++++++++++++++++++-------
> 1 file changed, 31 insertions(+), 7 deletions(-)
>
> diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
> index 1141075e4d53..09adca90c57a 100644
> --- a/arch/arm64/mm/init.c
> +++ b/arch/arm64/mm/init.c
> @@ -217,18 +217,25 @@ static void __init zone_sizes_init(unsigned long min, unsigned long max)
> free_area_init(max_zone_pfns);
> }
>
> +#ifdef CONFIG_SPARSEMEM
> int pfn_valid(unsigned long pfn)
> {
> - phys_addr_t addr = pfn << PAGE_SHIFT;
> + struct mem_section *ms = __pfn_to_section(pfn);
> + phys_addr_t addr = __pfn_to_phys(pfn);
I'd just use PFN_PHYS() here, which is more frequently used in the kernel.
>
> - if ((addr >> PAGE_SHIFT) != pfn)
> + /*
> + * Ensure the upper PAGE_SHIFT bits are clear in the
> + * pfn. Else it might lead to false positives when
> + * some of the upper bits are set, but the lower bits
> + * match a valid pfn.
> + */
> + if (__phys_to_pfn(addr) != pfn)
and here PHYS_PFN(). Comment is helpful. :)
> return 0;
>
> -#ifdef CONFIG_SPARSEMEM
> if (pfn_to_section_nr(pfn) >= NR_MEM_SECTIONS)
> return 0;
>
> - if (!valid_section(__pfn_to_section(pfn)))
> + if (!valid_section(ms))
> return 0;
>
> /*
> @@ -240,11 +247,28 @@ int pfn_valid(unsigned long pfn)
> * memory sections covering all of hotplug memory including
> * both normal and ZONE_DEVICE based.
> */
> - if (!early_section(__pfn_to_section(pfn)))
> - return pfn_section_valid(__pfn_to_section(pfn), pfn);
> -#endif
> + if (!early_section(ms))
> + return pfn_section_valid(ms, pfn);
> +
> return memblock_is_map_memory(addr);
> }
> +#else
> +int pfn_valid(unsigned long pfn)
> +{
> + phys_addr_t addr = __pfn_to_phys(pfn);
> +
> + /*
> + * Ensure the upper PAGE_SHIFT bits are clear in the
> + * pfn. Else it might lead to false positives when
> + * some of the upper bits are set, but the lower bits
> + * match a valid pfn.
> + */
> + if (__phys_to_pfn(addr) != pfn)
> + return 0;
> +
> + return memblock_is_map_memory(addr);
> +}
I think you can avoid duplicating the code by doing something like:

	phys_addr_t addr = PFN_PHYS(pfn);

	if (PHYS_PFN(addr) != pfn)
		return 0;

#ifdef CONFIG_SPARSEMEM
	{
		struct mem_section *ms = __pfn_to_section(pfn);

		if (!valid_section(ms))
			return 0;
		if (!early_section(ms))
			return pfn_section_valid(ms, pfn);
	}
#endif
	return memblock_is_map_memory(addr);
--
Thanks,
David / dhildenb