[PATCH v3 3/7] mm: Implement for_each_valid_pfn() for CONFIG_SPARSEMEM
Mike Rapoport
rppt at kernel.org
Wed Apr 23 04:11:45 PDT 2025
On Wed, Apr 23, 2025 at 08:52:45AM +0100, David Woodhouse wrote:
> From: David Woodhouse <dwmw at amazon.co.uk>
>
> Implement for_each_valid_pfn() based on two helper functions.
>
> The first_valid_pfn() function largely mirrors pfn_valid(), calling into
> a pfn_section_first_valid() helper which is trivial for the !VMEMMAP case,
> and in the VMEMMAP case will skip to the next subsection as needed.
>
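To restate my understanding of the mmzone.h side (that hunk isn't quoted
here), the VMEMMAP flavour of pfn_section_first_valid() would look roughly
like the sketch below; names follow the commit message and the existing
subsection_map machinery, and the details may well differ from the actual
patch:

static inline bool pfn_section_first_valid(struct mem_section *ms,
					   unsigned long *pfn)
{
	struct mem_section_usage *usage = READ_ONCE(ms->usage);
	int idx = subsection_map_index(*pfn);
	unsigned long bit;

	if (!usage)
		return false;

	/* Already inside a present subsection? Nothing to adjust. */
	if (test_bit(idx, usage->subsection_map))
		return true;

	/* Otherwise skip *pfn forward to the next present subsection, if any. */
	bit = find_next_bit(usage->subsection_map, SUBSECTIONS_PER_SECTION, idx);
	if (bit == SUBSECTIONS_PER_SECTION)
		return false;

	*pfn = (*pfn & PAGE_SECTION_MASK) + bit * PAGES_PER_SUBSECTION;
	return true;
}

That last assignment is what can push *pfn past end_pfn, which the commit
message calls out below.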
> Since next_valid_pfn() knows that its argument *is* a valid PFN, it
> doesn't need to do any checking at all while iterating over the low bits
> within a (sub)section mask; the whole (sub)section is either present or
> not.
>
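IIUC that makes the common path per PFN just an increment and a mask test,
something along these lines (again only a sketch, with the signature assumed
from the description):

static inline unsigned long next_valid_pfn(unsigned long pfn, unsigned long end_pfn)
{
	pfn++;

	if (pfn >= end_pfn)
		return end_pfn;

	/*
	 * The caller's pfn was valid, and within one (sub)section every PFN
	 * is either present or absent as a block, so nothing needs to be
	 * re-checked until the low bits wrap into the next (sub)section.
	 */
#ifdef CONFIG_SPARSEMEM_VMEMMAP
	if (pfn & ~PAGE_SUBSECTION_MASK)
#else
	if (pfn & ~PAGE_SECTION_MASK)
#endif
		return pfn;

	return first_valid_pfn(pfn, end_pfn);
}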
> Note that the VMEMMAP version of pfn_section_first_valid() may return a
> value *higher* than end_pfn when skipping to the next subsection, and
> first_valid_pfn() happily returns that higher value. This is fine.
>
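Right. Presumably the loop construct itself bounds the iteration, something
like this (assumed, since the mmzone.h hunk isn't quoted here):

#define for_each_valid_pfn(pfn, start_pfn, end_pfn)			\
	for ((pfn) = first_valid_pfn((start_pfn), (end_pfn));		\
	     (pfn) < (end_pfn);						\
	     (pfn) = next_valid_pfn((pfn), (end_pfn)))

so a first_valid_pfn() result beyond end_pfn just makes the middle test fail
and the loop terminates without ever touching that PFN.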
> Signed-off-by: David Woodhouse <dwmw at amazon.co.uk>
> Previous-revision-reviewed-by: Mike Rapoport (Microsoft) <rppt at kernel.org>
> ---
> include/asm-generic/memory_model.h | 26 ++++++++--
> include/linux/mmzone.h | 78 ++++++++++++++++++++++++++++++
> 2 files changed, 99 insertions(+), 5 deletions(-)
>
> diff --git a/include/asm-generic/memory_model.h b/include/asm-generic/memory_model.h
> index 74d0077cc5fa..044536da3390 100644
> --- a/include/asm-generic/memory_model.h
> +++ b/include/asm-generic/memory_model.h
> @@ -31,12 +31,28 @@ static inline int pfn_valid(unsigned long pfn)
> }
> #define pfn_valid pfn_valid
>
> +static inline bool first_valid_pfn(unsigned long *pfn)
> +{
> + /* avoid <linux/mm.h> include hell */
> + extern unsigned long max_mapnr;
> + unsigned long pfn_offset = ARCH_PFN_OFFSET;
> +
> + if (*pfn < pfn_offset) {
> + *pfn = pfn_offset;
> + return true;
> + }
> +
> + if ((*pfn - pfn_offset) < max_mapnr)
> + return true;
> +
> + return false;
> +}
> +
Looks like this FLATMEM first_valid_pfn() helper is a leftover from one of the previous versions.
> #ifndef for_each_valid_pfn
> -#define for_each_valid_pfn(pfn, start_pfn, end_pfn) \
> - for ((pfn) = max_t(unsigned long, (start_pfn), ARCH_PFN_OFFSET); \
> - (pfn) < min_t(unsigned long, (end_pfn), \
> - ARCH_PFN_OFFSET + max_mapnr); \
> - (pfn)++)
> +#define for_each_valid_pfn(pfn, start_pfn, end_pfn) \
> + for (pfn = max_t(unsigned long, start_pfn, ARCH_PFN_OFFSET); \
> + pfn < min_t(unsigned long, end_pfn, ARCH_PFN_OFFSET + max_mapnr); \
> + pfn++)
And this hunk, which only changes whitespace, is probably a rebase artifact?
With the FLATMEM changes dropped,
This-revision-also-reviewed-by: Mike Rapoport (Microsoft) <rppt at kernel.org>
> #endif /* for_each_valid_pfn */
> #endif /* valid_pfn */
>
--
Sincerely yours,
Mike.