[PATCH v4 7/7] mm/mm_init: Use for_each_valid_pfn() in init_unavailable_range()

David Hildenbrand david at redhat.com
Mon Apr 28 00:12:57 PDT 2025


On 26.04.25 01:04, David Woodhouse wrote:
> On Fri, 2025-04-25 at 22:12 +0200, David Hildenbrand wrote:
>>
>> In any case, trying to figure out why Lorenzo ran into an issue ... if
>> it's not because of the pageblock, maybe something in for_each_valid_pfn
>> with sparsemem is still shaky.
> 
> Yep, I think this was it:
> 
> --- a/include/linux/mmzone.h
> +++ b/include/linux/mmzone.h
> @@ -2190,10 +2190,10 @@ static inline unsigned long next_valid_pfn(unsigned long pfn, unsigned long end_
>          /*
>           * Either every PFN within the section (or subsection for VMEMMAP) is
>           * valid, or none of them are. So there's no point repeating the check
> -        * for every PFN; only call first_valid_pfn() the first time, and when
> -        * crossing a (sub)section boundary (i.e. !(pfn & ~PFN_VALID_MASK)).
> +        * for every PFN; only call first_valid_pfn() again when crossing a
> +        * (sub)section boundary (i.e. !(pfn & ~PAGE_{SUB,}SECTION_MASK)).
>           */
> -       if (pfn & (IS_ENABLED(CONFIG_SPARSEMEM_VMEMMAP) ?
> +       if (pfn & ~(IS_ENABLED(CONFIG_SPARSEMEM_VMEMMAP) ?
>                     PAGE_SUBSECTION_MASK : PAGE_SECTION_MASK))


LGTM, although we could make this way easier to understand.

Something like:


unsigned long pfn_mask = PAGE_SECTION_MASK;

if (IS_ENABLED(CONFIG_SPARSEMEM_VMEMMAP))
	pfn_mask = PAGE_SUBSECTION_MASK;

if (pfn & ~pfn_mask)
	...

-- 
Cheers,

David / dhildenb
