[PATCHv5 09/17] mm/sparse: Check memmap alignment for compound_info_has_mask()

Zi Yan ziy at nvidia.com
Wed Jan 28 19:10:23 PST 2026


On 28 Jan 2026, at 22:00, Muchun Song wrote:

>> On Jan 28, 2026, at 21:54, Kiryl Shutsemau <kas at kernel.org> wrote:
>>
>> If page->compound_info encodes a mask, vmemmap is expected to be
>> naturally aligned to the maximum folio size.
>>
>> Trigger a BUG() for CONFIG_DEBUG_VM=y or WARN() otherwise.
>>
>> Signed-off-by: Kiryl Shutsemau <kas at kernel.org>
>> Acked-by: Zi Yan <ziy at nvidia.com>
>> ---
>> mm/sparse.c | 13 +++++++++++++
>> 1 file changed, 13 insertions(+)
>>
>> diff --git a/mm/sparse.c b/mm/sparse.c
>> index b5b2b6f7041b..9c0f4015778c 100644
>> --- a/mm/sparse.c
>> +++ b/mm/sparse.c
>> @@ -600,6 +600,19 @@ void __init sparse_init(void)
>> BUILD_BUG_ON(!is_power_of_2(sizeof(struct mem_section)));
>> 	memblocks_present();
>>
>> + 	if (compound_info_has_mask()) {
>> + 		unsigned long alignment;
>> + 		bool aligned;
>> +
>> + 		alignment = MAX_FOLIO_NR_PAGES * sizeof(struct page);
>> + 		aligned = IS_ALIGNED((unsigned long) pfn_to_page(0), alignment);
>> +
>> + 		if (IS_ENABLED(CONFIG_DEBUG_VM))
>> + 			BUG_ON(!aligned);
>> + 		else
>> + 			WARN_ON(!aligned);
>
> Since you’ve fixed all the problematic architectures, I don’t believe
> we’ll ever hit the WARN or BUG here anymore.
>
> I think we can now simplify the code further and just use VM_BUG_ON:
> if any architecture changes in the future, the misalignment will be
> caught during testing, so we won’t need to worry about it at run-time.
>

VM_WARN_ON should be sufficient, since bots should report warnings
from any patch/change.
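
For reference, a minimal sketch of how the simplified check could look
(a sketch only, not a follow-up patch; it reuses the helpers from the
hunk above and VM_WARN_ON from include/linux/mmdebug.h):

	if (compound_info_has_mask()) {
		unsigned long alignment = MAX_FOLIO_NR_PAGES * sizeof(struct page);

		/*
		 * With CONFIG_DEBUG_VM=y this warns on a misaligned memmap;
		 * otherwise it compiles away entirely.
		 */
		VM_WARN_ON(!IS_ALIGNED((unsigned long)pfn_to_page(0), alignment));
	}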

>> + 	}
>> +
>> 	pnum_begin = first_present_section_nr();
>> 	nid_begin = sparse_early_nid(__nr_to_section(pnum_begin));
>>
>> -- 
>> 2.51.2
>>
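
As an aside, the reason the mask form needs this alignment can be
sketched roughly like this (illustrative only, not the series' actual
helper; the function name and the exact encoding are assumptions):

	/*
	 * Hypothetical sketch: if compound_info stores a mask instead of a
	 * pointer to the head page, the head can be recovered from any tail
	 * page by clearing the low bits of its struct page address.  That
	 * arithmetic is only correct when the memmap base is aligned to
	 * MAX_FOLIO_NR_PAGES * sizeof(struct page), which is what the check
	 * in sparse_init() verifies.
	 */
	static inline struct page *head_page_from_mask(struct page *tail,
						       unsigned long mask)
	{
		return (struct page *)((unsigned long)tail & mask);
	}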


Best Regards,
Yan, Zi


