[PATCH v3 11/26] x86/numa: use get_pfn_range_for_nid to verify that node spans memory
Dan Williams
dan.j.williams at intel.com
Mon Aug 5 13:03:56 PDT 2024
Mike Rapoport wrote:
> From: "Mike Rapoport (Microsoft)" <rppt at kernel.org>
>
> Instead of looping over numa_meminfo array to detect node's start and
> end addresses use get_pfn_range_for_nid().
>
> This is shorter and makes it easier to lift numa_memblks to generic code.
>
> Signed-off-by: Mike Rapoport (Microsoft) <rppt at kernel.org>
> Tested-by: Zi Yan <ziy at nvidia.com> # for x86_64 and arm64
> ---
> arch/x86/mm/numa.c | 13 +++----------
> 1 file changed, 3 insertions(+), 10 deletions(-)
>
> diff --git a/arch/x86/mm/numa.c b/arch/x86/mm/numa.c
> index edfc38803779..cfe7e5477cf8 100644
> --- a/arch/x86/mm/numa.c
> +++ b/arch/x86/mm/numa.c
> @@ -521,17 +521,10 @@ static int __init numa_register_memblks(struct numa_meminfo *mi)
>
> /* Finally register nodes. */
> for_each_node_mask(nid, node_possible_map) {
> - u64 start = PFN_PHYS(max_pfn);
> - u64 end = 0;
> + unsigned long start_pfn, end_pfn;
>
> - for (i = 0; i < mi->nr_blks; i++) {
> - if (nid != mi->blk[i].nid)
> - continue;
> - start = min(mi->blk[i].start, start);
> - end = max(mi->blk[i].end, end);
> - }
> -
> - if (start >= end)
> + get_pfn_range_for_nid(nid, &start_pfn, &end_pfn);
> + if (start_pfn >= end_pfn)
Assuming I understand why this works, would it be worth a comment like:
"Note, get_pfn_range_for_nid() depends on memblock_set_node() having
already happened"
...at least, that context was not part of the diff, so it took me a second
to figure out how this works.
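
For illustration, here is a rough sketch of where such a note might sit next
to the new call. This is paraphrased rather than quoted from the actual tree,
so the surrounding details may differ; the only point it leans on is the one
in the suggested comment, i.e. that memblock_set_node() has already recorded
the node id for each range earlier in numa_register_memblks():

	for_each_node_mask(nid, node_possible_map) {
		unsigned long start_pfn, end_pfn;

		/*
		 * get_pfn_range_for_nid() walks memblock.memory, so it
		 * only works here because memblock_set_node() has already
		 * tagged each memory range with its node id earlier in
		 * this function.
		 */
		get_pfn_range_for_nid(nid, &start_pfn, &end_pfn);
		if (start_pfn >= end_pfn)
			continue;

		/* ... node registration continues as before ... */
	}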