[PATCH V4 3/4] mm/sparse-vmemmap: Generalise vmemmap_populate_hugepages()

Huacai Chen chenhuacai at kernel.org
Fri Jul 8 02:47:56 PDT 2022


+Dan Williams
+Sudarshan Rajagopalan

On Thu, Jul 7, 2022 at 12:17 AM Will Deacon <will at kernel.org> wrote:
>
> On Tue, Jul 05, 2022 at 09:07:59PM +0800, Huacai Chen wrote:
> > On Tue, Jul 5, 2022 at 5:29 PM Will Deacon <will at kernel.org> wrote:
> > > On Mon, Jul 04, 2022 at 07:25:25PM +0800, Huacai Chen wrote:
> > > > diff --git a/mm/sparse-vmemmap.c b/mm/sparse-vmemmap.c
> > > > index 33e2a1ceee72..6f2e40bb695d 100644
> > > > --- a/mm/sparse-vmemmap.c
> > > > +++ b/mm/sparse-vmemmap.c
> > > > @@ -686,6 +686,60 @@ int __meminit vmemmap_populate_basepages(unsigned long start, unsigned long end,
> > > >       return vmemmap_populate_range(start, end, node, altmap, NULL);
> > > >  }
> > > >
> > > > +void __weak __meminit vmemmap_set_pmd(pmd_t *pmd, void *p, int node,
> > > > +                                   unsigned long addr, unsigned long next)
> > > > +{
> > > > +}
> > > > +
> > > > +int __weak __meminit vmemmap_check_pmd(pmd_t *pmd, int node, unsigned long addr,
> > > > +                                    unsigned long next)
> > > > +{
> > > > +     return 0;
> > > > +}
> > > > +
> > > > +int __meminit vmemmap_populate_hugepages(unsigned long start, unsigned long end,
> > > > +                                      int node, struct vmem_altmap *altmap)
> > > > +{
> > > > +     unsigned long addr;
> > > > +     unsigned long next;
> > > > +     pgd_t *pgd;
> > > > +     p4d_t *p4d;
> > > > +     pud_t *pud;
> > > > +     pmd_t *pmd;
> > > > +
> > > > +     for (addr = start; addr < end; addr = next) {
> > > > +             next = pmd_addr_end(addr, end);
> > > > +
> > > > +             pgd = vmemmap_pgd_populate(addr, node);
> > > > +             if (!pgd)
> > > > +                     return -ENOMEM;
> > > > +
> > > > +             p4d = vmemmap_p4d_populate(pgd, addr, node);
> > > > +             if (!p4d)
> > > > +                     return -ENOMEM;
> > > > +
> > > > +             pud = vmemmap_pud_populate(p4d, addr, node);
> > > > +             if (!pud)
> > > > +                     return -ENOMEM;
> > > > +
> > > > +             pmd = pmd_offset(pud, addr);
> > > > +             if (pmd_none(READ_ONCE(*pmd))) {
> > > > +                     void *p;
> > > > +
> > > > +                     p = vmemmap_alloc_block_buf(PMD_SIZE, node, altmap);
> > > > +                     if (p) {
> > > > +                             vmemmap_set_pmd(pmd, p, node, addr, next);
> > > > +                             continue;
> > > > +                     } else if (altmap)
> > > > +                             return -ENOMEM; /* no fallback */
> > >
> > > Why do you return -ENOMEM if 'altmap' here? That seems to be different to
> > > what we currently have on arm64 and it's not clear to me why we're happy
> > > with an altmap for the pmd case, but not for the pte case.
> > The generic version behaves the same as X86. It seems that ARM64
> > always falls back whether or not there is an altmap, but X86 only
> > falls back in the no-altmap case. I don't know the reason for X86's
> > behaviour; can Dan Williams give some explanation?
>
> Right, I think we need to understand the new behaviour here before we adopt
> it on arm64.
Hi, Dan,
Could you please tell us the reason? Thanks.

And Sudarshan,
You are the author of the fallback mechanism on ARM64; do you know
why ARM64 is different from X86 (which only falls back in the
no-altmap case)?

Huacai

>
> Will
