LPA2 on non-LPA2 hardware broken with 16K pages

Will Deacon will at kernel.org
Tue Jul 23 09:05:43 PDT 2024


On Tue, Jul 23, 2024 at 05:02:15PM +0200, Ard Biesheuvel wrote:
> On Tue, 23 Jul 2024 at 16:52, Will Deacon <will at kernel.org> wrote:
> > On Fri, Jul 19, 2024 at 11:02:29AM -0700, Ard Biesheuvel wrote:
> > > Thanks for the cc, and thanks to Lina for the excellent diagnosis -
> > > this is really helpful.
> > >
> > > > diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
> > > > index f8efbc128446..3afe624a39e1 100644
> > > > --- a/arch/arm64/include/asm/pgtable.h
> > > > +++ b/arch/arm64/include/asm/pgtable.h
> > > > @@ -1065,6 +1065,13 @@ static inline bool pgtable_l5_enabled(void) { return false; }
> > > >
> > > >  #define p4d_offset_kimg(dir,addr)      ((p4d_t *)dir)
> > > >
> > > > +static inline
> > > > +p4d_t *p4d_offset_lockless(pgd_t *pgdp, pgd_t pgd, unsigned long addr)
> > >
> > > This is in the wrong place, I think - we already define this for the
> > > 5-level case (around line 1760).
> >
> > Hmm, I'm a bit confused. In my tree, we have one definition at line 1012,
> > which is for the 5-level case (i.e. guarded by
> > '#if CONFIG_PGTABLE_LEVELS > 4'). I'm adding a new one at line 1065,
> > which puts it in the '#else' block and means we use an override instead
> > of the problematic generic version when we're folding.
> >
> 
> Indeed. I failed to spot from the context (which is there in the diff)
> that this is in the else branch.

No worries.
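
(For the avoidance of doubt, the layout being discussed is roughly the
following -- contents elided, positions as per my tree:

#if CONFIG_PGTABLE_LEVELS > 4

/* ... 5-level case: the existing p4d_offset_lockless() lives here,
 *     around line 1012 ... */

#else

static inline bool pgtable_l5_enabled(void) { return false; }

/* ... folded case: the new p4d_offset_lockless() override is added
 *     here, around line 1065, so it is used instead of the problematic
 *     generic version ... */

#endif
)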

> > > > +{
> > >
> > > We might add
> > >
> > > if (pgtable_l4_enabled())
> > >     pgdp = &pgd;
> > >
> > > here to preserve the existing 'lockless' behavior when PUDs are not
> > > folded.
> >
> > The code still needs to be 'lockless' for the 5-level case, so I don't
> > think this is necessary.
> 
> The 5-level case is never handled here.

Urgh, yes, sorry. I've done a fantastically bad job of explaining myself.

> There is the 3-level case, where the runtime PUD folding needs the
> actual address in order to recalculate the descriptor address using
> the correct shift. In this case, we don't dereference the pointer
> anyway so the 'lockless' thing doesn't matter (afaict)
> 
> In the 4-level case, we want to preserve the original behavior, where
> pgd is not reloaded from pgdp. Setting pgdp to &pgd achieves that.

Right. What I'm trying to get at is the case where we have folding. For
example, with my patch applied, if we have 3 levels then the lockless
GUP walk looks like:


pgd_t pgd = READ_ONCE(*pgdp);

p4dp = p4d_offset_lockless(pgdp, pgd, addr);
	=> Returns pgdp
p4d_t p4d = READ_ONCE(*p4dp);

pudp = pud_offset_lockless(p4dp, p4d, addr);
	=> Returns &p4d, which is again the pgdp
pud_t pud = READ_ONCE(*pudp);


So here we end up reloading the same entry through the same pointer
multiple times, and my argument is that if we need to add logic to avoid
that for the pgtable_l4_enabled() case, then we have bigger problems.
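
The body of the override isn't visible in the quoted hunk, but a minimal
sketch consistent with the behaviour above (with Ard's suggested check
shown, commented out, where it would go) is something like:

static inline
p4d_t *p4d_offset_lockless(pgd_t *pgdp, pgd_t pgd, unsigned long addr)
{
	/*
	 * Ard's suggestion, to preserve the original 'lockless'
	 * behaviour (never reload from the live table) when PUDs are
	 * not folded:
	 *
	 *	if (pgtable_l4_enabled())
	 *		pgdp = &pgd;
	 */

	/*
	 * With p4ds folded, the pgd entry _is_ the p4d entry, so hand
	 * back the real table pointer ("Returns pgdp" above) so that
	 * the runtime PUD folding can recalculate the descriptor
	 * address from it.
	 */
	return (p4d_t *)pgdp;
}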

> > Yes, we'll load the same entry multiple times,
> > but it should be fine because they're in the context of a different
> > (albeit folded) level.
> >
> 
> I don't understand what you are saying here. Why is that fine?

I think it's fine because (a) the CPU guarantees same-address
read-after-read ordering and (b) we only ever evaluate the most recently
read value. It would be a problem if we mixed data from different reads
but, because each use is confined to its own 'level', we don't end up
doing that.
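
Spelling that out against the walk above (purely illustrative, reusing
the names from the 3-level example):

pgd_t pgd = READ_ONCE(*pgdp);			/* read #1 of the shared entry */

p4dp = p4d_offset_lockless(pgdp, pgd, addr);	/* p4dp == pgdp */
p4d_t p4d = READ_ONCE(*p4dp);			/* read #2 of the same entry */

pudp = pud_offset_lockless(p4dp, p4d, addr);	/* again the pgdp, per the walk above */
pud_t pud = READ_ONCE(*pudp);			/* read #3 of the same entry */

/*
 * (a) means read #2 cannot observe an older value than read #1, and
 *     read #3 cannot observe an older value than read #2.
 * (b) means the pgd 'level' only ever consults 'pgd', the p4d 'level'
 *     only 'p4d' and the pud 'level' only 'pud', so values from
 *     different reads of the entry are never combined.
 */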

Dunno, am I making any sense?

Will


