[kernel-hardening] [PATCH 0/7] arm64: Privileged Access Never using TTBR0_EL1 switching

Catalin Marinas catalin.marinas at arm.com
Mon Aug 15 09:13:18 PDT 2016


On Mon, Aug 15, 2016 at 12:56:58PM +0200, Ard Biesheuvel wrote:
> On 15 August 2016 at 12:52, Catalin Marinas <catalin.marinas at arm.com> wrote:
> > On Mon, Aug 15, 2016 at 12:43:31PM +0200, Ard Biesheuvel wrote:
> >> But, how about we store the reserved ASID in TTBR1_EL1 instead, and
> >> switch TCR_EL1.A1 and TCR_EL1.EPD0 in a single write? That way, we can
> >> switch ASIDs and disable table walks atomically (I hope), and we
> >> wouldn't need to change TTBR0_EL1 at all.
> >
> > I did this before for AArch32 + LPAE (patches on the list sometime last
> > year I think). But the idea was nak'ed by the ARM architects. The
> > TCR_EL1.A1 can be cached somewhere in the TLB state machine, so you need
> > TLBI (IOW, toggling A1 does not guarantee an ASID switch).
> 
> But how is TTBR0_EL1 any different? The ARM ARM equally mentions that
> any of its fields can be cached in a TLB, so by that reasoning, setting
> a new ASID in TTBR0_EL1 would also require TLB maintenance.

Not really, because TTBR0_EL1 is also described as part of the context
switching operation, which makes it an exception to the general rule
that changing a field whose effect is cached in a TLB requires
invalidation.
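
For reference, the architectural context switch is a single TTBR0_EL1
write carrying both the new table base and the new ASID, with no TLBI.
A minimal sketch (loosely modelled on the kernel's cpu_do_switch_mm;
register allocation is illustrative, and 16-bit ASIDs are assumed):

  // x0 = phys address of the new pgd, x1 = new ASID
  bfi   x0, x1, #48, #16      // merge the ASID into TTBR0_EL1[63:48]
  msr   ttbr0_el1, x0         // base and ASID switched in one write
  isb                         // synchronise the context change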

If you keep reading the same paragraph, the ARM ARM becomes more
subjective ;) and you may come to the conclusion that the reserved ASID
(not TCR_EL1.A1 though) + TCR_EL1.EPD0 would do the trick, but we need
clarification from the architects rather than my random interpretation:

Section "D4.7.1 General TLB maintenance requirements" states:

  Some System register field descriptions state that the effect of the
  field is permitted to be cached in a TLB. This means that all TLB
  entries that might be affected by a change of the field must be
  invalidated whenever that field is changed.

So the above kind of implies that only the TLB *entries* that might be
affected by a change of a control bit need to be invalidated, and that
what is cached is the effect of such a bit (rather than the bit itself).
The effect of EPD0==1 is that there is no page table walk on a TTBR0
miss, so no new entries reflecting/caching the effect of EPD0==1 will
ever appear in the TLB. We still need to follow this with a switch to
the reserved ASID to make sure there are no other TLB entries for TTBR0
(and we shouldn't care about the window between EPD0=1 and
ASID=reserved, since no new entries can be allocated in it).
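
In code, that sequence would look something like the sketch below
(hypothetical, not taken from the patch series; it assumes 16-bit ASIDs
in TTBR0_EL1[63:48] and that ASID 0 is the reserved one):

  mrs   x0, tcr_el1
  orr   x0, x0, #(1 << 7)            // TCR_EL1.EPD0 = 1: no TTBR0 walks
  msr   tcr_el1, x0
  isb                                // no new TTBR0 entries beyond this point
  mrs   x0, ttbr0_el1
  and   x0, x0, #0x0000ffffffffffff  // clear the ASID field: switch to
  msr   ttbr0_el1, x0                // ASID 0 (assumed reserved)
  isb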

Anyway, the above wouldn't help the code much, since we would still need
to preserve/restore/switch the ASID of the current thread (unless we
temporarily stashed TTBR0_EL1.ASID in the TTBR1_EL1.ASID field, as
sketched below). The TCR_EL1.A1 trick would have been nice, but it was
explicitly rejected by the architects (I guess it's not part of the
context switching sequence, so the hardware may not notice the A1 bit
changing).
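
For completeness, the stashing idea, again only as a sketch (both TTBRs
carry an ASID field in bits [63:48], and TCR_EL1.A1 selects which one is
live):

  mrs   x0, ttbr0_el1
  ubfx  x1, x0, #48, #16      // extract the current thread's ASID
  mrs   x2, ttbr1_el1
  bfi   x2, x1, #48, #16      // park it in TTBR1_EL1.ASID (not live if A1=0)
  msr   ttbr1_el1, x2         // restore to TTBR0_EL1 on the way back out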

-- 
Catalin


