[kernel-hardening] [PATCH 0/7] arm64: Privileged Access Never using TTBR0_EL1 switching
Will Deacon
will.deacon at arm.com
Mon Aug 15 04:02:54 PDT 2016
On Mon, Aug 15, 2016 at 12:56:58PM +0200, Ard Biesheuvel wrote:
> On 15 August 2016 at 12:52, Catalin Marinas <catalin.marinas at arm.com> wrote:
> > On Mon, Aug 15, 2016 at 12:43:31PM +0200, Ard Biesheuvel wrote:
> >> But, how about we store the reserved ASID in TTBR1_EL1 instead, and
> >> switch TCR_EL1.A1 and TCR_EL1.EPD0 in a single write? That way, we can
> >> switch ASIDs and disable table walks atomically (I hope), and we
> >> wouldn't need to change TTBR0_EL1 at all.
> >
> > I did this before for AArch32 + LPAE (patches on the list sometime last
> > year, I think). But the idea was nak'ed by the ARM architects:
> > TCR_EL1.A1 can be cached somewhere in the TLB state machine, so you need
> > TLBI (IOW, toggling A1 does not guarantee an ASID switch).
> >
>
> But how is TTBR0_EL1 any different? The ARM ARM equally mentions that
> any of its fields can be cached in a TLB, so by that reasoning, setting
> a new ASID in TTBR0_EL1 would also require TLB maintenance.
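For reference, the single-write toggle proposed above amounts to flipping two TCR_EL1 bits in one MSR. A minimal sketch of the value computation (bit positions taken from the ARMv8 ARM's TCR_EL1 description; the helper name is illustrative, not an existing kernel function):

```c
#include <stdint.h>

/* TCR_EL1 bit positions, per the ARMv8 ARM register description. */
#define TCR_EPD0  (1UL << 7)   /* when set, disable table walks via TTBR0_EL1 */
#define TCR_A1    (1UL << 22)  /* when set, the ASID is taken from TTBR1_EL1 */

/*
 * Hypothetical helper: given the current TCR_EL1 value, compute the
 * value that simultaneously disables TTBR0 walks and selects the
 * reserved ASID held in TTBR1_EL1, so that a single register write
 * performs both changes.
 */
static uint64_t tcr_uaccess_disable(uint64_t tcr)
{
	return tcr | TCR_EPD0 | TCR_A1;
}
```

Whether writing that value back with a single MSR actually takes effect without TLB maintenance is exactly the point in dispute below.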
The underlying issue is that "cached in a TLB" doesn't distinguish between
"cached in a TLB *entry*" and "cached by the logic used to form TLB entries"
(i.e. the table walker).
That's where we need to seek clarification from the architects, because
existing microarchitectures do utilise both of these types of caching.
Will