[kernel-hardening] [PATCH 0/7] arm64: Privileged Access Never using TTBR0_EL1 switching

Will Deacon will.deacon at arm.com
Mon Aug 15 03:08:38 PDT 2016


On Mon, Aug 15, 2016 at 12:02:33PM +0200, Ard Biesheuvel wrote:
> On 15 August 2016 at 11:58, Mark Rutland <mark.rutland at arm.com> wrote:
> > On Mon, Aug 15, 2016 at 10:48:42AM +0100, Catalin Marinas wrote:
> >> On Sat, Aug 13, 2016 at 11:13:58AM +0200, Ard Biesheuvel wrote:
> >> > On 12 August 2016 at 17:27, Catalin Marinas <catalin.marinas at arm.com> wrote:
> >> > > This is the first (public) attempt at emulating PAN by disabling
> >> > > TTBR0_EL1 accesses on arm64.
> >> >
> >> > I take it using TCR_EL1.EPD0 is too expensive?
> >>
> >> It would require full TLB invalidation on entering/exiting the kernel
> >> and again around any user access. That's because the architecture
> >> allows this bit to be cached in the TLB, so without a TLBI we would
> >> have no guarantee that the emulated PAN state had actually been
> >> toggled. It's not even clear to me whether a TLBI by ASID would
> >> suffice, or whether a local (non-broadcast) one would do (the latter
> >> is likely fine).
> >
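
For illustration, a rough sketch of what toggling TCR_EL1.EPD0 around a
user access would involve (not from this series; current_asid is a
placeholder, and per the above it is not even clear that a TLBI by ASID
would be sufficient):

#define TCR_EL1_EPD0	(1UL << 7)	/* disable table walks via TTBR0_EL1 */

static inline void epd0_pan_enable(unsigned long current_asid)
{
	unsigned long tcr;

	asm volatile("mrs %0, tcr_el1" : "=r" (tcr));
	asm volatile("msr tcr_el1, %0" :: "r" (tcr | TCR_EL1_EPD0));
	asm volatile("isb");

	/*
	 * EPD0 only affects TLB misses and may itself be cached in the
	 * TLB, so entries already allocated for this ASID (and any cached
	 * EPD0=0) have to be invalidated on every toggle.
	 */
	asm volatile("tlbi aside1, %0" :: "r" (current_asid << 48));
	asm volatile("dsb nsh");
	asm volatile("isb");
}
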
> > It's worth noting that even ignoring the TLB-caching of TCR_EL1.EPD0, the
> > control only affects the behaviour on a TLB miss. Thus to use EPD0 we'd at
> > least need TLB invalidation by ASID to remove previously-allocated entries from
> > TLBs.
> >
> 
> ... or update the ASID to the reserved ASID in TTBR0_EL1, but leave
> the actual TTBR address alone.
> 
> This would remove the need for a zero page, and for recording the
> original TTBR address in a per-cpu variable.
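
A minimal sketch of that idea, assuming the reserved ASID is 0 and
leaving aside for a moment how the two register writes are ordered
(again not from this series; the names are placeholders):

#define TCR_EL1_EPD0	(1UL << 7)
#define TTBR_ASID_MASK	(0xffffUL << 48)
#define RESERVED_ASID	0UL	/* placeholder: an ASID never handed to a task */

static inline void reserved_asid_pan_enable(void)
{
	unsigned long tcr, ttbr;

	/* swap in the reserved ASID but keep the TTBR0 base address */
	asm volatile("mrs %0, ttbr0_el1" : "=r" (ttbr));
	ttbr = (ttbr & ~TTBR_ASID_MASK) | (RESERVED_ASID << 48);
	asm volatile("msr ttbr0_el1, %0" :: "r" (ttbr));

	/* stop new walks via TTBR0_EL1 */
	asm volatile("mrs %0, tcr_el1" : "=r" (tcr));
	asm volatile("msr tcr_el1, %0" :: "r" (tcr | TCR_EL1_EPD0));
	asm volatile("isb");

	/*
	 * Existing user TLB entries are tagged with the task's real ASID
	 * and no longer match, so no TLBI, no zero page and no per-cpu
	 * copy of the original TTBR0 would be needed.
	 */
}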

Hmm, how does that work? The ASID and EPDx are in different registers,
so there's still a window where we could get speculative TLB fills.

Will
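
For reference, one reading of that window, with that particular write
order; the arguments are placeholders for the TTBR0 value carrying the
reserved ASID and the TCR value with EPD0 set, and the two updates are
separate MSRs that cannot be made atomic:

static inline void pan_enable_window_sketch(unsigned long ttbr_with_reserved_asid,
					    unsigned long tcr_with_epd0)
{
	/* reserved ASID in, TTBR0 base address unchanged */
	asm volatile("msr ttbr0_el1, %0" :: "r" (ttbr_with_reserved_asid));
	/*
	 * Window: TCR_EL1.EPD0 is still clear here, so a speculative
	 * walk through the still-valid user tables can allocate TLB
	 * entries tagged with the reserved ASID, and nothing ever
	 * invalidates them.
	 */
	asm volatile("msr tcr_el1, %0" :: "r" (tcr_with_epd0));
	asm volatile("isb");
}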


