[kernel-hardening] [PATCH 0/7] arm64: Privileged Access Never using TTBR0_EL1 switching

Will Deacon will.deacon at arm.com
Mon Aug 15 03:21:41 PDT 2016


On Mon, Aug 15, 2016 at 11:15:28AM +0100, Mark Rutland wrote:
> On Mon, Aug 15, 2016 at 11:10:09AM +0100, Will Deacon wrote:
> > On Mon, Aug 15, 2016 at 11:06:49AM +0100, Mark Rutland wrote:
> > > On Mon, Aug 15, 2016 at 12:02:33PM +0200, Ard Biesheuvel wrote:
> > > > On 15 August 2016 at 11:58, Mark Rutland <mark.rutland at arm.com> wrote:
> > > > > On Mon, Aug 15, 2016 at 10:48:42AM +0100, Catalin Marinas wrote:
> > > > >> On Sat, Aug 13, 2016 at 11:13:58AM +0200, Ard Biesheuvel wrote:
> > > > >> > On 12 August 2016 at 17:27, Catalin Marinas <catalin.marinas at arm.com> wrote:
> > > > >> > > This is the first (public) attempt at emulating PAN by disabling
> > > > >> > > TTBR0_EL1 accesses on arm64.
> > > > >> >
> > > > >> > I take it using TCR_EL1.EPD0 is too expensive?
> > > > >>
> > > > >> It would require full TLB invalidation on entering/exiting the kernel
> > > > >> and again for any user access. That's because the architecture allows
> > > > >> this bit to be cached in the TLB, so without a TLBI we wouldn't have
> > > > >> any guarantee that PAN was actually toggled. I'm not sure it's even
> > > > >> clear whether a TLBI by ASID or a local one would suffice (the latter
> > > > >> is likely OK).
> > > > >
> > > > > It's worth noting that even ignoring the TLB-caching of TCR_EL1.EPD0, the
> > > > > control only affects the behaviour on a TLB miss. Thus to use EPD0 we'd at
> > > > > least need TLB invalidation by ASID to remove previously-allocated entries from
> > > > > TLBs.
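
(Purely as an illustration of the EPD0 approach being discussed, a minimal
sketch in C, assuming TCR_EL1.EPD0 is bit 7 and 16-bit ASIDs; the helper
names are made up, and whether the TLBI-by-ASID below is actually strong
enough is exactly the open question:)

#define TCR_EPD0	(1UL << 7)	/* disable TTBR0_EL1 table walks */

static inline unsigned long read_tcr_el1(void)
{
	unsigned long tcr;

	asm volatile("mrs %0, tcr_el1" : "=r" (tcr));
	return tcr;
}

static inline void write_tcr_el1(unsigned long tcr)
{
	asm volatile("msr tcr_el1, %0\n\tisb" : : "r" (tcr));
}

/* Hypothetical helper: block user accesses by disabling TTBR0 walks. */
static inline void uaccess_disable_epd0(unsigned long asid)
{
	write_tcr_el1(read_tcr_el1() | TCR_EPD0);

	/*
	 * EPD0 only affects TLB misses and may itself be cached in the
	 * TLB, so previously-allocated TTBR0 entries must be removed;
	 * whether this TLBI by ASID suffices needs clarifying.
	 */
	asm volatile("tlbi aside1is, %0\n\tdsb ish\n\tisb"
		     : : "r" (asid << 48) : "memory");
}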
> > > > 
> > > > ... or update the ASID to the reserved ASID in TTBR0_EL1, but leave
> > > > the actual TTBR address alone.
> > > > 
> > > > This would remove the need for a zero page, and for recording the
> > > > original TTBR address in a per-cpu variable.
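
(Again purely as an illustration of Ard's suggestion, used together with
setting EPD0 as above; this assumes 16-bit ASIDs held in TTBR0_EL1[63:48]
and that ASID 0 is kept reserved, and the helper names are made up:)

#define TTBR_ASID_MASK	(0xffffUL << 48)
#define RESERVED_ASID	0UL		/* assumed to be kept reserved */

static inline unsigned long read_ttbr0_el1(void)
{
	unsigned long ttbr;

	asm volatile("mrs %0, ttbr0_el1" : "=r" (ttbr));
	return ttbr;
}

static inline void write_ttbr0_el1(unsigned long ttbr)
{
	asm volatile("msr ttbr0_el1, %0\n\tisb" : : "r" (ttbr) : "memory");
}

/*
 * Switch TTBR0_EL1 to the reserved ASID while leaving the table base
 * address untouched, so neither a zero page nor a per-cpu copy of the
 * original TTBR value is needed.
 */
static inline void ttbr0_set_reserved_asid(void)
{
	write_ttbr0_el1((read_ttbr0_el1() & ~TTBR_ASID_MASK) |
			(RESERVED_ASID << 48));
}

/* Restore the task's real ASID when user accesses are re-enabled. */
static inline void ttbr0_set_asid(unsigned long asid)
{
	write_ttbr0_el1((read_ttbr0_el1() & ~TTBR_ASID_MASK) |
			(asid << 48));
}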
> > > 
> > > That's a good point, and a better approach.
> > > 
> > > Unfortunately, we're still left with the issue that TCR_EL1.* can be cached
> > > in a TLB, as Catalin pointed out. At a minimum that would require a TLBI
> > > ASIDE1, and it may require something stronger, given that the precise rules
> > > for TLB-cached fields aren't clear.
> > 
> > I suggest we get this clarified before merging the patch, as even the
> > author admits that it's ugly ;)
> 
> Just to be clear, you want to try the EPD0 approach in preference to Catalin's
> current zero-page approach (which is safe regardless as it doesn't poke TCR.*)?

I'd like to be sure that the architecture doesn't allow a cleaner approach,
yes. That means getting clarification on what TLB invalidation is actually
required if we try setting EPD0. I have
a horrible feeling that it will be TLBIALLE1, but we should confirm that.
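
For reference, a rough C sketch of what the TTBR0_EL1 switching in the
current series boils down to, as I understand it (the real code lives in
the entry/uaccess assembly; the symbol and helper names below are
illustrative only):

#include <linux/percpu.h>

/* Per-cpu copy of the real user TTBR0_EL1, as described in the series. */
static DEFINE_PER_CPU(unsigned long, saved_ttbr0);

/* Physical address of an empty (all-invalid) table, i.e. the zero page. */
extern unsigned long empty_zero_pgd_phys;

/* Called with preemption disabled, e.g. on exception entry. */
static inline void uaccess_ttbr0_disable(void)
{
	unsigned long ttbr;

	asm volatile("mrs %0, ttbr0_el1" : "=r" (ttbr));
	__this_cpu_write(saved_ttbr0, ttbr);

	asm volatile("msr ttbr0_el1, %0\n\tisb"
		     : : "r" (empty_zero_pgd_phys) : "memory");
}

/* Restore the saved user TTBR0_EL1 around explicit user accesses. */
static inline void uaccess_ttbr0_enable(void)
{
	asm volatile("msr ttbr0_el1, %0\n\tisb"
		     : : "r" (__this_cpu_read(saved_ttbr0)) : "memory");
}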

Will


