[RFC PATCH 16/27] KVM: arm64: Prepare Hyp memory protection
Will Deacon
will at kernel.org
Mon Dec 7 06:10:14 EST 2020
On Mon, Dec 07, 2020 at 11:05:45AM +0000, Mark Rutland wrote:
> On Mon, Dec 07, 2020 at 10:20:03AM +0000, Will Deacon wrote:
> > On Fri, Dec 04, 2020 at 06:01:52PM +0000, Quentin Perret wrote:
> > > On Thursday 03 Dec 2020 at 12:57:33 (+0000), Fuad Tabba wrote:
> > > <snip>
> > > > > +SYM_FUNC_START(__kvm_init_switch_pgd)
> > > > > + /* Turn the MMU off */
> > > > > + pre_disable_mmu_workaround
> > > > > + mrs x2, sctlr_el2
> > > > > + bic x3, x2, #SCTLR_ELx_M
> > > > > + msr sctlr_el2, x3
> > > > > + isb
> > > > > +
> > > > > + tlbi alle2
> > > > > +
> > > > > + /* Install the new pgtables */
> > > > > + ldr x3, [x0, #NVHE_INIT_PGD_PA]
> > > > > + phys_to_ttbr x4, x3
> > > > > +alternative_if ARM64_HAS_CNP
> > > > > + orr x4, x4, #TTBR_CNP_BIT
> > > > > +alternative_else_nop_endif
> > > > > + msr ttbr0_el2, x4
> > > > > +
> > > > > + /* Set the new stack pointer */
> > > > > + ldr x0, [x0, #NVHE_INIT_STACK_HYP_VA]
> > > > > + mov sp, x0
> > > > > +
> > > > > + /* And turn the MMU back on! */
> > > > > + dsb nsh
> > > > > + isb
> > > > > + msr sctlr_el2, x2
> > > > > + isb
> > > > > + ret x1
> > > > > +SYM_FUNC_END(__kvm_init_switch_pgd)
> > > > > +
> > > >
> > > > Should the instruction cache be flushed here (ic iallu), to discard
> > > > speculatively fetched instructions?
> > >
> > > Hmm, Will? Thoughts?
> >
> > The I-cache is physically tagged, so not sure what invalidation would
> > achieve here. Fuad -- what do you think could go wrong specifically?
>
> While the MMU is off, instruction fetches can be made from the PoC
> rather than the PoU, so where instructions have been modified/copied and
> not cleaned to the PoC, it's possible to fetch stale copies into the
> I-caches. The physical tag doesn't prevent that.
Oh yeah, we even have a comment about that in
idmap_kpti_install_ng_mappings(). Maybe we should wrap the disable-MMU and
enable-MMU sequences in macros so we don't trip over this every time (that
would also mean we could get rid of pre_disable_mmu_workaround).
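
Something along these lines, perhaps (macro names are made up here for
illustration; this is just a sketch of the idea, with the enable side
mirroring what the boot path does):

	/* Hypothetical helpers: keep the workaround and the cache
	 * maintenance in one place instead of open-coding them. */
	.macro disable_mmu_el2, saved, tmp
	pre_disable_mmu_workaround
	mrs	\saved, sctlr_el2
	bic	\tmp, \saved, #SCTLR_ELx_M
	msr	sctlr_el2, \tmp
	isb
	.endm

	.macro enable_mmu_el2, saved
	dsb	nsh
	isb
	msr	sctlr_el2, \saved
	isb
	/* Discard instructions speculatively fetched from the PoC
	 * while the MMU was off. */
	ic	iallu
	dsb	nsh
	isb
	.endm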
> In the regular CPU boot paths, __enable_mmu() has an IC IALLU after
> enabling the MMU to ensure that we get rid of anything stale (e.g. so
> secondaries don't miss ftrace patching, which is only cleaned to the
> PoU).
>
> That might not be a problem here, if things are suitably padded and
> never dynamically patched, but if so it's probably worth a comment.
It's fragile enough that we should just do the invalidation.
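
Concretely, the tail of __kvm_init_switch_pgd above would pick up the
invalidation after the MMU comes back on, something like (untested sketch):

	/* And turn the MMU back on! */
	dsb	nsh
	isb
	msr	sctlr_el2, x2
	isb
	/* Discard anything fetched from the PoC while the MMU was off */
	ic	iallu
	dsb	nsh
	isb
	ret	x1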
Will