[PATCH 2/3] KVM: arm64: Generate final CTR_EL0 value when running in Protected mode

Marc Zyngier maz at kernel.org
Mon Mar 22 18:37:14 GMT 2021


On Mon, 22 Mar 2021 17:40:40 +0000,
Quentin Perret <qperret at google.com> wrote:
> 
> Hey Marc,
> 
> On Monday 22 Mar 2021 at 16:48:27 (+0000), Marc Zyngier wrote:
> > In protected mode, late CPUs are not allowed to boot (enforced by
> > the PSCI relay). We can thus specialise the read_ctr macro to
> > always return a pre-computed, sanitised value.
> > 
> > Signed-off-by: Marc Zyngier <maz at kernel.org>
> > ---
> >  arch/arm64/include/asm/assembler.h | 9 +++++++++
> >  arch/arm64/kernel/image-vars.h     | 1 +
> >  arch/arm64/kvm/va_layout.c         | 7 +++++++
> >  3 files changed, 17 insertions(+)
> > 
> > diff --git a/arch/arm64/include/asm/assembler.h b/arch/arm64/include/asm/assembler.h
> > index fb651c1f26e9..1a4cee7eb3c9 100644
> > --- a/arch/arm64/include/asm/assembler.h
> > +++ b/arch/arm64/include/asm/assembler.h
> > @@ -270,12 +270,21 @@ alternative_endif
> >   * provide the system wide safe value from arm64_ftr_reg_ctrel0.sys_val
> >   */
> >  	.macro	read_ctr, reg
> > +#ifndef __KVM_NVHE_HYPERVISOR__
> >  alternative_if_not ARM64_MISMATCHED_CACHE_TYPE
> >  	mrs	\reg, ctr_el0			// read CTR
> >  	nop
> >  alternative_else
> >  	ldr_l	\reg, arm64_ftr_reg_ctrel0 + ARM64_FTR_SYSVAL
> >  alternative_endif
> > +#else
> > +alternative_cb kvm_compute_final_ctr_el0
> > +	movz	\reg, #0
> > +	movk	\reg, #0, lsl #16
> > +	movk	\reg, #0, lsl #32
> > +	movk	\reg, #0, lsl #48
> > +alternative_cb_end
> > +#endif
> >  	.endm
> 
> So, FWIW, if we wanted to make _this_ macro BUG in non-protected mode
> (and drop patch 01), I think we could do something like:
> 
> alternative_cb kvm_compute_final_ctr_el0
> 	movz	\reg, #0
> 	ASM_BUG()
> 	nop
> 	nop
> alternative_cb_end
>
> and then make kvm_compute_final_ctr_el0() check that we're in protected
> mode before patching. That would be marginally better as that would
> cover _all_ users of read_ctr and not just __flush_dcache_area, but that
> first movz is a bit yuck (but necessary to keep generate_mov_q() happy I
> think?), so I'll leave the decision to you.

Can't say I'm keen on the yucky bit, but here's an alternative (ha!)
for you:

diff --git a/arch/arm64/include/asm/assembler.h b/arch/arm64/include/asm/assembler.h
index 1a4cee7eb3c9..7582c3bd2f05 100644
--- a/arch/arm64/include/asm/assembler.h
+++ b/arch/arm64/include/asm/assembler.h
@@ -278,6 +278,9 @@ alternative_else
 	ldr_l	\reg, arm64_ftr_reg_ctrel0 + ARM64_FTR_SYSVAL
 alternative_endif
 #else
+alternative_if_not ARM64_KVM_PROTECTED_MODE
+	ASM_BUG()
+alternative_else_nop_endif
 alternative_cb kvm_compute_final_ctr_el0
 	movz	\reg, #0
 	movk	\reg, #0, lsl #16

Yes, it is one more instruction, but it is cleaner and allows us to
drop the first patch of the series.

What do you think?

	M.

-- 
Without deviation from the norm, progress is not possible.
