[PATCH v2 4/7] KVM: arm64: Store the maximum sve vector length at hyp

Fuad Tabba tabba at google.com
Wed May 22 07:36:13 PDT 2024


Hi Marc,

On Tue, May 21, 2024 at 10:21 PM Marc Zyngier <maz at kernel.org> wrote:
>
> On Tue, 21 May 2024 17:37:17 +0100,
> Fuad Tabba <tabba at google.com> wrote:
> >
> > In subsequent patches, hyp needs to know the maximum sve vector
> > length for the host, without needing to trust the host for that.
> > This is used when allocating memory for the host sve state in the
> > following patch, as well as for allocating and restricting guest
> > sve state in a future patch series.
> >
> > Signed-off-by: Fuad Tabba <tabba at google.com>
> > ---
> >  arch/arm64/include/asm/kvm_host.h | 1 +
> >  arch/arm64/include/asm/kvm_hyp.h  | 1 +
> >  arch/arm64/kvm/arm.c              | 1 +
> >  arch/arm64/kvm/hyp/nvhe/pkvm.c    | 2 ++
> >  arch/arm64/kvm/reset.c            | 2 ++
> >  5 files changed, 7 insertions(+)
> >
> > diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
> > index 8170c04fde91..0a5fceb20f3a 100644
> > --- a/arch/arm64/include/asm/kvm_host.h
> > +++ b/arch/arm64/include/asm/kvm_host.h
> > @@ -76,6 +76,7 @@ static inline enum kvm_mode kvm_get_mode(void) { return KVM_MODE_NONE; };
> >  DECLARE_STATIC_KEY_FALSE(userspace_irqchip_in_use);
> >
> >  extern unsigned int __ro_after_init kvm_sve_max_vl;
> > +extern unsigned int __ro_after_init kvm_host_sve_max_vl;
> >  int __init kvm_arm_init_sve(void);
> >
> >  u32 __attribute_const__ kvm_target_cpu(void);
> > diff --git a/arch/arm64/include/asm/kvm_hyp.h b/arch/arm64/include/asm/kvm_hyp.h
> > index 2ab23589339a..d313adf53bef 100644
> > --- a/arch/arm64/include/asm/kvm_hyp.h
> > +++ b/arch/arm64/include/asm/kvm_hyp.h
> > @@ -143,5 +143,6 @@ extern u64 kvm_nvhe_sym(id_aa64smfr0_el1_sys_val);
> >
> >  extern unsigned long kvm_nvhe_sym(__icache_flags);
> >  extern unsigned int kvm_nvhe_sym(kvm_arm_vmid_bits);
> > +extern unsigned int kvm_nvhe_sym(kvm_host_sve_max_vl);
> >
> >  #endif /* __ARM64_KVM_HYP_H__ */
> > diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
> > index 9996a989b52e..9e565ea3d645 100644
> > --- a/arch/arm64/kvm/arm.c
> > +++ b/arch/arm64/kvm/arm.c
> > @@ -2378,6 +2378,7 @@ static void kvm_hyp_init_symbols(void)
> >       kvm_nvhe_sym(id_aa64smfr0_el1_sys_val) = read_sanitised_ftr_reg(SYS_ID_AA64SMFR0_EL1);
> >       kvm_nvhe_sym(__icache_flags) = __icache_flags;
> >       kvm_nvhe_sym(kvm_arm_vmid_bits) = kvm_arm_vmid_bits;
> > +     kvm_nvhe_sym(kvm_host_sve_max_vl) = kvm_host_sve_max_vl;
> >  }
> >
> >  static int __init kvm_hyp_init_protection(u32 hyp_va_bits)
> > diff --git a/arch/arm64/kvm/hyp/nvhe/pkvm.c b/arch/arm64/kvm/hyp/nvhe/pkvm.c
> > index 16aa4875ddb8..25e9a94f6d76 100644
> > --- a/arch/arm64/kvm/hyp/nvhe/pkvm.c
> > +++ b/arch/arm64/kvm/hyp/nvhe/pkvm.c
> > @@ -18,6 +18,8 @@ unsigned long __icache_flags;
> >  /* Used by kvm_get_vttbr(). */
> >  unsigned int kvm_arm_vmid_bits;
> >
> > +unsigned int kvm_host_sve_max_vl;
> > +
> >  /*
> >   * Set trap register values based on features in ID_AA64PFR0.
> >   */
> > diff --git a/arch/arm64/kvm/reset.c b/arch/arm64/kvm/reset.c
> > index 1b7b58cb121f..e818727900ec 100644
> > --- a/arch/arm64/kvm/reset.c
> > +++ b/arch/arm64/kvm/reset.c
> > @@ -46,11 +46,13 @@ static u32 __ro_after_init kvm_ipa_limit;
> >                                PSR_AA32_I_BIT | PSR_AA32_F_BIT)
> >
> >  unsigned int __ro_after_init kvm_sve_max_vl;
> > +unsigned int __ro_after_init kvm_host_sve_max_vl;
> >
> >  int __init kvm_arm_init_sve(void)
> >  {
> >       if (system_supports_sve()) {
> >               kvm_sve_max_vl = sve_max_virtualisable_vl();
> > +             kvm_host_sve_max_vl = sve_max_vl();
>
> Why the intermediate storage?

It's not needed by this patch itself, but by the following one, which
allocates the memory for the host sve state, so that the same
functions, e.g., pkvm_host_sve_state_size(), can be reused at EL1 and
at hyp. I'll either move it to where it's used or find a way of
getting rid of it altogether.
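
Roughly what I have in mind for that helper (a sketch only -- struct
cpu_sve_state is a placeholder here, not the actual patch, while
SVE_SIG_REGS_SIZE() and sve_vq_from_vl() are the existing arm64
helpers):

/*
 * Sketch: size of the buffer needed to save the host's SVE state,
 * derived from kvm_host_sve_max_vl so that the same computation can
 * be done at EL1 and at hyp.
 */
static inline size_t pkvm_host_sve_state_size(void)
{
	if (!system_supports_sve())
		return 0;

	return sizeof(struct cpu_sve_state) +
	       SVE_SIG_REGS_SIZE(sve_vq_from_vl(kvm_host_sve_max_vl));
}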

> Setting kvm_nvhe_sym(kvm_host_sve_max_vl) to sve_max_vl() should be
> enough. But it doesn't mean that all the CPUs support the same max VL,
> and I wonder whether we end-up with the wrong in-memory layout in this
> case...

Looking at how the value behind sve_max_vl() is calculated
(vl_info[ARM64_VEC_SVE].max_vl): it seems to store the maximum VL
common to all CPUs, which in turn becomes the limit on the usable VLs
in the system as a whole.

So, for example, PR_SVE_SET_VL (sve_set_current_vl()) eventually comes
down to find_supported_vector_length(), which limits the VL being set
to the lengths recorded in vl_info[ARM64_VEC_SVE], and those are
system-wide rather than per-CPU.
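
In effect (illustrative only; clamp_sve_vl() below is a made-up name,
not the actual implementation), a requested VL can never end up larger
than that system-wide maximum:

/*
 * Net effect of find_supported_vector_length() for SVE, illustrative
 * only: the requested VL is clamped so it can never exceed the
 * system-wide maximum (the real code also rounds down to a length
 * that every CPU implements).
 */
static unsigned int clamp_sve_vl(unsigned int requested_vl)
{
	return min(requested_vl, vl_info[ARM64_VEC_SVE].max_vl);
}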

So, if my understanding is correct, I don't think we need to allocate
more memory than sve_max_vl() worth of state to store the host state.
It shouldn't be a problem in terms of layout either, as long as we
consistently use it for both save and restore, similar to how
kvm_arch_vcpu_put_fp() always uses vcpu_sve_max_vq(). It does need to
be documented, as you mention in a later patch.
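
I.e., something along these lines (made-up helper name, just to
illustrate the invariant, not the actual hyp code):

/*
 * Sketch, made-up name: both the save and the restore path would use
 * this single source of truth for the host's register layout,
 * mirroring how the guest side always goes through vcpu_sve_max_vq().
 */
static unsigned int host_sve_max_vq(void)
{
	return sve_vq_from_vl(kvm_host_sve_max_vl);
}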

Thanks,
/fuad




>
> Thanks,
>
>         M.
>
> --
> Without deviation from the norm, progress is not possible.


