[PATCH v2 7/7] KVM: arm64: Consolidate initializing the host data's fpsimd_state/sve in pKVM
Marc Zyngier
maz at kernel.org
Tue May 21 15:55:55 PDT 2024
On Tue, 21 May 2024 17:37:20 +0100,
Fuad Tabba <tabba at google.com> wrote:
>
> Now that we have introduced finalize_init_hyp_mode(), let's
> consolidate the initialization of the host_data fpsimd_state and
> sve_state.
>
> Signed-off-by: Fuad Tabba <tabba at google.com>
> ---
> arch/arm64/include/asm/kvm_host.h | 10 ++++++++--
> arch/arm64/kvm/arm.c | 18 ++++++++++++------
> arch/arm64/kvm/hyp/include/nvhe/pkvm.h | 1 -
> arch/arm64/kvm/hyp/nvhe/pkvm.c | 11 -----------
> arch/arm64/kvm/hyp/nvhe/setup.c | 1 -
> 5 files changed, 20 insertions(+), 21 deletions(-)
>
> diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
> index 7b3745ef1d73..8a170f314498 100644
> --- a/arch/arm64/include/asm/kvm_host.h
> +++ b/arch/arm64/include/asm/kvm_host.h
> @@ -536,8 +536,14 @@ struct kvm_cpu_context {
> struct kvm_host_data {
> struct kvm_cpu_context host_ctxt;
>
> - struct user_fpsimd_state *fpsimd_state; /* hyp VA */
> - struct user_sve_state *sve_state; /* hyp VA */
> + /*
> + * All pointers in this union are hyp VA.
> + * sve_state is only used in pKVM and if system_supports_sve().
> + */
> + union {
> + struct user_fpsimd_state *fpsimd_state;
> + struct user_sve_state *sve_state;
> + };
>
> /* Ownership of the FP regs */
> enum {
> diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
> index a9b1b0e9c319..a1c7e0ad6951 100644
> --- a/arch/arm64/kvm/arm.c
> +++ b/arch/arm64/kvm/arm.c
> @@ -2445,14 +2445,20 @@ static void finalize_init_hyp_mode(void)
> {
> int cpu;
>
> - if (!is_protected_kvm_enabled() || !system_supports_sve())
> - return;
> -
> for_each_possible_cpu(cpu) {
> - struct user_sve_state *sve_state;
> + if (system_supports_sve() && is_protected_kvm_enabled()) {
> + struct user_sve_state *sve_state;
>
> - sve_state = per_cpu_ptr_nvhe_sym(kvm_host_data, cpu)->sve_state;
> - per_cpu_ptr_nvhe_sym(kvm_host_data, cpu)->sve_state = kern_hyp_va(sve_state);
> + sve_state = per_cpu_ptr_nvhe_sym(kvm_host_data, cpu)->sve_state;
> + per_cpu_ptr_nvhe_sym(kvm_host_data, cpu)->sve_state =
> + kern_hyp_va(sve_state);
> + } else {
> + struct user_fpsimd_state *fpsimd_state;
> +
> + fpsimd_state = &per_cpu_ptr_nvhe_sym(kvm_host_data, cpu)->host_ctxt.fp_regs;
> + per_cpu_ptr_nvhe_sym(kvm_host_data, cpu)->fpsimd_state =
> + kern_hyp_va(fpsimd_state);
> + }
nit: SVE support and protected state do not change on a per-CPU basis,
so checking for them inside the loop is pretty counter-intuitive.
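
For example, something along these lines (untested sketch, keeping the
behaviour and the helpers/fields of your patch, only hoisting the
invariant checks out of the per-CPU loop):

static void finalize_init_hyp_mode(void)
{
	int cpu;

	if (system_supports_sve() && is_protected_kvm_enabled()) {
		for_each_possible_cpu(cpu) {
			struct user_sve_state *sve_state;

			/* Convert the kernel VA stashed at init time into a hyp VA */
			sve_state = per_cpu_ptr_nvhe_sym(kvm_host_data, cpu)->sve_state;
			per_cpu_ptr_nvhe_sym(kvm_host_data, cpu)->sve_state =
				kern_hyp_va(sve_state);
		}
	} else {
		for_each_possible_cpu(cpu) {
			struct user_fpsimd_state *fpsimd_state;

			/* Point at the host context's FP regs, as a hyp VA */
			fpsimd_state = &per_cpu_ptr_nvhe_sym(kvm_host_data, cpu)->host_ctxt.fp_regs;
			per_cpu_ptr_nvhe_sym(kvm_host_data, cpu)->fpsimd_state =
				kern_hyp_va(fpsimd_state);
		}
	}
}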
Thanks,
M.
--
Without deviation from the norm, progress is not possible.