[PATCH v6 38/44] KVM: VMX: Drop unused @entry_only param from add_atomic_switch_msr()
Mi, Dapeng
dapeng1.mi at linux.intel.com
Mon Dec 8 01:32:01 PST 2025
On 12/6/2025 8:17 AM, Sean Christopherson wrote:
> Drop the "on VM-Enter only" parameter from add_atomic_switch_msr() as it
> is no longer used, and for all intents and purposes was never used. The
> functionality was added, under embargo, by commit 989e3992d2ec
> ("x86/KVM/VMX: Extend add_atomic_switch_msr() to allow VMENTER only MSRs"),
> and then ripped out by commit 2f055947ae5e ("x86/kvm: Drop L1TF MSR list
> approach") just a few commits later.
>
> 2f055947ae5e x86/kvm: Drop L1TF MSR list approach
> 72c6d2db64fa x86/litf: Introduce vmx status variable
> 215af5499d9e cpu/hotplug: Online siblings when SMT control is turned on
> 390d975e0c4e x86/KVM/VMX: Use MSR save list for IA32_FLUSH_CMD if required
> 989e3992d2ec x86/KVM/VMX: Extend add_atomic_switch_msr() to allow VMENTER only MSRs
>
> Furthermore, it's extremely unlikely KVM will ever _need_ to load an MSR
> value via the auto-load lists only on VM-Enter. MSR writes via the lists
> aren't optimized in any way, and so the only reason to use the lists
> instead of a WRMSR is for cases where the MSR _must_ be loaded atomically
> with respect to VM-Enter (and/or VM-Exit). While one could argue that
> command MSRs, e.g. IA32_FLUSH_CMD, "need" to be done exactly at VM-Enter,
> in practice doing such flushes within a few instructions of VM-Enter is
> more than sufficient.
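
Agreed. For reference, the existing L1D flush path already handles a command
MSR this way: a plain MSR write issued shortly before VM-Enter rather than an
entry on the load list. A minimal, illustrative sketch of that shape (helper
and constant names follow the current vmx_l1d_flush() path; the hypothetical
function name and exact helper spellings are mine, not from this series):

	/*
	 * Illustrative only: write a command MSR a few instructions before
	 * VM-Enter instead of putting it on the VM-Enter MSR-load list.
	 */
	static void flush_l1d_before_vmenter(void)
	{
		if (static_cpu_has(X86_FEATURE_FLUSH_L1D))
			native_wrmsrl(MSR_IA32_FLUSH_CMD, L1D_FLUSH);
	}
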
>
> Note, the shortlog and changelog for commit 390d975e0c4e ("x86/KVM/VMX: Use
> MSR save list for IA32_FLUSH_CMD if required") are misleading and wrong.
> That commit added MSR_IA32_FLUSH_CMD to the VM-Enter _load_ list, not the
> VM-Enter save list (which doesn't exist, only VM-Exit has a store/save
> list).
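
Right, architecturally there are only three MSR lists, and VM-Enter has no
store list. For anyone reading along, the corresponding VMCS controls (field
names as in arch/x86/include/asm/vmx.h, summarized from memory):

	/*
	 * VM_ENTRY_MSR_LOAD_{ADDR,COUNT} - MSRs loaded into the guest on VM-Enter
	 * VM_EXIT_MSR_STORE_{ADDR,COUNT} - guest MSR values saved on VM-Exit
	 * VM_EXIT_MSR_LOAD_{ADDR,COUNT}  - host MSR values loaded on VM-Exit
	 *
	 * There is no VM-Enter store/save list, hence the correction above.
	 */
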
>
> Signed-off-by: Sean Christopherson <seanjc at google.com>
> ---
> arch/x86/kvm/vmx/vmx.c | 13 ++++---------
> 1 file changed, 4 insertions(+), 9 deletions(-)
>
> diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
> index a51f66d1b201..38491962b2c1 100644
> --- a/arch/x86/kvm/vmx/vmx.c
> +++ b/arch/x86/kvm/vmx/vmx.c
> @@ -1094,7 +1094,7 @@ static __always_inline void add_atomic_switch_msr_special(struct vcpu_vmx *vmx,
> }
>
> static void add_atomic_switch_msr(struct vcpu_vmx *vmx, unsigned msr,
> - u64 guest_val, u64 host_val, bool entry_only)
> + u64 guest_val, u64 host_val)
> {
> int i, j = 0;
> struct msr_autoload *m = &vmx->msr_autoload;
> @@ -1132,8 +1132,7 @@ static void add_atomic_switch_msr(struct vcpu_vmx *vmx, unsigned msr,
> }
>
> i = vmx_find_loadstore_msr_slot(&m->guest, msr);
> - if (!entry_only)
> - j = vmx_find_loadstore_msr_slot(&m->host, msr);
> + j = vmx_find_loadstore_msr_slot(&m->host, msr);
>
> if ((i < 0 && m->guest.nr == MAX_NR_LOADSTORE_MSRS) ||
> (j < 0 && m->host.nr == MAX_NR_LOADSTORE_MSRS)) {
> @@ -1148,9 +1147,6 @@ static void add_atomic_switch_msr(struct vcpu_vmx *vmx, unsigned msr,
> m->guest.val[i].index = msr;
> m->guest.val[i].value = guest_val;
>
> - if (entry_only)
> - return;
> -
> if (j < 0) {
> j = m->host.nr++;
> vmcs_write32(VM_EXIT_MSR_LOAD_COUNT, m->host.nr);
> @@ -1190,8 +1186,7 @@ static bool update_transition_efer(struct vcpu_vmx *vmx)
> if (!(guest_efer & EFER_LMA))
> guest_efer &= ~EFER_LME;
> if (guest_efer != kvm_host.efer)
> - add_atomic_switch_msr(vmx, MSR_EFER,
> - guest_efer, kvm_host.efer, false);
> + add_atomic_switch_msr(vmx, MSR_EFER, guest_efer, kvm_host.efer);
> else
> clear_atomic_switch_msr(vmx, MSR_EFER);
> return false;
> @@ -7350,7 +7345,7 @@ static void atomic_switch_perf_msrs(struct vcpu_vmx *vmx)
> clear_atomic_switch_msr(vmx, msrs[i].msr);
> else
> add_atomic_switch_msr(vmx, msrs[i].msr, msrs[i].guest,
> - msrs[i].host, false);
> + msrs[i].host);
> }
>
> static void vmx_update_hv_timer(struct kvm_vcpu *vcpu, bool force_immediate_exit)
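
With entry_only gone, every entry on the guest (VM-Enter load) list is paired
with one on the host (VM-Exit load) list, which is what keeps the i/j handling
above symmetric. For reference, a sketch of the bookkeeping involved (shapes
as in arch/x86/kvm/vmx/vmx.h and asm/vmx.h, quoted from memory; the array
bound name is taken from the diff context, so treat the exact spellings as
approximate):

	struct vmx_msr_entry {
		u32 index;
		u32 reserved;
		u64 value;
	};

	struct vmx_msrs {
		unsigned int		nr;
		struct vmx_msr_entry	val[MAX_NR_LOADSTORE_MSRS];
	};

	struct msr_autoload {
		struct vmx_msrs guest;	/* backs the VM-Enter MSR-load list */
		struct vmx_msrs host;	/* backs the VM-Exit MSR-load list  */
	};
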
LGTM.
Reviewed-by: Dapeng Mi <dapeng1.mi at linux.intel.com>