[PATCH v6 37/44] KVM: VMX: Dedup code for removing MSR from VMCS's auto-load list

Mi, Dapeng dapeng1.mi at linux.intel.com
Mon Dec 8 01:29:53 PST 2025


On 12/6/2025 8:17 AM, Sean Christopherson wrote:
> Add a helper to remove an MSR from an auto-{load,store} list to dedup the
> msr_autoload code, and in anticipation of adding similar functionality for
> msr_autostore.
>
> No functional change intended.
>
> Signed-off-by: Sean Christopherson <seanjc at google.com>
> ---
>  arch/x86/kvm/vmx/vmx.c | 31 ++++++++++++++++---------------
>  1 file changed, 16 insertions(+), 15 deletions(-)
>
> diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
> index 52bcb817cc15..a51f66d1b201 100644
> --- a/arch/x86/kvm/vmx/vmx.c
> +++ b/arch/x86/kvm/vmx/vmx.c
> @@ -1040,9 +1040,22 @@ static int vmx_find_loadstore_msr_slot(struct vmx_msrs *m, u32 msr)
>  	return -ENOENT;
>  }
>  
> +static void vmx_remove_auto_msr(struct vmx_msrs *m, u32 msr,
> +				unsigned long vmcs_count_field)
> +{
> +	int i;
> +
> +	i = vmx_find_loadstore_msr_slot(m, msr);
> +	if (i < 0)
> +		return;
> +
> +	--m->nr;
> +	m->val[i] = m->val[m->nr];

Sometimes the order in which MSRs are written does matter, e.g., the
PERF_GLOBAL_CTRL MSR should be written last, after all other PMU MSRs.
So directly moving the last MSR entry into the cleared slot could break
the MSR write ordering and, at least in theory, cause issues.

I know this won't actually cause an issue today, since the vPMU doesn't
currently use the MSR auto-load feature to save any PMU MSR, but it's
still unsafe for future uses.

I'm not sure whether it's worth doing a strict order-preserving shift of
the MSR entries right now. Perhaps we could at least add a warning to
alert future users.
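If we do decide to preserve ordering, an (untested) sketch along these
lines should work, shifting the remaining entries down instead of
swapping in the tail entry:

	static void vmx_remove_auto_msr(struct vmx_msrs *m, u32 msr,
					unsigned long vmcs_count_field)
	{
		int i;

		i = vmx_find_loadstore_msr_slot(m, msr);
		if (i < 0)
			return;

		--m->nr;
		/* Shift the remaining entries down to preserve their order. */
		memmove(&m->val[i], &m->val[i + 1],
			(m->nr - i) * sizeof(m->val[0]));
		vmcs_write32(vmcs_count_field, m->nr);
	}

The auto-{load,store} lists are capped at a handful of entries, so the
extra memmove() cost should be negligible.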

Thanks.


> +	vmcs_write32(vmcs_count_field, m->nr);
> +}
> +
>  static void clear_atomic_switch_msr(struct vcpu_vmx *vmx, unsigned msr)
>  {
> -	int i;
>  	struct msr_autoload *m = &vmx->msr_autoload;
>  
>  	switch (msr) {
> @@ -1063,21 +1076,9 @@ static void clear_atomic_switch_msr(struct vcpu_vmx *vmx, unsigned msr)
>  		}
>  		break;
>  	}
> -	i = vmx_find_loadstore_msr_slot(&m->guest, msr);
> -	if (i < 0)
> -		goto skip_guest;
> -	--m->guest.nr;
> -	m->guest.val[i] = m->guest.val[m->guest.nr];
> -	vmcs_write32(VM_ENTRY_MSR_LOAD_COUNT, m->guest.nr);
>  
> -skip_guest:
> -	i = vmx_find_loadstore_msr_slot(&m->host, msr);
> -	if (i < 0)
> -		return;
> -
> -	--m->host.nr;
> -	m->host.val[i] = m->host.val[m->host.nr];
> -	vmcs_write32(VM_EXIT_MSR_LOAD_COUNT, m->host.nr);
> +	vmx_remove_auto_msr(&m->guest, msr, VM_ENTRY_MSR_LOAD_COUNT);
> +	vmx_remove_auto_msr(&m->host, msr, VM_EXIT_MSR_LOAD_COUNT);
>  }
>  
>  static __always_inline void add_atomic_switch_msr_special(struct vcpu_vmx *vmx,


