[PATCH v7 5/6] KVM: x86: Refactor tsc synchronization code
Sean Christopherson
seanjc at google.com
Thu Sep 2 12:21:41 PDT 2021
On Mon, Aug 16, 2021, Oliver Upton wrote:
> Refactor kvm_synchronize_tsc to make a new function that allows callers
> to specify TSC parameters (offset, value, nanoseconds, etc.) explicitly
> for the sake of participating in TSC synchronization.
>
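(Side note for anyone skimming the thread: as I read the hunk below, the shape of
the new helper is roughly the sketch that follows; the name __kvm_synchronize_tsc
and the exact parameter order are my guess from the variables used, not taken
verbatim from the patch.)

	/*
	 * Sketch only: helper name and parameter order are guesses based on
	 * the variables referenced in the hunk below (offset, tsc, ns, matched).
	 */
	static void __kvm_synchronize_tsc(struct kvm_vcpu *vcpu, u64 offset,
					  u64 tsc, u64 ns, bool matched);
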
> Signed-off-by: Oliver Upton <oupton at google.com>
> ---
> +	struct kvm *kvm = vcpu->kvm;
> +	bool already_matched;
> +
> +	lockdep_assert_held(&kvm->arch.tsc_write_lock);
> +
> +	already_matched =
> +		(vcpu->arch.this_tsc_generation == kvm->arch.cur_tsc_generation);
> +
...
> +	if (!matched) {
> +		/*
> +		 * We split periods of matched TSC writes into generations.
> +		 * For each generation, we track the original measured
> +		 * nanosecond time, offset, and write, so if TSCs are in
> +		 * sync, we can match exact offset, and if not, we can match
> +		 * exact software computation in compute_guest_tsc()
> +		 *
> +		 * These values are tracked in kvm->arch.cur_xxx variables.
> +		 */
> +		kvm->arch.cur_tsc_generation++;
> +		kvm->arch.cur_tsc_nsec = ns;
> +		kvm->arch.cur_tsc_write = tsc;
> +		kvm->arch.cur_tsc_offset = offset;
> +
> +		spin_lock(&kvm->arch.pvclock_gtod_sync_lock);
> +		kvm->arch.nr_vcpus_matched_tsc = 0;
> +	} else if (!already_matched) {
> +		spin_lock(&kvm->arch.pvclock_gtod_sync_lock);
> +		kvm->arch.nr_vcpus_matched_tsc++;
> +	}
> +
> +	kvm_track_tsc_matching(vcpu);
> +	spin_unlock(&kvm->arch.pvclock_gtod_sync_lock);
This unlock is imbalanced if matched and already_matched are both true. It's not
immediately obvious that that _can't_ happen, and if it truly can't happen, then
taking the lock conditionally is pointless (because it's not actually conditional).
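To illustrate, here's the quoted hunk again with the matched && already_matched
case annotated (assuming, for the sake of argument, that the case can be reached):

	/* Hypothetical case: matched == true && already_matched == true */
	if (!matched) {				/* false, branch skipped */
		...
		spin_lock(&kvm->arch.pvclock_gtod_sync_lock);
		kvm->arch.nr_vcpus_matched_tsc = 0;
	} else if (!already_matched) {		/* also false, branch skipped */
		spin_lock(&kvm->arch.pvclock_gtod_sync_lock);
		kvm->arch.nr_vcpus_matched_tsc++;
	}

	kvm_track_tsc_matching(vcpu);
	spin_unlock(&kvm->arch.pvclock_gtod_sync_lock);	/* unlock with no matching lock */
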
The previous code took the lock unconditionally, and I don't see a strong argument
to change that, e.g. holding it for a few extra cycles while kvm->arch.cur_tsc_*
are updated is unlikely to be noticeable.
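I.e. something like this (just a sketch; it assumes the plain spin_lock used
elsewhere in this patch and omits any irqsave/irqrestore handling):

	spin_lock(&kvm->arch.pvclock_gtod_sync_lock);
	if (!matched) {
		/*
		 * The generation bookkeeping now sits under the lock; holding
		 * it for these few extra stores shouldn't matter.
		 */
		kvm->arch.cur_tsc_generation++;
		kvm->arch.cur_tsc_nsec = ns;
		kvm->arch.cur_tsc_write = tsc;
		kvm->arch.cur_tsc_offset = offset;
		kvm->arch.nr_vcpus_matched_tsc = 0;
	} else if (!already_matched) {
		kvm->arch.nr_vcpus_matched_tsc++;
	}

	kvm_track_tsc_matching(vcpu);
	spin_unlock(&kvm->arch.pvclock_gtod_sync_lock);
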
If you really want to delay taking the lock, you could do:
	if (!matched) {
		kvm->arch.cur_tsc_generation++;
		kvm->arch.cur_tsc_nsec = ns;
		kvm->arch.cur_tsc_write = tsc;
		kvm->arch.cur_tsc_offset = offset;
	}

	spin_lock(&kvm->arch.pvclock_gtod_sync_lock);
	if (!matched)
		kvm->arch.nr_vcpus_matched_tsc = 0;
	else if (!already_matched)
		kvm->arch.nr_vcpus_matched_tsc++;
	spin_unlock(&kvm->arch.pvclock_gtod_sync_lock);
or, if you want to get greedy:
	if (!matched || !already_matched) {
		spin_lock(&kvm->arch.pvclock_gtod_sync_lock);
		if (!matched)
			kvm->arch.nr_vcpus_matched_tsc = 0;
		else
			kvm->arch.nr_vcpus_matched_tsc++;
		spin_unlock(&kvm->arch.pvclock_gtod_sync_lock);
	}
Though I'm not sure the minor complexity is worth it just to avoid spinlock contention.