[PATCH v3 07/18] KVM: arm64: timers: Allow userspace to set the global counter offset
Marc Zyngier
maz at kernel.org
Thu Mar 30 03:15:33 PDT 2023
On Thu, 30 Mar 2023 07:26:48 +0100,
Oliver Upton <oliver.upton at linux.dev> wrote:
>
> Hi Marc,
>
> On Fri, Mar 24, 2023 at 02:46:53PM +0000, Marc Zyngier wrote:
> > And this is the moment you have all been waiting for: setting the
> > counter offset from userspace.
> >
> > We expose a brand new capability that reports the ability to set
> > the offset for both the virtual and physical sides.
> >
> > In keeping with the architecture, the offset is expressed as
> > a delta that is subtracted from the physical counter value.
> >
> > Once this new API is used, there is no going back: the counters
> > can no longer be written to set the offsets implicitly (such
> > writes are ignored instead).
> >
> > Reviewed-by: Colton Lewis <coltonlewis at google.com>
> > Signed-off-by: Marc Zyngier <maz at kernel.org>
> > ---
> > arch/arm64/include/asm/kvm_host.h | 4 +++
> > arch/arm64/include/uapi/asm/kvm.h | 9 ++++++
> > arch/arm64/kvm/arch_timer.c | 46 +++++++++++++++++++++++++++----
> > arch/arm64/kvm/arm.c | 8 ++++++
> > include/uapi/linux/kvm.h | 3 ++
> > 5 files changed, 65 insertions(+), 5 deletions(-)
> >
> > diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
> > index 002a10cbade2..116233a390e9 100644
> > --- a/arch/arm64/include/asm/kvm_host.h
> > +++ b/arch/arm64/include/asm/kvm_host.h
> > @@ -221,6 +221,8 @@ struct kvm_arch {
> > #define KVM_ARCH_FLAG_EL1_32BIT 4
> > /* PSCI SYSTEM_SUSPEND enabled for the guest */
> > #define KVM_ARCH_FLAG_SYSTEM_SUSPEND_ENABLED 5
> > + /* VM counter offset */
> > +#define KVM_ARCH_FLAG_VM_COUNTER_OFFSET 6
> >
> > unsigned long flags;
> >
> > @@ -1010,6 +1012,8 @@ int kvm_arm_vcpu_arch_has_attr(struct kvm_vcpu *vcpu,
> >
> > long kvm_vm_ioctl_mte_copy_tags(struct kvm *kvm,
> > struct kvm_arm_copy_mte_tags *copy_tags);
> > +int kvm_vm_ioctl_set_counter_offset(struct kvm *kvm,
> > + struct kvm_arm_counter_offset *offset);
> >
> > /* Guest/host FPSIMD coordination helpers */
> > int kvm_arch_vcpu_run_map_fp(struct kvm_vcpu *vcpu);
> > diff --git a/arch/arm64/include/uapi/asm/kvm.h b/arch/arm64/include/uapi/asm/kvm.h
> > index f8129c624b07..12fb0d8a760a 100644
> > --- a/arch/arm64/include/uapi/asm/kvm.h
> > +++ b/arch/arm64/include/uapi/asm/kvm.h
> > @@ -198,6 +198,15 @@ struct kvm_arm_copy_mte_tags {
> > __u64 reserved[2];
> > };
> >
> > +/*
> > + * Counter/Timer offset structure. Describes the virtual/physical offset.
> > + * To be used with KVM_ARM_SET_COUNTER_OFFSET.
> > + */
> > +struct kvm_arm_counter_offset {
> > + __u64 counter_offset;
> > + __u64 reserved;
> > +};
> > +
> > #define KVM_ARM_TAGS_TO_GUEST 0
> > #define KVM_ARM_TAGS_FROM_GUEST 1
> >
> > diff --git a/arch/arm64/kvm/arch_timer.c b/arch/arm64/kvm/arch_timer.c
> > index bb64a71ae193..25625e1d6d89 100644
> > --- a/arch/arm64/kvm/arch_timer.c
> > +++ b/arch/arm64/kvm/arch_timer.c
> > @@ -851,9 +851,11 @@ void kvm_timer_vcpu_init(struct kvm_vcpu *vcpu)
> > ptimer->vcpu = vcpu;
> > ptimer->offset.vm_offset = &vcpu->kvm->arch.timer_data.poffset;
> >
> > - /* Synchronize cntvoff across all vtimers of a VM. */
> > - timer_set_offset(vtimer, kvm_phys_timer_read());
> > - timer_set_offset(ptimer, 0);
> > + /* Synchronize offsets across timers of a VM if not already provided */
> > + if (!test_bit(KVM_ARCH_FLAG_VM_COUNTER_OFFSET, &vcpu->kvm->arch.flags)) {
> > + timer_set_offset(vtimer, kvm_phys_timer_read());
> > + timer_set_offset(ptimer, 0);
> > + }
> >
> > hrtimer_init(&timer->bg_timer, CLOCK_MONOTONIC, HRTIMER_MODE_ABS_HARD);
> > timer->bg_timer.function = kvm_bg_timer_expire;
> > @@ -897,8 +899,11 @@ int kvm_arm_timer_set_reg(struct kvm_vcpu *vcpu, u64 regid, u64 value)
> > kvm_arm_timer_write(vcpu, timer, TIMER_REG_CTL, value);
> > break;
> > case KVM_REG_ARM_TIMER_CNT:
> > - timer = vcpu_vtimer(vcpu);
> > - timer_set_offset(timer, kvm_phys_timer_read() - value);
> > + if (!test_bit(KVM_ARCH_FLAG_VM_COUNTER_OFFSET,
> > + &vcpu->kvm->arch.flags)) {
> > + timer = vcpu_vtimer(vcpu);
> > + timer_set_offset(timer, kvm_phys_timer_read() - value);
> > + }
> > break;
> > case KVM_REG_ARM_TIMER_CVAL:
> > timer = vcpu_vtimer(vcpu);
> > @@ -908,6 +913,13 @@ int kvm_arm_timer_set_reg(struct kvm_vcpu *vcpu, u64 regid, u64 value)
> > timer = vcpu_ptimer(vcpu);
> > kvm_arm_timer_write(vcpu, timer, TIMER_REG_CTL, value);
> > break;
> > + case KVM_REG_ARM_PTIMER_CNT:
> > + if (!test_bit(KVM_ARCH_FLAG_VM_COUNTER_OFFSET,
> > + &vcpu->kvm->arch.flags)) {
> > + timer = vcpu_ptimer(vcpu);
> > + timer_set_offset(timer, kvm_phys_timer_read() - value);
> > + }
> > + break;
> > case KVM_REG_ARM_PTIMER_CVAL:
> > timer = vcpu_ptimer(vcpu);
> > kvm_arm_timer_write(vcpu, timer, TIMER_REG_CVAL, value);
> > @@ -1443,3 +1455,27 @@ int kvm_arm_timer_has_attr(struct kvm_vcpu *vcpu, struct kvm_device_attr *attr)
> >
> > return -ENXIO;
> > }
> > +
> > +int kvm_vm_ioctl_set_counter_offset(struct kvm *kvm,
> > + struct kvm_arm_counter_offset *offset)
> > +{
> > + if (offset->reserved)
> > + return -EINVAL;
> > +
> > + if (!lock_all_vcpus(kvm))
> > + return -EBUSY;
>
> Similar to what you had mentioned over on the lock (un)inversion series,
> doesn't this risk racing with vCPU creation w/o holding the kvm->lock?
Huh, I was under the impression that kvm_vm_ioctl() would take the
kvm->lock, but it really doesn't! I'll fix that right away.
> Alternatively you could require this ioctl to be issued before any vCPUs
> are created to avoid the need to lock them all here. But, if you see
> value in poking this at runtime then fine by me.
I think there is some value in allowing the offset to be adjusted at
the last possible point, right before a vcpu is run.
Thanks for the heads up!
M.
--
Without deviation from the norm, progress is not possible.