[PATCH v4 09/18] KVM: arm64: selftests: Add guest support to get the vcpuid

Andrew Jones drjones at redhat.com
Mon Sep 13 00:25:45 PDT 2021


On Fri, Sep 10, 2021 at 11:03:58AM -0700, Raghavendra Rao Ananta wrote:
> On Fri, Sep 10, 2021 at 1:10 AM Andrew Jones <drjones at redhat.com> wrote:
> >
> > On Thu, Sep 09, 2021 at 10:10:56AM -0700, Raghavendra Rao Ananta wrote:
> > > On Thu, Sep 9, 2021 at 12:56 AM Andrew Jones <drjones at redhat.com> wrote:
> > > >
> > > > On Thu, Sep 09, 2021 at 01:38:09AM +0000, Raghavendra Rao Ananta wrote:
> > ...
> > > > > +     for (i = 0; i < KVM_MAX_VCPUS; i++) {
> > > > > +             vcpuid = vcpuid_map[i].vcpuid;
> > > > > +             GUEST_ASSERT_1(vcpuid != VM_VCPUID_MAP_INVAL, mpidr);
> > > >
> > > > We don't want this assert if it's possible to have sparse maps, which
> > > > it probably isn't ever going to be, but...
> > > >
> > > If you look at the way the array is arranged, the element with
> > > VM_VCPUID_MAP_INVAL acts as a sentinel for us and all the proper
> > > elements would lie before this. So, I don't think we'd have a sparse
> > > array here.
> >
> > If we switch to my suggestion of adding map entries at vcpu-add time and
> > removing them at vcpu-rm time, then the array may become sparse depending
> > on the order of removals.
> >
> Oh, I get it now. But like you mentioned, we add entries to the map
> while the vCPUs are getting added and then sync_global_to_guest()
> later. This seems like a lot of maintenance, unless I'm interpreting
> it wrong or not seeing an advantage.

The advantage is that you don't need to create all vcpus before calling
the map init function. While it's true that we'll still require a call
after adding all vcpus if we want to export the map to the guest, i.e.
sync_global_to_guest, we'll never have to worry about the map being
out of sync wrt vcpus on the host side, and there's no need to call
sync_global_to_guest at all when only the test needs the map and the
guest doesn't need to access it.
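Roughly, I had something like this in mind (untested; the helper names
are only illustrative, not existing lib API), called from the vcpu-add
and vcpu-rm paths so the host-side map can never go stale:

/*
 * Keep the host-side map in sync as vcpus come and go. The guest still
 * only sees the map after an explicit sync_global_to_guest(vm, cpuid_map).
 */
static void vcpuid_map_add(struct kvm_vm *vm, int vcpuid)
{
	int i;

	for (i = 0; i < KVM_MAX_VCPUS; i++) {
		if (cpuid_map[i].vcpuid == VM_CPUID_MAP_INVAL) {
			cpuid_map[i].vcpuid = vcpuid;
			get_reg(vm, vcpuid, KVM_ARM64_SYS_REG(SYS_MPIDR_EL1),
				&cpuid_map[i].hw_cpuid);
			cpuid_map[i].hw_cpuid &= MPIDR_HWID_BITMASK;
			return;
		}
	}
	TEST_ASSERT(false, "No free cpuid_map slot for vcpu %d", vcpuid);
}

static void vcpuid_map_rm(int vcpuid)
{
	int i;

	for (i = 0; i < KVM_MAX_VCPUS; i++) {
		if (cpuid_map[i].vcpuid == vcpuid) {
			/* Leave a hole; the guest-side lookup must skip it. */
			cpuid_map[i].vcpuid = VM_CPUID_MAP_INVAL;
			return;
		}
	}
}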

> I like your idea of coming up with an arch-independent interface, however.
> So I modified it to be similar to the familiar ucall interface that we
> have; it does everything in one shot to avoid any confusion:

Right, ucall_init does call sync_global_to_guest, but it's the only
lib function that does so far. Everything else exported to the guest
must be synced explicitly.
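E.g., in a test (shared_flag here is just a made-up test global):

	ucall_init(vm, NULL);			/* syncs its own state */

	/* any other global the guest reads must be synced by hand */
	shared_flag = true;
	sync_global_to_guest(vm, shared_flag);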

> 
> diff --git a/tools/testing/selftests/kvm/include/kvm_util.h b/tools/testing/selftests/kvm/include/kvm_util.h
> index 010b59b13917..0e87cb0c980b 100644
> --- a/tools/testing/selftests/kvm/include/kvm_util.h
> +++ b/tools/testing/selftests/kvm/include/kvm_util.h
> @@ -400,4 +400,24 @@ uint64_t get_ucall(struct kvm_vm *vm, uint32_t vcpu_id, struct ucall *uc);
>  int vm_get_stats_fd(struct kvm_vm *vm);
>  int vcpu_get_stats_fd(struct kvm_vm *vm, uint32_t vcpuid);
> 
> +#define VM_CPUID_MAP_INVAL -1
> +
> +struct vm_cpuid_map {
> +       uint64_t hw_cpuid;
> +       int vcpuid;
> +};
> +
> +/*
> + * Create a vcpuid:hw_cpuid map and export it to the guest
> + *
> + * Input Args:
> + *   vm - KVM VM.
> + *
> + * Output Args: None
> + *
> + * Must be called after all the vCPUs are added to the VM
> + */
> +void vm_cpuid_map_init(struct kvm_vm *vm);
> +int guest_get_vcpuid(void);
> +
>  #endif /* SELFTEST_KVM_UTIL_H */
> diff --git a/tools/testing/selftests/kvm/lib/aarch64/processor.c b/tools/testing/selftests/kvm/lib/aarch64/processor.c
> index db64ee206064..e796bb3984a6 100644
> --- a/tools/testing/selftests/kvm/lib/aarch64/processor.c
> +++ b/tools/testing/selftests/kvm/lib/aarch64/processor.c
> @@ -16,6 +16,8 @@
> 
>  static vm_vaddr_t exception_handlers;
> 
> +static struct vm_cpuid_map cpuid_map[KVM_MAX_VCPUS];
> +
>  static uint64_t page_align(struct kvm_vm *vm, uint64_t v)
>  {
>         return (v + vm->page_size) & ~(vm->page_size - 1);
> @@ -426,3 +428,42 @@ void vm_install_exception_handler(struct kvm_vm *vm, int vector,
>         assert(vector < VECTOR_NUM);
>         handlers->exception_handlers[vector][0] = handler;
>  }
> +
> +void vm_cpuid_map_init(struct kvm_vm *vm)
> +{
> +       int i = 0;
> +       struct vcpu *vcpu;
> +       struct vm_cpuid_map *map;
> +
> +       TEST_ASSERT(!list_empty(&vm->vcpus), "vCPUs must have been created\n");
> +
> +       list_for_each_entry(vcpu, &vm->vcpus, list) {
> +               map = &cpuid_map[i++];
> +               map->vcpuid = vcpu->id;
> +               get_reg(vm, vcpu->id, KVM_ARM64_SYS_REG(SYS_MPIDR_EL1), &map->hw_cpuid);
> +               map->hw_cpuid &= MPIDR_HWID_BITMASK;
> +       }
> +
> +       if (i < KVM_MAX_VCPUS)
> +               cpuid_map[i].vcpuid = VM_CPUID_MAP_INVAL;
> +
> +       sync_global_to_guest(vm, cpuid_map);
> +}
> +
> +int guest_get_vcpuid(void)
> +{
> +       int i, vcpuid;
> +       uint64_t mpidr = read_sysreg(mpidr_el1) & MPIDR_HWID_BITMASK;
> +
> +       for (i = 0; i < KVM_MAX_VCPUS; i++) {
> +               vcpuid = cpuid_map[i].vcpuid;
> +
> +               /* Was this vCPU added to the VM after the map was initialized? */
> +               GUEST_ASSERT_1(vcpuid != VM_CPUID_MAP_INVAL, mpidr);
> +
> +               if (mpidr == cpuid_map[i].hw_cpuid)
> +                       return vcpuid;
> +       }
> +
> +       /* We should not be reaching here */
> +       GUEST_ASSERT_1(0, mpidr);
> +       return -1;
> +}
> 
> This would ensure that we don't have a sparse array and can use the
> last non-vCPU element as a sentinel node.
> If you still feel preparing the map as and when the vCPUs are created
> makes more sense, I can go for it.

Yup, I think that's still my preference. We don't really need a
sentinel node for such a small array. We can just do

static struct vm_cpuid_map cpuid_map[KVM_MAX_VCPUS] = {
	[0 ... KVM_MAX_VCPUS - 1] = { .vcpuid = VM_CPUID_MAP_INVAL },
};

to ensure all unused entries start out invalid. Then, after a full loop,
if we didn't find a valid entry, we assert, which easily supports a
sparse array.
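
i.e. on the guest side, something like this (untested) copes with
holes in the map:

int guest_get_vcpuid(void)
{
	int i;
	uint64_t mpidr = read_sysreg(mpidr_el1) & MPIDR_HWID_BITMASK;

	for (i = 0; i < KVM_MAX_VCPUS; i++) {
		/* Skip unused/removed slots; the map may be sparse. */
		if (cpuid_map[i].vcpuid == VM_CPUID_MAP_INVAL)
			continue;
		if (cpuid_map[i].hw_cpuid == mpidr)
			return cpuid_map[i].vcpuid;
	}

	/* No entry for this MPIDR: the map wasn't synced or is stale. */
	GUEST_ASSERT_1(0, mpidr);
	return -1;
}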

Also, please don't forget that guest_get_vcpuid() can be common for all
architectures. We just need an arch-specific call for get_hw_cpuid().
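i.e. the only arm64-specific piece would be something along these
lines (the name just follows the suggestion above; untested), with the
lookup loop itself living in common code and calling it instead of
reading mpidr_el1 directly:

/* lib/aarch64/processor.c */
uint64_t guest_get_hw_cpuid(void)
{
	return read_sysreg(mpidr_el1) & MPIDR_HWID_BITMASK;
}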

Thanks,
drew

> 
> Regards,
> Raghavendra
> > Thanks,
> > drew
> >
> 



