[RFC PATCH 1/4] ARM: KVM: on unhandled IO mem abort, route the call to the KVM MMIO bus

Eric Auger eric.auger at linaro.org
Thu Nov 13 06:23:05 PST 2014


On 11/13/2014 03:16 PM, Eric Auger wrote:
> On 11/13/2014 11:45 AM, Nikolay Nikolaev wrote:
>> On Mon, Nov 10, 2014 at 6:27 PM, Christoffer Dall
>> <christoffer.dall at linaro.org> wrote:
>>> On Mon, Nov 10, 2014 at 05:09:07PM +0200, Nikolay Nikolaev wrote:
>>>> Hello,
>>>>
>>>> On Fri, Mar 28, 2014 at 9:09 PM, Christoffer Dall
>>>> <christoffer.dall at linaro.org> wrote:
>>>>>
>>>>> On Thu, Mar 13, 2014 at 04:57:26PM +0100, Antonios Motakis wrote:
>>>>>> On an unhandled IO memory abort, use the kvm_io_bus_* API in order to
>>>>>> handle the MMIO access through any registered read/write callbacks. This
>>>>>> is a dependency for eventfd support (ioeventfd and irqfd).
>>>>>>
>>>>>> However, accesses to the VGIC are still left implemented independently,
>>>>>> since the kvm_io_bus_* API doesn't pass a pointer to the VCPU performing
>>>>>> the access.
>>>>>>
>>>>>> Signed-off-by: Antonios Motakis <a.motakis at virtualopensystems.com>
>>>>>> Signed-off-by: Nikolay Nikolaev <n.nikolaev at virtualopensystems.com>
>>>>>> ---
>>>>>>  arch/arm/kvm/mmio.c | 32 ++++++++++++++++++++++++++++++++
>>>>>>  virt/kvm/arm/vgic.c |  5 ++++-
>>>>>>  2 files changed, 36 insertions(+), 1 deletion(-)
>>>>>>
>>>>>> diff --git a/arch/arm/kvm/mmio.c b/arch/arm/kvm/mmio.c
>>>>>> index 4cb5a93..1d17831 100644
>>>>>> --- a/arch/arm/kvm/mmio.c
>>>>>> +++ b/arch/arm/kvm/mmio.c
>>>>>> @@ -162,6 +162,35 @@ static int decode_hsr(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
>>>>>>       return 0;
>>>>>>  }
>>>>>>
>>>>>> +/**
>>>>>> + * handle_kernel_mmio - handle an in-kernel MMIO access
>>>>>> + * @vcpu:    pointer to the vcpu performing the access
>>>>>> + * @run:     pointer to the kvm_run structure
>>>>>> + * @mmio:    pointer to the data describing the access
>>>>>> + *
>>>>>> + * returns true if the MMIO access has been performed in kernel space,
>>>>>> + * and false if it needs to be emulated in user space.
>>>>>> + */
>>>>>> +static bool handle_kernel_mmio(struct kvm_vcpu *vcpu, struct kvm_run *run,
>>>>>> +             struct kvm_exit_mmio *mmio)
>>>>>> +{
>>>>>> +     int ret;
>>>>>> +     if (mmio->is_write) {
>>>>>> +             ret = kvm_io_bus_write(vcpu->kvm, KVM_MMIO_BUS, mmio->phys_addr,
>>>>>> +                             mmio->len, &mmio->data);
>>>>>> +
>>>>>> +     } else {
>>>>>> +             ret = kvm_io_bus_read(vcpu->kvm, KVM_MMIO_BUS, mmio->phys_addr,
>>>>>> +                             mmio->len, &mmio->data);
>>>>>> +     }
>>>>>> +     if (!ret) {
>>>>>> +             kvm_prepare_mmio(run, mmio);
>>>>>> +             kvm_handle_mmio_return(vcpu, run);
>>>>>> +     }
>>>>>> +
>>>>>> +     return !ret;
>>>>>> +}
>>>>>> +
>>>>>>  int io_mem_abort(struct kvm_vcpu *vcpu, struct kvm_run *run,
>>>>>>                phys_addr_t fault_ipa)
>>>>>>  {
>>>>>> @@ -200,6 +229,9 @@ int io_mem_abort(struct kvm_vcpu *vcpu, struct kvm_run *run,
>>>>>>       if (vgic_handle_mmio(vcpu, run, &mmio))
>>>>>>               return 1;
>>>>>>
>>>>>> +     if (handle_kernel_mmio(vcpu, run, &mmio))
>>>>>> +             return 1;
>>>>>> +
>>>>
>>>>
>>>> We're reconsidering ioeventfds patchseries and we tried to evaluate
>>>> what you suggested here.
>>>>
>>>>>
>>>>> this special-casing of the vgic is now really terrible.  Is there
>>>>> anything holding you back from doing the necessary restructure of the
>>>>> kvm_bus_io_*() API instead?
>>>>
>>>> Restructuring the kvm_io_bus_ API is not a big thing (we actually did
>>>> it), but it is not directly related to these patches.
>>>> Of course it can be justified if we do it in the context of removing
>>>> vgic_handle_mmio and leaving only handle_kernel_mmio.
>>>>
>>>>>
>>>>> That would allow us to get rid of the ugly
>>>>> "Fix it!" comment in the vgic driver as well.
>>>>
>>>> Going through vgic_handle_mmio, we see that it will require a large
>>>> refactoring:
>>>>  - there are 15 MMIO ranges for the vgic now - each should be
>>>> registered as a separate device
Re-correcting Andre's address, sorry:
Hi Nikolay, Andre,

What mandates registering 15 separate devices? Isn't it possible to
register a single kvm_io_device covering the whole distributor range
[base, base + KVM_VGIC_V2_DIST_SIZE] (as the current code does), and have
the associated kvm_io_device_ops read/write callbacks locate the
addressed sub-range and do the same as the current vgic_handle_mmio does?
Isn't it done that way for the ioapic? What am I missing?

Thanks

Best Regards

Eric
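For illustration, the single-device dispatch Eric describes can be sketched
in plain user-space C. This is not kernel code: the types, the range table
and the register handler below are all simplified stand-ins, and the real
vgic has ~15 ranges rather than the one shown.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Simplified stand-in for the kernel's struct kvm_exit_mmio. */
struct kvm_exit_mmio {
	uint64_t phys_addr;
	uint32_t len;
	uint8_t  data[8];
	int      is_write;
};

/* One entry per distributor register block, much like the range
 * table vgic_handle_mmio already walks internally. */
struct mmio_range {
	uint64_t base;   /* offset within the distributor */
	uint64_t len;
	int (*handle)(struct kvm_exit_mmio *mmio, uint64_t offset);
};

static int handle_ctlr(struct kvm_exit_mmio *mmio, uint64_t offset)
{
	static uint32_t ctlr;   /* toy register backing store */
	(void)offset;
	if (mmio->is_write)
		memcpy(&ctlr, mmio->data, sizeof(ctlr));
	else
		memcpy(mmio->data, &ctlr, sizeof(ctlr));
	return 0;
}

static const struct mmio_range dist_ranges[] = {
	{ 0x000, 0x004, handle_ctlr },
	/* ...the other ranges would follow here... */
};

/* The single entry point the one registered device would use:
 * locate the addressed sub-range, then dispatch to its handler. */
static int dist_access(uint64_t dist_base, struct kvm_exit_mmio *mmio)
{
	uint64_t offset = mmio->phys_addr - dist_base;
	size_t i;

	for (i = 0; i < sizeof(dist_ranges) / sizeof(dist_ranges[0]); i++) {
		const struct mmio_range *r = &dist_ranges[i];
		if (offset >= r->base && offset < r->base + r->len)
			return r->handle(mmio, offset - r->base);
	}
	return -1;   /* not a register we emulate */
}
```

With this shape, only one device spans the whole distributor window, and
the per-range lookup stays an internal detail of its read/write callbacks.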
>>>>  - the handler of each range should be split into read and write
>>>>  - all handlers take a 'struct kvm_exit_mmio' and pass it to
>>>> 'vgic_reg_access', 'mmio_data_read' and 'mmio_data_write'
>>>>
>>>> To sum up - if we do this refactoring of the vgic's MMIO handling plus
>>>> the kvm_io_bus_ API gaining a 'vcpu' argument, we'll get much cleaner
>>>> vgic code, and as a bonus we'll get ioeventfd capabilities.
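The read/write split mentioned above can be sketched in user-space C. The
ops structure below only mirrors the general shape of the kernel's
kvm_io_device_ops; it is not the real definition, and the register and
function names are invented for the example.

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Before the refactoring: one handler branches on is_write internally
 * (the vgic_handle_mmio style). After: two callbacks, one per
 * direction, as the bus API expects. */

static uint32_t reg_enable;   /* stand-in for one vgic register */

static int enable_read(uint64_t offset, int len, void *val)
{
	(void)offset;
	memcpy(val, &reg_enable, len);
	return 0;
}

static int enable_write(uint64_t offset, int len, const void *val)
{
	(void)offset;
	memcpy(&reg_enable, val, len);
	return 0;
}

/* Ops table the registered device would carry; this only mimics
 * the read/write split of the kernel's kvm_io_device_ops. */
struct dev_ops {
	int (*read)(uint64_t offset, int len, void *val);
	int (*write)(uint64_t offset, int len, const void *val);
};

static const struct dev_ops enable_ops = {
	.read  = enable_read,
	.write = enable_write,
};
```

Each of the existing combined handlers would be split this way, which is
where most of the churn in the refactoring comes from.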
>>>>
>>>> We have 3 questions:
>>>>  - is the kvm_io_bus_ API gaining a 'vcpu' argument acceptable for the
>>>> other architectures too?
>>>>  - is this huge vgic MMIO handling redesign acceptable/desired (it
>>>> touches a lot of code)?
>>>>  - is there a way that ioeventfd is accepted leaving vgic.c in its
>>>> current state?
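On the first question, the signature change under discussion can be
illustrated with a small user-space model. The structures and function
names below are invented stand-ins, not the kernel's; the point is only
that the vcpu performing the access is threaded from the bus call down to
the device callback, which is what the vgic needs.

```c
#include <assert.h>
#include <stdint.h>

struct kvm_vcpu { int vcpu_id; };

/* A device write callback that, unlike a plain bus callback,
 * receives the vcpu performing the access. */
typedef int (*mmio_write_fn)(struct kvm_vcpu *vcpu, uint64_t offset,
			     int len, const void *val);

struct io_device {
	uint64_t base, len;
	mmio_write_fn write;
};

static int last_vcpu_id = -1;

static int demo_write(struct kvm_vcpu *vcpu, uint64_t offset,
		      int len, const void *val)
{
	(void)offset; (void)len; (void)val;
	last_vcpu_id = vcpu->vcpu_id;   /* handler now knows who accessed */
	return 0;
}

/* Sketch of a bus write gaining a vcpu parameter and forwarding
 * it to the matching device. */
static int io_bus_write(struct kvm_vcpu *vcpu, struct io_device *devs,
			int ndevs, uint64_t addr, int len, const void *val)
{
	int i;
	for (i = 0; i < ndevs; i++)
		if (addr >= devs[i].base && addr < devs[i].base + devs[i].len)
			return devs[i].write(vcpu, addr - devs[i].base,
					     len, val);
	return -1;   /* no device claims this address */
}
```

The reads would be threaded the same way; once the vcpu reaches the
callback, the vgic no longer needs its special-cased path in io_mem_abort.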
>>>>
>>> Not sure how the latter question is relevant to this, but check with
>>> Andre who recently looked at this as well and decided that for GICv3 the
>>> only sane thing was to remove that comment for the gic.
>> @Andre - what's your experience with the GICv3 and MMIO handling,
>> anything specific?
>>>
>>> I don't recall the details of what you were trying to accomplish here
>>> (it's been 8 months or so), but surely the vgic handling code should
>>> *somehow* be integrated into the handle_kernel_mmio (like Paolo
>>> suggested), unless you come back and tell me that that would involve a
>>> complete rewrite of the vgic code.
>> I'm experimenting now - it's not exactly a rewrite of the whole vgic
>> code, but it will touch a lot of it - all MMIO access handlers and the
>> supporting functions.
>> We're ready to spend the effort. My question is - is this acceptable?
>>
>> regards,
>> Nikolay Nikolaev
>> Virtual Open Systems
>>>
>>> -Christoffer



