[PATCH v4 04/19] arm/arm64: KVM: wrap 64 bit MMIO accesses with two 32 bit ones

Christoffer Dall <christoffer.dall@linaro.org>
Mon Nov 24 06:40:47 PST 2014


On Mon, Nov 24, 2014 at 01:50:27PM +0000, Andre Przywara wrote:
> Hi Christoffer,
> 
> On 23/11/14 09:42, Christoffer Dall wrote:
> > On Fri, Nov 14, 2014 at 10:07:48AM +0000, Andre Przywara wrote:
> >> Some GICv3 registers can and will be accessed as 64 bit registers.
> >> Currently the register handling code can only deal with 32 bit
> >> accesses, so we do two consecutive calls to cover this.
> >>
> >> Signed-off-by: Andre Przywara <andre.przywara at arm.com>
> >> ---
> >> Changelog v3...v4:
> >> - add comment explaining little endian handling
> >>
> >>  virt/kvm/arm/vgic.c |   51 ++++++++++++++++++++++++++++++++++++++++++++++++---
> >>  1 file changed, 48 insertions(+), 3 deletions(-)
> >>
> >> diff --git a/virt/kvm/arm/vgic.c b/virt/kvm/arm/vgic.c
> >> index 5eee3de..dba51e4 100644
> >> --- a/virt/kvm/arm/vgic.c
> >> +++ b/virt/kvm/arm/vgic.c
> >> @@ -1033,6 +1033,51 @@ static bool vgic_validate_access(const struct vgic_dist *dist,
> >>  }
> >>  
> >>  /*
> >> + * Call the respective handler function for the given range.
> >> + * We split up any 64 bit accesses into two consecutive 32 bit
> >> + * handler calls and merge the result afterwards.
> >> + * We do this in a little endian fashion regardless of the host's
> >> + * or guest's endianness, because the GIC is always LE and the rest of
> >> + * the code (vgic_reg_access) also puts it in a LE fashion already.
> >> + */
> >> +static bool call_range_handler(struct kvm_vcpu *vcpu,
> >> +			       struct kvm_exit_mmio *mmio,
> >> +			       unsigned long offset,
> >> +			       const struct mmio_range *range)
> >> +{
> >> +	u32 *data32 = (void *)mmio->data;
> >> +	struct kvm_exit_mmio mmio32;
> >> +	bool ret;
> >> +
> >> +	if (likely(mmio->len <= 4))
> >> +		return range->handle_mmio(vcpu, mmio, offset);
> >> +
> >> +	/*
> >> +	 * Any access bigger than 4 bytes (that we currently handle in KVM)
> >> +	 * is actually 8 bytes long, caused by a 64-bit access
> >> +	 */
> >> +
> >> +	mmio32.len = 4;
> >> +	mmio32.is_write = mmio->is_write;
> >> +
> >> +	mmio32.phys_addr = mmio->phys_addr + 4;
> >> +	if (mmio->is_write)
> >> +		*(u32 *)mmio32.data = data32[1];
> >> +	ret = range->handle_mmio(vcpu, &mmio32, offset + 4);
> >> +	if (!mmio->is_write)
> >> +		data32[1] = *(u32 *)mmio32.data;
> >> +
> >> +	mmio32.phys_addr = mmio->phys_addr;
> >> +	if (mmio->is_write)
> >> +		*(u32 *)mmio32.data = data32[0];
> >> +	ret |= range->handle_mmio(vcpu, &mmio32, offset);
> > 
> > nit: if handle_mmio returns multiple error codes, we will now not
> > (necessarily) be preserving either, so you may just want to do a check
> > on ret above and return early in the case of error.  Only worth it if
> > you respin anyway.
> 
> Mmh, if I read this correctly, the return value actually becomes
> updated_state, so technically I wouldn't call it an error code. I
> think we must not bail out after the first half, and we also have to
> keep the OR-ing semantics of the two parts, right?
> 
Doh, it's a bool, forget what I said.
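For reference, the split and its OR-ing semantics can be sketched as a
standalone program. All names here (fake_regs, mock_handler,
split_access) are hypothetical and not part of the kernel code; the
sketch also assumes a little-endian host for the memcpy-based word
split, whereas in the patch vgic_reg_access has already stored
mmio->data in LE fashion:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>
#include <string.h>

/* Mock 32-bit register file standing in for distributor state. */
static uint32_t fake_regs[2];

/* Hypothetical 32-bit handler in the spirit of range->handle_mmio:
 * returns true when a write actually changed emulated state. */
static bool mock_handler(uint64_t offset, uint32_t *data, bool is_write)
{
    uint32_t *reg = &fake_regs[offset / 4];

    if (is_write) {
        bool changed = (*reg != *data);

        *reg = *data;
        return changed;
    }
    *data = *reg;
    return false;
}

/* Split an 8-byte access into two 32-bit handler calls, low word at
 * the lower offset (the GIC is always little endian), OR-ing the
 * updated_state results so a change in either half is reported.
 * Like the patch, the upper half is handled first. */
static bool split_access(uint64_t offset, uint64_t *val, bool is_write)
{
    uint32_t data32[2];
    bool ret;

    memcpy(data32, val, sizeof(data32));   /* LE host assumed */

    ret = mock_handler(offset + 4, &data32[1], is_write);
    ret |= mock_handler(offset, &data32[0], is_write);

    if (!is_write)
        memcpy(val, data32, sizeof(data32));
    return ret;
}
```

Writing the same 64-bit value twice shows why the OR (rather than an
early return) matters: the first write reports true because both halves
changed state, the second reports false, i.e. the bool means "did
guest-visible state change", not success or failure.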

-Christoffer



More information about the linux-arm-kernel mailing list