[RFC PATCH v1 0/4] KVM: arm64: Add VLPI migration support on GICv4.1

Shenming Lu lushenming at huawei.com
Mon Nov 23 01:54:06 EST 2020


In GICv4.1, migration is already supported except for (directly-injected)
VLPIs. The GICv4.1 spec explicitly provides a way to get a VLPI's pending
state (which was crucially missing in GICv4.0), so this patch set makes
VLPI migration work on GICv4.1.

To support VLPI migration, we need to save and restore all of the
required configuration information and the pending state of every VLPI.
The configuration information is in fact already saved (or will be
reallocated on the destination host) as part of the vgic (KVM) migration,
so only the pending state of the VLPIs needs special handling.
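
As an aside (illustration only, not part of the patches themselves): what
the "irqchip/gic-v4.1: Plumb get_irqchip_state VLPI callback" patch
enables is querying the pending bit of a directly-injected VLPI through
the generic irqchip-state API, using the host IRQ that backs the VLPI.
A minimal sketch, with a made-up helper name:

	#include <linux/interrupt.h>
	#include <linux/irq.h>

	/*
	 * Query the pending bit of the VLPI backed by host_irq; with the
	 * plumbing patch applied, this reaches the ITS driver's
	 * get_irqchip_state callback for VLPIs.
	 */
	static int vlpi_peek_pending(unsigned int host_irq, bool *pending)
	{
		return irq_get_irqchip_state(host_irq, IRQCHIP_STATE_PENDING,
					     pending);
	}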

Below is the related workflow in migration.

On the save path:
	In migration completion:
		pause all vCPUs
				|
		call each VM state change handler:
			pause other devices (they only need to stop sending
			interrupts; the VFIO migration protocol, for example,
			already provides this [1])
					|
			flush ITS tables into guest RAM
					|
			flush RDIST pending tables (also flush the VLPI state
			here, see the sketch below)
				|
		...
On the resume path:
	load each device's state:
		restore ITS tables (including pending tables) from guest RAM
				|
		for the other (currently paused) PCI devices that are
		configured to use VLPIs, establish the forwarding paths of
		their VLPIs (and transfer the pending state from KVM's vgic
		to the VPT here, as sketched below)
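
To make the two VLPI-specific steps above a bit more concrete, here is a
rough sketch (for illustration only; it is not the actual diff, the
helper names are made up, and the struct vgic_irq fields used are simply
those found in include/kvm/arm_vgic.h). On the save path, the HW pending
bit of each HW-mapped LPI is folded into what gets written to the guest's
pending table; on the resume path, once a VLPI's forwarding path has been
established, the pending bit recorded in the vgic is pushed back to HW:

	#include <linux/interrupt.h>
	#include <linux/irq.h>
	#include <kvm/arm_vgic.h>

	/* Save path: fold the HW pending bit into what gets written out. */
	static bool vlpi_is_pending(struct vgic_irq *irq)
	{
		bool pending = irq->pending_latch;

		if (irq->hw) {
			bool val = false;

			/* Best effort; error handling omitted in this sketch. */
			irq_get_irqchip_state(irq->host_irq,
					      IRQCHIP_STATE_PENDING, &val);
			pending |= val;
		}

		/* The caller then sets the bit in the guest's pending table. */
		return pending;
	}

	/* Resume path: called once this VLPI's forwarding path is set up. */
	static void vlpi_restore_pending(struct vgic_irq *irq)
	{
		if (!irq->hw || !irq->pending_latch)
			return;

		/* One possible way to make the VLPI pending on the HW side. */
		if (!irq_set_irqchip_state(irq->host_irq,
					   IRQCHIP_STATE_PENDING, true))
			irq->pending_latch = false;
	}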

Still TODO:
 - In some cases, e.g. for VFIO PCI devices, the HW VLPI configuration
   may be reset repeatedly in load_state, resulting in the loss of the
   pending state. An intuitive solution is to retrieve the pending state
   in unset_forwarding (and this would be desirable regardless of
   migration). But at normal run time this function may be called while
   all devices are running, in which case unmapping the VPE is not
   allowed. It seems to be an almost insoluble problem...
   There are other possible solutions:
   1) Avoid unset_forwarding being called from QEMU during resume (simply
   allocate all needed vectors first), which is more reasonable and
   efficient.
   2) Add a new dedicated interface that transfers these pending states
   to HW in a GIC VM state change handler, mirroring save_pending_tables.
   ...

Any comments and suggestions are very welcome.

In addition, we have tested this series with VFIO migration and found no
other problems (two related fixes have been posted [2][3]).

Links:
[1] vfio: UAPI for migration interface for device state:
    https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/
    commit/?id=a8a24f3f6e38103b77cf399c38eb54e1219d00d6
[2] vfio: Move the saving of the config space to the right place in VFIO migration:
    https://patchwork.ozlabs.org/patch/1400246/
[3] vfio: Set the priority of VFIO VM state change handler explicitly:
    https://patchwork.ozlabs.org/patch/1401280/

Shenming Lu (2):
  KVM: arm64: GICv4.1: Try to save hw pending state in
    save_pending_tables
  KVM: arm64: GICv4.1: Give a chance to save VLPI's pending state

Zenghui Yu (2):
  irqchip/gic-v4.1: Plumb get_irqchip_state VLPI callback
  KVM: arm64: GICv4.1: Restore VLPI's pending state to physical side

 .../virt/kvm/devices/arm-vgic-its.rst         |  2 +-
 arch/arm64/kvm/vgic/vgic-its.c                |  6 +-
 arch/arm64/kvm/vgic/vgic-v3.c                 | 62 +++++++++++++++++--
 arch/arm64/kvm/vgic/vgic-v4.c                 | 12 ++++
 drivers/irqchip/irq-gic-v3-its.c              | 38 ++++++++++++
 5 files changed, 110 insertions(+), 10 deletions(-)

-- 
2.23.0



