[RFC PATCH 00/34] Running Qualcomm's Gunyah Guests via KVM in EL1

David Woodhouse dwmw2 at infradead.org
Thu Jun 19 08:50:29 PDT 2025


On Thu, 2025-04-24 at 17:57 +0100, Marc Zyngier wrote:
> On Thu, 24 Apr 2025 16:34:50 +0100,
> Oliver Upton <oliver.upton at linux.dev> wrote:
> > 
> > On Thu, Apr 24, 2025 at 03:13:07PM +0100, Karim Manaouil wrote:
> > > This series introduces the capability of running Gunyah guests via KVM on
> > > Qualcomm SoCs shipped with the Gunyah hypervisor [1] (e.g. the RB3 Gen2).
> > > 
> > > The goal of this work is to port the existing Gunyah hypervisor support from a
> > > standalone driver interface [2] to KVM, with the aim of leveraging as much of the
> > > existing KVM infrastructure as possible to reduce duplication of effort around
> > > memory management (e.g. guest_memfd), irqfd, and other core components.
> > > 
> > > In short, Gunyah is a Type-1 hypervisor: it runs independently of any high-level
> > > OS kernel such as Linux, and at a higher CPU privilege level than its VMs.
> > > Gunyah is shipped as firmware, and guests typically talk to Gunyah via hypercalls.
> > > KVM is designed to run as a Type-2 hypervisor. This port allows KVM to run in EL1
> > > and serve as the interface for VM lifecycle management, while offloading virtualization
> > > to Gunyah.
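
For readers who haven't seen this arrangement before: from the EL1 kernel's
point of view, a call into a Type-1 hypervisor is just an HVC. The in-kernel
sketch below is purely illustrative; it uses the generic SMCCC helpers, and
EXAMPLE_HYP_CALL_ID is a made-up vendor-hyp function ID, not the real Gunyah
hypercall ABI.

/*
 * Illustrative only: EXAMPLE_HYP_CALL_ID and the argument layout are
 * invented for this sketch and are not the actual Gunyah ABI.
 */
#include <linux/arm-smccc.h>
#include <linux/errno.h>
#include <linux/types.h>

#define EXAMPLE_HYP_CALL_ID                                     \
        ARM_SMCCC_CALL_VAL(ARM_SMCCC_FAST_CALL,                 \
                           ARM_SMCCC_SMC_64,                    \
                           ARM_SMCCC_OWNER_VENDOR_HYP, 0x0000)

static int example_hyp_call(u64 arg)
{
        struct arm_smccc_res res;

        /* HVC traps to the hypervisor at EL2; Linux itself stays at EL1. */
        arm_smccc_1_1_hvc(EXAMPLE_HYP_CALL_ID, arg, &res);

        return res.a0 ? -EIO : 0;
}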
> > 
> > If you're keen on running your own hypervisor then I'm sorry, you get to
> > deal with it soup to nuts. Other hypervisors (e.g. mshv) have their own
> > kernel drivers for managing the host / UAPI parts of driving VMs.
> > 
> > The KVM arch interface is *internal* to KVM, not something to be
> > (ab)used for cramming in a non-KVM hypervisor. KVM and other hypervisors
> > can still share other bits of truly common infrastructure, like
> > guest_memfd.
> > 
> > I understand the value in what you're trying to do, but if you want it
> > to smell like KVM you may as well just let the user run it at EL2.
> 
> +1. KVM is not a generic interface for random third party hypervisors.

I don't think that should be true in the general case. At least, it
depends on whether you mean the literal implementation in
arch/arm64/kvm/ vs. the userspace API and set of ioctls on /dev/kvm.
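
To be concrete about the latter: the surface userspace sees is a small set of
generic ioctls on /dev/kvm, and it looks the same regardless of what is
backing it. A minimal sketch, with error handling omitted and
KVM_CAP_GUEST_MEMFD picked purely as an example capability to probe (it needs
reasonably recent kernel headers):

#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

int main(void)
{
        int kvm = open("/dev/kvm", O_RDWR | O_CLOEXEC);
        int version = ioctl(kvm, KVM_GET_API_VERSION, 0);  /* always 12 */
        int vm = ioctl(kvm, KVM_CREATE_VM, 0);             /* default machine type */
        int vcpu = ioctl(vm, KVM_CREATE_VCPU, 0);

        /* Userspace discovers what the host can do by capability,
         * not by asking which module implements it. */
        int gmem = ioctl(kvm, KVM_CHECK_EXTENSION, KVM_CAP_GUEST_MEMFD);

        printf("KVM API %d, vcpu fd %d, guest_memfd %s\n",
               version, vcpu, gmem > 0 ? "yes" : "no");
        return 0;
}

Nothing in that flow names kvm_intel, kvm_amd or arch/arm64/kvm/; the backend
is an implementation detail.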

The kernel exists to provide a coherent userspace API for all kinds of
hardware. That's what it's *for*. It gives users a consistent interface
to network cards, serial ports and so on, and that includes
firmware/platform features too.

There's no reason that shouldn't be the same for virtualisation. If the
kernel cannot provide an API which supports *all* kinds of
virtualisation, then it seems like we've done something wrong.

On x86 we have /dev/kvm backed by different vendor-specific support for
Intel vs. AMD. And in recent years we've retrofitted confidential
compute to it too, with SEV-SNP, TDX, etc.
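
Take guest_memfd as an example: it's the "truly common infrastructure"
mentioned above, and it's also how SNP and TDX private memory ends up being
backed. From userspace it's just one more generic ioctl on the VM fd,
identical whichever vendor module sits underneath. A rough sketch, with the
64 MiB size chosen arbitrarily and headers new enough to carry
KVM_CREATE_GUEST_MEMFD assumed:

#include <sys/ioctl.h>
#include <linux/kvm.h>

static int create_guest_memfd(int vm_fd)
{
        struct kvm_create_guest_memfd gmem = {
                .size  = 64UL * 1024 * 1024,
                .flags = 0,
        };

        /* Returns a new fd backing guest (possibly private) memory;
         * the caller never needs to know which vendor backend, or
         * which confidential-compute scheme, is behind the VM. */
        return ioctl(vm_fd, KVM_CREATE_GUEST_MEMFD, &gmem);
}

All of that was absorbed behind the same generic interface.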

We haven't resorted to saying "no, sorry, KVM doesn't support that".

We shouldn't say that for Arm either.
