[PATCH v2 00/11] Drivers for gunyah hypervisor

Elliot Berman quic_eberman at quicinc.com
Mon Aug 8 16:38:31 PDT 2022



On 8/2/2022 2:24 AM, Dmitry Baryshkov wrote:
> On 02/08/2022 00:12, Elliot Berman wrote:
>> Gunyah is a Type-1 hypervisor independent of any
>> high-level OS kernel, and runs in a higher CPU privilege level. It does
>> not depend on any lower-privileged OS kernel/code for its core
>> functionality. This increases its security and can support a much smaller
>> trusted computing base than a Type-2 hypervisor.
>>
>> Gunyah is an open source hypervisor. The source repo is available at
>> https://github.com/quic/gunyah-hypervisor.
>>
>> The diagram below shows the architecture.
>>
>> ::
>>
>>          Primary VM           Secondary VMs
> 
> Is there any significant difference between Primary VM and other VMs?
> 

The primary VM is started by RM. Secondary VMs are not otherwise special 
except that they are (usually) launched by the primary VM.

>>       +-----+ +-----+  | +-----+ +-----+ +-----+
>>       |     | |     |  | |     | |     | |     |
>>   EL0 | APP | | APP |  | | APP | | APP | | APP |
>>       |     | |     |  | |     | |     | |     |
>>       +-----+ +-----+  | +-----+ +-----+ +-----+
>>   ---------------------|-------------------------
>>       +--------------+ | +----------------------+
>>       |              | | |                      |
>>   EL1 | Linux Kernel | | |Linux kernel/Other OS |   ...
>>       |              | | |                      |
>>       +--------------+ | +----------------------+
>>   --------hvc/smc------|------hvc/smc------------
>>       +----------------------------------------+
>>       |                                        |
>>   EL2 |            Gunyah Hypervisor           |
>>       |                                        |
>>       +----------------------------------------+
>>
>> Gunyah provides the following features.
>>
>> - Threads and Scheduling: The scheduler schedules virtual CPUs (VCPUs) on
>> physical CPUs and enables time-sharing of the CPUs.
> 
> Is the scheduling provided behind the back of the OS or does it require 
> cooperation?
> 

Gunyah supports both of these scheduling models. For instance, 
scheduling of the resource manager and the primary VM is done by Gunyah 
itself. A VM that the primary VM launches could be scheduled by the 
primary VM itself (by making a hypercall requesting that a vCPU be 
switched in), or by Gunyah itself. We've been calling the former "proxy 
scheduling", and it would be the default behavior for VMs.

>> - Memory Management: Gunyah tracks memory ownership and use of all memory
>> under its control. Memory partitioning between VMs is a fundamental
>> security feature.
>> - Interrupt Virtualization: All interrupts are handled in the hypervisor
>> and routed to the assigned VM.
>> - Inter-VM Communication: There are several different mechanisms provided
>> for communicating between VMs.
>> - Device Virtualization: Para-virtualization of devices is supported 
>> using
>> inter-VM communication. Low level system features and devices such as
>> interrupt controllers are supported with emulation where required.
> 
> After reviewing some of the patches from the series, I'd like to 
> understand what it provides (and can provide) to the VMs.
> 
> I'd like to understand it first, before going deep into the API issues.
> 
> 1) The hypervisor provides message queues, doorbells and vCPUs
> 
> Each of the resources has its own capability ID.
> Why is it called a capability? Is it just a misnomer for the resource 
> ID, or does it have some other meaning behind it? If it is a 
> capability, who is capable of what?
> 

We are following Gunyah's naming convention here. For each virtual 
machine, Gunyah maintains a table of resources which can be accessed by 
that VM. An entry in this table is called a "capability", and VMs can 
only access resources via this capability table. Hence, they get called 
"capability IDs" and not "resource IDs". A VM can have multiple 
capability IDs mapping to the same resource. If two VMs have access to 
the same resource, they will not necessarily use the same capability ID 
to access it, since the tables are independent per VM.
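
As a mental model only (this is not Gunyah's actual internal layout),
you can picture it roughly like this:

#include <linux/types.h>

/* Simplified mental model of per-VM capability tables, for illustration. */

enum gh_resource_type { GH_RES_MSGQ, GH_RES_DBL, GH_RES_VCPU };

struct gh_resource {                    /* one hypervisor object */
        enum gh_resource_type type;
        /* ... hypervisor-internal state ... */
};

struct gh_capability {
        struct gh_resource *res;        /* object this entry grants access to */
        u32 rights;                     /* what the holding VM may do with it */
};

struct gh_vm {
        /*
         * The capability ID is just a handle into this per-VM table, so
         * two VMs sharing a resource generally hold different capability
         * IDs for it, and one VM may hold several IDs for one resource.
         */
        struct gh_capability cap_table[256];
};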

> At this moment you allocate two message queues with fixed IDs for 
> communication with the resource manager. Then you use these message 
> queues to organize a console and a pack of tty devices.
> 
> What other kinds of services does RM provide to the guest OS?
> Do you expect any other drivers to be calling into the RM?
> 

I want to establish the framework to build a VM loader for Gunyah. 
Internally, we are working with a prototype of a "generic VM loader" 
which works with crosvm [1]. In this generic VM loader, memory sharing, 
memory lending, cooperative scheduling, and raising virtual interrupts 
are all supported. Emulating virtio devices in userspace is supported in 
a way which feels very similar to KVM. Our internal VM loader uses an 
IOCTL interface which is similar to KVM's.
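
To give a feel for the shape (the device node, ioctl numbers and struct
below are invented placeholders to illustrate the idea, not an actual
or proposed UAPI; error handling omitted):

#include <fcntl.h>
#include <stddef.h>
#include <sys/ioctl.h>
#include <linux/ioctl.h>

struct gh_userspace_memory_region {     /* placeholder, not a real UAPI struct */
        unsigned long guest_phys_addr;
        unsigned long memory_size;
        unsigned long userspace_addr;
};

#define GH_CREATE_VM            _IO('G', 0x01)
#define GH_VM_SET_USER_MEM      _IOW('G', 0x02, struct gh_userspace_memory_region)
#define GH_VM_START             _IO('G', 0x03)

static int launch_vm(void *guest_mem, size_t size)
{
        int gh = open("/dev/gunyah", O_RDWR);   /* hypothetical device node */
        int vm = ioctl(gh, GH_CREATE_VM, 0);    /* returns a per-VM fd, KVM-style */

        struct gh_userspace_memory_region region = {
                .guest_phys_addr = 0x80000000UL,
                .memory_size     = size,
                .userspace_addr  = (unsigned long)guest_mem,
        };

        ioctl(vm, GH_VM_SET_USER_MEM, &region); /* share/lend memory to the VM */

        return ioctl(vm, GH_VM_START, 0);       /* ask RM to start the VM */
}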

> What is the use case for the doorbells? Who provides doorbells?

The basic use case I'll start with is for userspace to create an IRQFD. 
Userspace can use the IRQFD to raise a doorbell (interrupt) on the other VM.
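
Roughly like this (GH_IRQFD and struct gh_irqfd below are placeholders
for illustration, not a settled UAPI; error handling omitted):

#include <stdint.h>
#include <sys/eventfd.h>
#include <sys/ioctl.h>
#include <linux/ioctl.h>

struct gh_irqfd {                       /* placeholder, not a real UAPI struct */
        int      fd;                    /* eventfd to watch */
        uint32_t label;                 /* which doorbell to ring in the other VM */
};

#define GH_IRQFD        _IOW('G', 0x10, struct gh_irqfd)

static int ring_remote_doorbell(int vm_fd, uint32_t label)
{
        int efd = eventfd(0, 0);
        struct gh_irqfd req = { .fd = efd, .label = label };

        /*
         * Bind the eventfd to the doorbell: whenever the eventfd is
         * signalled, the driver rings the doorbell, i.e. injects the
         * interrupt into the other VM.
         */
        ioctl(vm_fd, GH_IRQFD, &req);

        return eventfd_write(efd, 1);   /* raise the interrupt */
}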

> You mentioned that the RM generates DT overlays. What kind of 
> information goes into the overlay?
> 

The info is described in 
Documentation/devicetree/bindings/firmware/gunyah-hypervisor.yaml.

> My current impression of this series is that you have misused the 
> concept of devices. Rather than exporting MSGQs and BELLs as 
> gunyah_devices and then using them from other drivers, I'd suggest 
> turning them into resources provided by the gunyah driver core. I 
> mentioned using the mailbox API for this. Another subsystem that might 
> ring the bell for you is the remoteproc, especially the rproc_subdev.

I had an offline discussion with Bjorn and he agreed with this approach. 
He suggested avoiding the device bus model, and I will go with a smaller 
approach in v3.

> I might be completely wrong about this, but if my in-mind picture of 
> Gunyah is correct, I'd have implemented the gunyah core subsystem as 
> mailbox provider, RM as a separate platform driver consuming these 
> mailboxes and in turn being a remoteproc driver, and consoles as 
> remoteproc subdevices.

The mailbox framework only fits the message queues, not doorbells or 
vCPUs. The mailbox framework also relies on the mailbox being described 
in the devicetree; RM is an exceptional case in that it is described 
there. Message queues for other VMs would be dynamically created at 
runtime as/when those VMs are created. Thus, the client of such a 
message queue would need to "own" both the controller and client ends 
of the mailbox.
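
For example, a mailbox client today binds to a channel through its
device's DT node, roughly like this; there is no equivalent node for a
message queue that only comes into existence when a VM is launched:

#include <linux/device.h>
#include <linux/err.h>
#include <linux/mailbox_client.h>
#include <linux/slab.h>

/*
 * mbox_request_channel() resolves the channel through the "mboxes"
 * phandle on cl->dev's devicetree node, so this only works for
 * channels that were described in the devicetree at build time.
 */
static struct mbox_chan *gh_request_msgq(struct device *dev,
                                         void (*rx)(struct mbox_client *, void *))
{
        struct mbox_client *cl;

        cl = devm_kzalloc(dev, sizeof(*cl), GFP_KERNEL);
        if (!cl)
                return ERR_PTR(-ENOMEM);

        cl->dev = dev;
        cl->rx_callback = rx;
        cl->tx_block = true;

        /* index 0: first entry of the "mboxes" property on dev's node */
        return mbox_request_channel(cl, 0);
}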

RM is not loaded or managed by Linux, so I don't think the remoteproc 
framework provides us any code re-use except for the subdevices code. 
Remoteproc is a much larger framework than just the subdevices code, so 
I don't think it fits well overall.

> I can assume that at some point you would like to use Gunyah to boot 
> secondary VMs from the primary VM by calling into RM, etc.
> Most probably at this moment a VM would be allocated other bells, 
> message queues, etc. If this assumption is correct, then the VM can 
> become a separate device (remoteproc?) in the Linux device tree.
> 
> I might be wrong in any of the assumptions above. Please feel free to 
> correct me. We can then think about a better API for your usecase.
> 

We don't want to limit VM configuration to the devicetree, as that fixes 
the number and kinds of VMs that can be launched at build time. I'm not 
sure if you might have seen an early presentation of Gunyah at Linaro? 
In the early days of Gunyah, we had static configuration of VMs and many 
properties of the VMs were described in the devicetree. We are moving 
away from static configuration of VMs as much as possible.

[1]: https://chromium.googlesource.com/chromiumos/platform/crosvm



