[PATCH v6 02/21] dt-bindings: Add binding for gunyah hypervisor

Jassi Brar jassisinghbrar at gmail.com
Tue Nov 1 19:01:04 PDT 2022


On Tue, Nov 1, 2022 at 7:12 PM Elliot Berman <quic_eberman at quicinc.com> wrote:
>
>
>
> On 11/1/2022 2:58 PM, Jassi Brar wrote:
> > On Tue, Nov 1, 2022 at 3:35 PM Elliot Berman <quic_eberman at quicinc.com> wrote:
> >>
> >>
> >>
> >> On 11/1/2022 9:23 AM, Jassi Brar wrote:
> >>> On Mon, Oct 31, 2022 at 10:20 PM Elliot Berman <quic_eberman at quicinc.com> wrote:
> >>>>
> >>>> Hi Jassi,
> >>>>
> >>>> On 10/27/2022 7:33 PM, Jassi Brar wrote:
> >>>>    > On Wed, Oct 26, 2022 at 1:59 PM Elliot Berman <quic_eberman at quicinc.com> wrote:
> >>>>    > .....
> >>>>    >> +
> >>>>    >> +        gunyah-resource-mgr@0 {
> >>>>    >> +            compatible = "gunyah-resource-manager-1-0", "gunyah-resource-manager";
> >>>>    >> +            interrupts = <GIC_SPI 3 IRQ_TYPE_EDGE_RISING>, /* TX full IRQ */
> >>>>    >> +                         <GIC_SPI 4 IRQ_TYPE_EDGE_RISING>; /* RX empty IRQ */
> >>>>    >> +            reg = <0x00000000 0x00000000>, <0x00000000 0x00000001>;
> >>>>    >> +                  /* TX, RX cap ids */
> >>>>    >> +        };
> >>>>    >>
> >>>>    > All these resources are used only by the mailbox controller driver.
> >>>>    > So, this should be the mailbox controller node, rather than the
> >>>>    > mailbox user.
> >>>>    > One option is to load gunyah-resource-manager as a module that relies
> >>>>    > on the gunyah-mailbox provider. That would also avoid the "Allow
> >>>>    > direct registration to a channel" hack patch.
> >>>>
> >>>> A message queue to another guest VM wouldn't be known at boot time
> >>>> and thus couldn't be described in the devicetree.
> >>>>
> >>> I think you need to implement of_xlate() ... or please tell me what
> >>> exactly you need to specify in the DT.
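
For illustration, an of_xlate() along those lines might look like the
sketch below; the gh_* names are made up here, since the thread doesn't
define the driver's per-channel state:

    #include <linux/err.h>
    #include <linux/mailbox_controller.h>
    #include <linux/of.h>

    /* Hypothetical per-channel state kept in chan->con_priv. */
    struct gh_msgq {
            u32 cap_id;
    };

    static struct mbox_chan *gh_mbox_of_xlate(struct mbox_controller *mbox,
                                               const struct of_phandle_args *sp)
    {
            u32 cap_id;
            int i;

            if (sp->args_count != 1)
                    return ERR_PTR(-EINVAL);
            cap_id = sp->args[0];

            /* Hand back the channel whose capability id matches. */
            for (i = 0; i < mbox->num_chans; i++) {
                    struct gh_msgq *msgq = mbox->chans[i].con_priv;

                    if (msgq && msgq->cap_id == cap_id)
                            return &mbox->chans[i];
            }

            return ERR_PTR(-ENOENT);
    }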
> >>
> >> Dynamically created virtual machines can't be known in the DT, so
> >> there is nothing to specify. There couldn't be a devicetree node for
> >> the message queue client, because that client only exists once the
> >> VM is created by userspace.
> >>
> > The underlying "physical channel" is the synchronous SMC instruction,
> > which remains 1 irrespective of the number of mailbox instances
> > created.
>
> I disagree that the physical channel is the SMC instruction. Regardless,
> from that perspective there are num_online_cpus() "physical channels".
>
> > So basically you are sharing one resource among users. Why doesn't the
> > RM request the "smc instruction" channel once and share it among
> > users?
>
> I suppose in this scenario a single mailbox channel would represent all
> message queues? This would cause Linux to serialize *all* message queue
> hypercalls. Sorry, I can only think of negative implications.
>
> Error handling would need to move into the clients: if a TX message
> queue becomes full or an RX message queue becomes empty, we'd have to
> return an error to the client right away. The clients would then need
> to register for the RTS/RTR interrupts to know when to send/receive
> messages and implement their own retry handling. If the mailbox
> controller retried on behalf of the clients, as currently proposed,
> we could get into a scenario where a message queue is never ready to
> send/receive and we are stuck forever trying to process that message.
> The effect would be that the mailbox controller becomes a wrapper
> around a set of SMC instructions that aren't related to each other at
> the SMC instruction level.
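
For concreteness, the client-side pattern described above would look
roughly like the sketch below; every gh_* name is a hypothetical
stand-in, since the actual Gunyah hypercall API isn't shown in this
thread:

    #include <linux/errno.h>
    #include <linux/types.h>
    #include <linux/wait.h>

    /* Hypothetical hypercall wrapper; -EAGAIN means "TX queue full". */
    int gh_hypercall_msgq_send(u64 cap_id, void *buf, size_t len);

    struct gh_msgq {
            u64 tx_cap_id;
            wait_queue_head_t tx_wq;
            bool tx_ready;          /* set by the RTS ("TX not full") IRQ */
    };

    static int gh_msgq_send(struct gh_msgq *msgq, void *buf, size_t len)
    {
            int ret;

            for (;;) {
                    ret = gh_hypercall_msgq_send(msgq->tx_cap_id, buf, len);
                    if (ret != -EAGAIN)
                            return ret;     /* sent, or a hard error */

                    /* Queue full: sleep until RTS fires, then retry. */
                    ret = wait_event_interruptible(msgq->tx_wq,
                                                   msgq->tx_ready);
                    if (ret)
                            return ret;
                    msgq->tx_ready = false;
            }
    }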
>
> A single channel would limit performance on SMP systems because only
> one core could send/receive a message at a time. Message queues
> themselves have no such limitation.
>
This is just an illusion. Even if Gunyah can handle multiple calls
from a VM in parallel, with the "bind-client-to-channel" hack you
still can't make sure that different channels run on different CPU
cores. If you are OK with that, you could simply populate a mailbox
controller with N channels and allocate them in whatever order the
clients ask, e.g. along the lines of the sketch below.
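
For example, roughly like this; gh_mbox_ops and the channel count are
placeholders, and the ops implementation and the channel-allocation
scheme aren't shown:

    #include <linux/mailbox_controller.h>
    #include <linux/platform_device.h>

    #define GH_NUM_CHANS    16      /* arbitrary; could be num_online_cpus() */

    static struct mbox_chan gh_chans[GH_NUM_CHANS];
    static const struct mbox_chan_ops gh_mbox_ops;  /* send_data() etc. */

    static int gh_mbox_probe(struct platform_device *pdev)
    {
            struct mbox_controller *mbox;

            mbox = devm_kzalloc(&pdev->dev, sizeof(*mbox), GFP_KERNEL);
            if (!mbox)
                    return -ENOMEM;

            mbox->dev = &pdev->dev;
            mbox->ops = &gh_mbox_ops;
            mbox->chans = gh_chans;
            mbox->num_chans = GH_NUM_CHANS;
            mbox->txdone_irq = true;        /* tx-done from the TX IRQ */

            return devm_mbox_controller_register(&pdev->dev, mbox);
    }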

-j


