[PATCH v18 5/5] remoteproc: Add initial zynqmp R5 remoteproc driver

Ben Levinsky BLEVINSK at xilinx.com
Wed Oct 7 10:31:15 EDT 2020



> -----Original Message-----
> From: Michael Auchter <michael.auchter at ni.com>
> Sent: Tuesday, October 6, 2020 3:21 PM
> To: Ben Levinsky <BLEVINSK at xilinx.com>
> Cc: Ed T. Mooring <emooring at xilinx.com>; Stefano Stabellini
> <stefanos at xilinx.com>; Michal Simek <michals at xilinx.com>;
> devicetree at vger.kernel.org; mathieu.poirier at linaro.org; linux-
> remoteproc at vger.kernel.org; linux-kernel at vger.kernel.org;
> robh+dt at kernel.org; linux-arm-kernel at lists.infradead.org
> Subject: Re: RE: RE: [PATCH v18 5/5] remoteproc: Add initial zynqmp R5
> remoteproc driver
> 
> On Tue, Oct 06, 2020 at 09:46:38PM +0000, Ben Levinsky wrote:
> >
> >
> > > -----Original Message-----
> > > From: Michael Auchter <michael.auchter at ni.com>
> > > Sent: Tuesday, October 6, 2020 2:32 PM
> > > To: Ben Levinsky <BLEVINSK at xilinx.com>
> > > Cc: Ed T. Mooring <emooring at xilinx.com>; sunnyliangjy at gmail.com;
> > > punit1.agrawal at toshiba.co.jp; Stefano Stabellini <stefanos at xilinx.com>;
> > > Michal Simek <michals at xilinx.com>; devicetree at vger.kernel.org;
> > > mathieu.poirier at linaro.org; linux-remoteproc at vger.kernel.org; linux-
> > > kernel at vger.kernel.org; robh+dt at kernel.org; linux-arm-
> > > kernel at lists.infradead.org
> > > Subject: Re: RE: [PATCH v18 5/5] remoteproc: Add initial zynqmp R5
> > > remoteproc driver
> > >
> > > On Tue, Oct 06, 2020 at 07:15:49PM +0000, Ben Levinsky wrote:
> > > >
> > > > Hi Michael,
> > > >
> > > > Thanks for the review
> > > >
> > >
> > > < ... snip ... >
> > >
> > > > > > +	z_rproc = rproc->priv;
> > > > > > +	z_rproc->dev.release = zynqmp_r5_release;
> > > > >
> > > > > This is the only field of z_rproc->dev that's actually initialized, and
> > > > > this device is not registered with the core at all, so zynqmp_r5_release
> > > > > will never be called.
> > > > >
> > > > > Since it doesn't look like there's a need to create this additional
> > > > > device, I'd suggest:
> > > > > 	- Dropping the struct device from struct zynqmp_r5_rproc
> > > > > 	- Performing the necessary cleanup in the driver remove
> > > > > 	  callback instead of trying to tie it to device release
> > > >
> > > > For the most part I agree. I believe the device is still needed for
> > > > the mailbox client setup.
> > > >
> > > > As the call to mbox_request_channel_byname() requires its own device
> > > > that has the corresponding child node with the corresponding
> > > > mbox-related properties.
> > > >
> > > > With that in mind, is it still ok to keep the device node?
> > >
> > > Ah, I see. Thanks for the clarification!
> > >
> > > Instead of manually dealing with the device node creation for the
> > > individual processors, perhaps it makes more sense to use
> > > devm_of_platform_populate() to create them. This is also consistent with
> > > the way the TI K3 R5F remoteproc driver does things.
> > >
> > > Cheers,
> > >  Michael
> >
> > I've been working on a way around this today and found one that I
> > think works with your initial suggestion:
> > - in z_rproc, change dev from struct device to struct device *
> > 	^ the usage of the above is shown below; it is there for the
> > 	  mailbox setup.
> > - in driver probe:
> > 	- add a list_head to keep track of each core's z_rproc, for the
> > 	  driver remove clean up
> > 	- in each core's probe (zynqmp_r5_probe) do the following:
> >
> >
> >         rproc_ptr = rproc_alloc(dev, dev_name(dev), &zynqmp_r5_rproc_ops,
> >                                 NULL, sizeof(struct zynqmp_r5_rproc));
> >         if (!rproc_ptr)
> >                 return -ENOMEM;
> >         z_rproc = rproc_ptr->priv;
> >         z_rproc->dt_node = node;
> >         z_rproc->rproc = rproc_ptr;
> >         z_rproc->dev = &rproc_ptr->dev;
> >         z_rproc->dev->of_node = node;
> > where node is the specific R5 core's of_node / device tree node.
> >
> > The above preserves most of the mailbox setup code.
> 
> I see how this works, but it feels a bit weird to me to be overriding
> the remoteproc dev's of_node ptr. Personally I find the
> devm_of_platform_populate() approach a bit less confusing.
> 
> But, it's also not my call to make ;). Perhaps a remoteproc maintainer
> can chime in here.
> 
Fair enough. The way I see it, there is still a need for a struct device * in the zynqmp R5 rproc structure.
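
For reference, here is a minimal sketch of the per-core structure I have in mind for v19 (field names are illustrative, not final):

struct zynqmp_r5_rproc {
	struct device *dev;		/* points at rproc->dev; used as the mbox client dev */
	struct rproc *rproc;		/* this core's rproc instance */
	struct device_node *dt_node;	/* this core's device tree node */
	struct mbox_client client;	/* mailbox client, client.dev = dev */
	struct mbox_chan *tx_chan;	/* IPI tx channel, requested by name */
	struct mbox_chan *rx_chan;	/* IPI rx channel, requested by name */
	struct list_head elem;		/* entry in the driver's list of cores */
};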

If we look at the TI K3 R5 remoteproc patch, https://patchwork.kernel.org/patch/11763795/ ,
there is a call to devm_of_platform_populate in k3_r5_probe, similar to your suggestion. I can look to do the same in the Xilinx remoteproc driver.
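
Roughly, the cluster-level probe would then look something like this (just a sketch to show the intended flow; the function name, error handling, and the per-core setup details are placeholders):

static int zynqmp_r5_remoteproc_probe(struct platform_device *pdev)
{
	struct device *dev = &pdev->dev;
	struct device_node *node;
	int ret;

	/* Create a child platform device for each R5 core node so that each
	 * core has a struct device carrying its mboxes/mbox-names properties. */
	ret = devm_of_platform_populate(dev);
	if (ret)
		return ret;

	for_each_available_child_of_node(dev->of_node, node) {
		struct platform_device *cpdev = of_find_device_by_node(node);

		if (!cpdev) {
			of_node_put(node);
			return -ENODEV;
		}
		/* ... per-core rproc_alloc() and mailbox setup against
		 * &cpdev->dev would go here ... */
		put_device(&cpdev->dev);
	}

	return 0;
}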

Even so, the TI K3 R5 remoteproc structure still uses a struct device * for the same reason, namely setting up the mailbox for each core, as detailed below (the code can be found in the same link I posted):


** Here is where the device * is stored at probe time:
- k3_r5_probe calls k3_r5_cluster_rproc_init

static int k3_r5_cluster_rproc_init(struct platform_device *pdev)
{
	struct k3_r5_rproc *kproc;
	struct device *cdev;
	...<snip>...
	core1 = list_last_entry(&cluster->cores, struct k3_r5_core, elem);
	list_for_each_entry(core, &cluster->cores, elem) {
		cdev = core->dev;
		...<snip>...
		kproc->dev = cdev;
		...
	}
}


And then back in the mailbox initialization:

static int k3_r5_rproc_start(struct rproc *rproc)
{
	...
	struct k3_r5_rproc *kproc = rproc->priv;
	struct device *dev = kproc->dev;
	...
	client->dev = dev; <--- this is needed when requesting the mailbox

This needs to be the device with the corresponding of_node that has the mbox-related properties in it.




This is the example I based the usage of the struct device * field in the SoC-specific remoteproc structure on.
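
For comparison, here is roughly how I expect the zynqmp mailbox setup to use that device pointer (again just a sketch; the "tx"/"rx" channel names follow the mbox-names properties in the R5 core node, and the callback name is a placeholder):

static int zynqmp_r5_setup_mbox(struct zynqmp_r5_rproc *z_rproc)
{
	struct mbox_client *client = &z_rproc->client;

	/* The client device must be the one whose of_node carries the
	 * mboxes/mbox-names properties for this core. */
	client->dev = z_rproc->dev;
	client->rx_callback = zynqmp_r5_mb_rx_cb;	/* placeholder rx callback */
	client->tx_block = false;
	client->knows_txdone = false;

	z_rproc->tx_chan = mbox_request_channel_byname(client, "tx");
	if (IS_ERR(z_rproc->tx_chan))
		return PTR_ERR(z_rproc->tx_chan);

	z_rproc->rx_chan = mbox_request_channel_byname(client, "rx");
	if (IS_ERR(z_rproc->rx_chan)) {
		mbox_free_channel(z_rproc->tx_chan);
		return PTR_ERR(z_rproc->rx_chan);
	}

	return 0;
}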
Given that, can I still proceed and update the patch with the other suggestions?

Thanks
Ben

> >
> >
> > With this, I have already successfully done the following in a v19 patch:
> > - move all the previous driver release code to remove
> > - able to probe, start/stop r5, driver remove repeatedly
> >
> > Also, this mimics the TI R5 driver code: each core's rproc has a list_head,
> > and there is a structure for the cluster which, among other things, maintains
> > a linked list of the cores' specific rproc information.
> >
> > Thanks
> > Ben


