[RFC] mailbox: Add Broadcom BCM2835 mailbox driver
Craig McGeachie
slapdau at yahoo.com.au
Sat Aug 17 07:47:42 EDT 2013
On 08/17/2013 06:54 AM, Stephen Warren wrote:
> On 08/16/2013 03:44 AM, Craig McGeachie wrote:
>> On Sat May 11 00:41:55 EDT 2013, Stephen Warren wrote:
>>> The problem here (with Simon's existing patch) is that it was never sent
>>> upstream. Hence, most likely very few people know about it. If you start
>>> pro-actively sending your work upstream, that'd be great. One possible
>>> issue with your original patch is that there's now a mailbox subsystem
>>> upstream which I don't think your patch used (and at a very quick
>>> glance, Lubomir's patch uses), and any new driver upstream would have to
>>> use that.
Ok. Three very different approaches to this driver. For reference,
I've been looking at:
- Downstream driver (Gray Girling)
https://github.com/raspberrypi/linux/blob/rpi-3.6.y/arch/arm/mach-bcm2708/vcio.c
- Lubomir Rintel's patch
https://github.com/hackerspace/rpi-linux/blob/lr-raspberry-pi/drivers/mailbox/bcm2835-ipc.c
- Simon Arlott's patch
https://github.com/lp0/linux/blob/917bdfc3045151cda896bf0cbf1542340892f58d/drivers/misc/bcm-mbox.c
I think I can see what Simon meant when he stated that his driver
supported concurrent messages per thread when the others didn't, though
I wouldn't really regard it as concurrent. In short, he sets up queues
for reads and writes: if a write is requested while the mailbox is
full, the message is queued, and if an IRQ comes in with a reply when
there is no-one waiting to read, the reply is queued. The IRQ handler
checks for pending writes and drains that queue, and reads first check
whether the response has already been queued up.
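Very roughly, and with names of my own invention rather than anything
lifted from Simon's patch, my reading of that scheme looks something
like this (the helpers and register names are placeholders, and
locking is omitted):

#include <linux/interrupt.h>
#include <linux/io.h>
#include <linux/list.h>

/* Sketch only: my reading of the queueing scheme, not Simon's code.
 * enqueue(), dequeue(), mailbox_full(), reader_waiting(),
 * hand_to_reader(), wait_for_handoff() and flush_pending_writes() are
 * hypothetical helpers; mbox_base, MBOX_WRITE and MBOX_READ are
 * placeholders.
 */
struct mbox_queues {
	struct list_head pending_writes;  /* writes deferred, FIFO full */
	struct list_head queued_replies;  /* replies with no reader yet */
};

static void mbox_write(struct mbox_queues *q, u32 msg)
{
	if (mailbox_full())
		enqueue(&q->pending_writes, msg);  /* defer until the IRQ */
	else
		writel(msg, mbox_base + MBOX_WRITE);
}

static irqreturn_t mbox_irq(int irq, void *dev_id)
{
	struct mbox_queues *q = dev_id;
	u32 msg = readl(mbox_base + MBOX_READ);

	if (reader_waiting())
		hand_to_reader(msg);
	else
		enqueue(&q->queued_replies, msg);  /* park the reply */

	flush_pending_writes(q);  /* the IRQ also drains deferred writes */
	return IRQ_HANDLED;
}

static int mbox_read(struct mbox_queues *q, u32 *msg)
{
	if (dequeue(&q->queued_replies, msg))  /* reply already queued */
		return 0;
	return wait_for_handoff(msg);          /* else block for the IRQ */
}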
It's interesting, but I think it suffers from the same problem as the
downstream driver. The mailbox writes and reads are separate calls.
While the downstream driver uses semaphores to keep read calls in step
with data returned in the IRQ handler, it does nothing to ensure that
the reads are done in the same order as the writes, or that for a given
write the corresponding read will actually happen. There was an
interesting commit downstream recently where down_interruptible was
replaced with plain down. I'm guessing that was prompted by callers
being interrupted and the messages getting out of step because of
subsequently misaligned write/read calls.
I think this mostly works by good luck. The calls to bcm_mailbox_write
and bcm_mailbox_read tend to be made one after the other, and there are
fairly few callers. The framebuffer and vchiq drivers each make only
one pair of calls at start-up to set up shared memory blocks, and I
don't see the other calls being made very often either. But for all
that, I believe there is a fairly nasty race condition in there, which
Simon's version also has.
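To make that concrete, the interleaving I'm worried about looks
something like this (illustration only, with the downstream entry
points; exact signatures from memory):

/* Two callers on the same channel, each doing the usual
 * write-then-read pair; nothing ties a read to the write that
 * caused it:
 *
 *   thread A: bcm_mailbox_write(chan, msg_a);
 *   thread B: bcm_mailbox_write(chan, msg_b);
 *   thread B: bcm_mailbox_read(chan, &val);  // may get reply to msg_a
 *   thread A: bcm_mailbox_read(chan, &val);  // may get reply to msg_b
 *
 * The semaphore only guarantees that each read is paired with *a*
 * reply from the IRQ handler, not with the reply to that caller's own
 * write.  And if a caller bails out between its write and its read
 * (the old down_interruptible case), every later pairing is shifted
 * by one.
 */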
As a bit of a side note, I don't think that the channel identifier in
the VideoCore mailbox implementation actually changes any logic. See
http://www.raspberrypi.org/phpBB3/viewtopic.php?f=72&t=36619. I'm
guessing a lot, but I think it's just used as a correlation identifier
so that vchiq, framebuffer, and property messages don't interfere with
each other.
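For what it's worth, my reading of the hardware is that the channel is
nothing more than the low four bits of the 32-bit mailbox word, with
the upper 28 bits carrying the data (usually a 16-byte-aligned
address), so demultiplexing is all it really buys you. Roughly (helper
names are mine):

#include <linux/types.h>

#define MBOX_CHAN_MASK	0xfU

static inline u32 mbox_compose(u8 chan, u32 data28)
{
	return (data28 & ~MBOX_CHAN_MASK) | (chan & MBOX_CHAN_MASK);
}

static inline u8 mbox_chan(u32 msg)
{
	return msg & MBOX_CHAN_MASK;
}

static inline u32 mbox_data(u32 msg)
{
	return msg & ~MBOX_CHAN_MASK;
}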
I think Lubomir's implementation, with its atomic write/read interface,
might be the safest, but it gains safety at the cost of performance. I
say performance a little tongue in cheek, because I don't see the need
for that many mailbox calls. The only place where I thought it might be
an issue, vchiq, makes the call once to set up an alternative call
mechanism. Unless somebody is checking the chip temperature every 5us,
I can't see it being a problem.
For all that, I'd have been tempted to implement a single write
interface that also passes in a reference to a callback handler to
receive the reply, and perhaps to add a convenience interface that puts
a synchronous write/read call on top of that.
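Something like the following is what I have in mind. The names are
invented and it's only a sketch, but the synchronous convenience call
falls out of the callback interface almost for free:

#include <linux/completion.h>
#include <linux/types.h>

/* Hypothetical interface, names made up: one write entry point that
 * takes a callback to receive the reply... */
typedef void (*bcm2835_mbox_rx_fn)(void *context, u32 reply);

int bcm2835_mbox_write(u8 chan, u32 data28,
		       bcm2835_mbox_rx_fn rx, void *context);

/* ...and a synchronous write/read layered on top with a completion. */
struct mbox_sync_wait {
	struct completion done;
	u32 reply;
};

static void mbox_sync_rx(void *context, u32 reply)
{
	struct mbox_sync_wait *wait = context;

	wait->reply = reply;
	complete(&wait->done);
}

int bcm2835_mbox_call(u8 chan, u32 data28, u32 *reply)
{
	struct mbox_sync_wait wait;
	int ret;

	init_completion(&wait.done);
	ret = bcm2835_mbox_write(chan, data28, mbox_sync_rx, &wait);
	if (ret)
		return ret;
	wait_for_completion(&wait.done);
	*reply = wait.reply;
	return 0;
}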
> The best approach is probably to abstract the interface so that multiple
> different implementations can share the same API. The API would still
> need to be defined in include/linux/mailbox.h so that everything would
> have access to it.
>
> include/linux/remoteproc.h seems like it might be less HW-specific,
> although I haven't taken any time to understand it at all.
I've been wrestling with this one all day. I had a good hard look at
the remoteproc abstraction, and I don't think it's all that abstract.
At best it offers a templated algorithm implementation that is mostly
expressed in terms of callbacks. It doesn't provide any real benefit
with its driver registration. The only way to get an rproc is to keep
the one returned by rproc_alloc and then use it for all your subsequent
remoteproc calls. The drivers that use remoteproc provide all the
callbacks in the same source files that call the entry points, so it's
fairly obvious that they're just calling some functions that will
mostly just call back. The registration and driver aspect of
remoteproc_core.c looks like over-engineered overhead.
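To illustrate what I mean, a remoteproc user ends up looking roughly
like the sketch below (simplified and from memory, not any particular
driver): the same file supplies the ops and makes the rproc_* calls, so
the "registration" is really just handing the core a bundle of
callbacks.

#include <linux/platform_device.h>
#include <linux/remoteproc.h>

static int my_rproc_start(struct rproc *rproc)
{
	/* take the remote core out of reset, point it at the firmware */
	return 0;
}

static int my_rproc_stop(struct rproc *rproc)
{
	return 0;
}

static void my_rproc_kick(struct rproc *rproc, int vqid)
{
	/* this is where a mailbox write would typically go */
}

static struct rproc_ops my_rproc_ops = {
	.start	= my_rproc_start,
	.stop	= my_rproc_stop,
	.kick	= my_rproc_kick,
};

static int my_rproc_probe(struct platform_device *pdev)
{
	struct rproc *rproc;
	int ret;

	/* the only way to get an rproc is to keep what rproc_alloc()
	 * returns and use it for every later remoteproc call */
	rproc = rproc_alloc(&pdev->dev, "my-rproc", &my_rproc_ops,
			    "my-firmware.bin", 0);
	if (!rproc)
		return -ENOMEM;

	ret = rproc_add(rproc);
	if (ret) {
		rproc_put(rproc);
		return ret;
	}

	platform_set_drvdata(pdev, rproc);
	return 0;
}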
Given the very simple processing model of mailboxes, I think replication
of this design would be a mistake. I don't see any code sharing gains.
Mailboxes really only look appropriate when used directly by higher
level system abstractions.
Interestingly, the PL320 is the only mailbox implementation under
drivers/mailbox. For example, the OMAP one used by omap_remoteproc.c
lives in the OMAP architecture directory.
All in all, I think the PL320 mailbox driver is in the wrong place and
is going to cause a lot of confusion.
Cheers,
Craig.