[PATCHv3 00/14] drivers: mailbox: framework creation

Suman Anna s-anna at ti.com
Mon Apr 29 12:00:07 EDT 2013


Hi Jassi,

On 04/26/2013 11:51 PM, Jassi Brar wrote:
> Hi Suman,
> 
>>> On 26 April 2013 03:59, Suman Anna <s-anna at ti.com> wrote:
>>>> On 04/25/2013 12:20 AM, Jassi Brar wrote:
> 
>>> I never said no-buffering and I never said buffering should be in
>>> controller drivers. In fact I don't remember ever objecting to how
>>> buffering is done in TI's framework.
>>> A controller could service only 1 request at a time, so let's give it
>>> just 1 at a time. Let the API handle the complexity of buffering.
>>>
>>
>> Alright, I guess this got lost in translation :). I interpreted it that
>> way based on the fact that you wanted to get rid of the size field from
>> the mailbox_msg definition. Do you have a different mechanism in mind
>> for the buffering compared to the present one?
>>
> Sure, a very simple but efficient one. I had started on a pseudo-code
> implementation the day I first replied, but now I have real code with
> the PL320 controller and the Highbank client converted to the API. All
> that I describe features in the new design. Polishing and documentation
> will take just a few hours more. You will be able to see end to end
> what I have been talking about.
>>
>> OK, I didn't think of a controller without an RTR interrupt. I would
>> think that such a controller is very rudimentary. I wonder if there are
>> any controllers like this out there.
>>
> One of my controllers is like that :)

I hope it at least has a status register, and is not the "neither
reports nor senses RTR" type.
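
Just so we are picturing the same thing: below is a loose sketch of what
I understand by "let the API handle the buffering" plus the "no RTR irq"
case. All of the names (mbox_link, mbox_msg, ctrl_send_data,
ctrl_tx_ready, the poll timer) are invented for illustration and are not
from either of our trees - the point is only that the software queue
lives in the framework, the controller is handed exactly one message at
a time, and TX-done comes either from the RTR irq or from polling a
status register.

#include <linux/jiffies.h>
#include <linux/list.h>
#include <linux/spinlock.h>
#include <linux/timer.h>

struct mbox_msg {
	struct list_head node;
	void *data;
	size_t len;
};

struct mbox_link {
	spinlock_t lock;
	struct list_head tx_queue;	/* buffering lives in the API */
	struct mbox_msg *active;	/* the single message given to the h/w */
	bool poll_txdone;		/* no RTR irq: poll a status register */
	struct timer_list poll_timer;
};

/* invented controller hooks, stand-ins for whatever the driver exposes */
void ctrl_send_data(struct mbox_link *link, struct mbox_msg *msg);
bool ctrl_tx_ready(struct mbox_link *link);

/* hand the controller exactly one message whenever it is free */
static void mbox_kick(struct mbox_link *link)
{
	unsigned long flags;

	spin_lock_irqsave(&link->lock, flags);
	if (!link->active && !list_empty(&link->tx_queue)) {
		link->active = list_first_entry(&link->tx_queue,
						struct mbox_msg, node);
		list_del(&link->active->node);
		ctrl_send_data(link, link->active);
		if (link->poll_txdone)
			mod_timer(&link->poll_timer, jiffies + 1);
	}
	spin_unlock_irqrestore(&link->lock, flags);
}

/* called from the controller's RTR/TX-done irq, or from the poll timer
 * once ctrl_tx_ready() reports the previous message has been consumed */
static void mbox_tx_done(struct mbox_link *link)
{
	unsigned long flags;

	spin_lock_irqsave(&link->lock, flags);
	link->active = NULL;
	spin_unlock_irqrestore(&link->lock, flags);
	mbox_kick(link);
}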

> 
>>>
>>> BTW, TI's RX mechanism too seems broken for a common API. Receiving
>>> every few bytes via the 'notify' mechanism is very inefficient. Imagine
>>> a platform with no shared memory between co-processors, where the local
>>> wants to diagnose the remote by asking for critical data at least KBs
>>> in size.
>>
>> No shared memory between co-processors and a relatively slow wire
>> transport is a bad architecture design to begin with.
>>
> IMHO it's only about private memory. Even if the controller transfers,
> say, 10 bytes/interrupt, there could always be a requirement to read
> some 1MB region of the remote's private memory. And the same logic
> implies that our TX too should be as fast as possible - the remote
> might need its 1MB firmware over the link. So let us just try to serve
> all designs rather than evaluate them :)
> 
> 
>>> So when the API has nothing to do with the received packet and the
>>> controller has to get rid of it asap so as to be able to receive the
>>> next, IMHO there should be a short-circuit from controller to client
>>> via the API. No delay, no buffering of RX.
>>
>> The current TI design is based on the fact that we can get multiple
>> messages on a single interrupt due to the h/w fifo and the driver takes
>> care of the bottom-half. Leaving it to the client is putting a lot of
>> faith in the client and doesn't scale to multiple clients. The client
>> would have to perform mostly the same as the driver is doing - so this
>> goes back to the base discussion point that we have - which is the lack
>> of support for atomic_context receivers in the current code. I perceive
>> this as an attribute of the controller/mailbox device itself rather than
>> the client.
>>
> Sorry, I don't understand the concern about faith.
> If the controller h/w absolutely cannot identify the remote (sender) of
> a received packet (as seems to be your case), its driver shouldn't even
> try to demux the received messages. The client driver must know which
> remotes could send it a message and how to discern them on the
> platform. Some 'server' RX client is needed here.

No demuxing; the message is delivered to all the different clients, and
it is a protocol agreement between the clients on what the message
means. Think of this scenario as akin to shared interrupts.
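
To be explicit about what I mean by "akin to shared interrupts", here is
a rough sketch using a notifier chain. The chain name and the
client_a_owns()/client_a_handle() helpers are hypothetical - the point
is that every registered client sees every message and decides, per the
protocol agreed between the clients, whether it is the intended
receiver:

#include <linux/notifier.h>

/* sketch only: client_a_owns()/client_a_handle() are hypothetical */
bool client_a_owns(void *msg);
void client_a_handle(void *msg, unsigned long len);

static ATOMIC_NOTIFIER_HEAD(mbox_rx_chain);

/* each client hooks in, much like request_irq() with IRQF_SHARED */
static int client_a_cb(struct notifier_block *nb, unsigned long len,
		       void *msg)
{
	if (!client_a_owns(msg))	/* per the inter-client protocol */
		return NOTIFY_DONE;	/* "not mine", like IRQ_NONE */

	client_a_handle(msg, len);
	return NOTIFY_OK;
}

static struct notifier_block client_a_nb = {
	.notifier_call	= client_a_cb,
};

static int client_a_init(void)
{
	return atomic_notifier_chain_register(&mbox_rx_chain, &client_a_nb);
}

/* the mailbox driver does this for every message pulled off the h/w fifo */
static void mbox_deliver(void *msg, unsigned long len)
{
	atomic_notifier_call_chain(&mbox_rx_chain, len, msg);
}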

> If the controller could indeed map received packet onto remotes, then
> ideally the controller driver should declare one (RX only) channel for
> each such remote and demux packets onto them.
> In either case, 'notify' mechanism is not necessary.

The notify mechanism was the top-half of the interrupt handling. The
faith part comes from the fact that you expect all the clients to do the
equivalent of the bottom-half (which would mean some duplication across
the different clients). The OMAP scenario is such that all the different
link interrupts (both RX & TX) are mapped onto a single physical
interrupt. I think this may not be applicable to your usecase, wherein
you probably expect a response back before proceeding.
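
Roughly, with a single physical interrupt and a h/w fifo behind it, the
split I am describing looks like the sketch below. The
mbox_hw_msgs_pending()/mbox_hw_read_msg() accessors and the fifo depth
are made-up stand-ins for the real OMAP registers, and mbox_deliver()
could be the notifier-chain delivery from the earlier sketch:

#include <linux/interrupt.h>
#include <linux/kfifo.h>
#include <linux/types.h>
#include <linux/workqueue.h>

/* made-up stand-ins for the real register accessors */
bool mbox_hw_msgs_pending(void);
u32 mbox_hw_read_msg(void);
void mbox_deliver(void *msg, unsigned long len);  /* e.g. the chain above */

static DEFINE_KFIFO(rx_fifo, u32, 64);

/* bottom-half: hand every buffered message to the clients */
static void mbox_rx_work(struct work_struct *work)
{
	u32 msg;

	while (kfifo_out(&rx_fifo, &msg, 1))
		mbox_deliver(&msg, sizeof(msg));
}
static DECLARE_WORK(rx_work, mbox_rx_work);

/* top-half: only drain the h/w fifo so the remote can keep sending */
static irqreturn_t mbox_irq(int irq, void *data)
{
	u32 msg;

	while (mbox_hw_msgs_pending()) {
		msg = mbox_hw_read_msg();
		kfifo_in(&rx_fifo, &msg, 1);
	}

	schedule_work(&rx_work);	/* defer the per-client delivery */
	return IRQ_HANDLED;
}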

> 
> 
>> I agree that not all remote-ends will be able to cope with intermixed
>> requests, but isn't this again controller-architecture dependent?
>>
> I think it's more about remote's protocol implementation than
> controller's architecture.

Right, I meant functional integration.

> If tomorrow TI's remote firmware introduces a new set of critical
> commands that may arrive only in a particular sequence, you'll find
> yourself sharing a ride on our dinghy :)
> And Andy already explained where we come from.

This is almost always true when your remote is for offloading some h/w
operations.

regards
Suman



