[PATCH v14 2/4] CMDQ: Mediatek CMDQ driver
Horng-Shyang Liao
hs.liao at mediatek.com
Wed Oct 5 05:31:45 PDT 2016
On Wed, 2016-10-05 at 09:07 +0530, Jassi Brar wrote:
> On 5 October 2016 at 08:24, Horng-Shyang Liao <hs.liao at mediatek.com> wrote:
> > On Fri, 2016-09-30 at 17:47 +0800, Horng-Shyang Liao wrote:
> >> On Fri, 2016-09-30 at 17:11 +0800, CK Hu wrote:
>
> >
> > After I trace mailbox driver, I realize that CMDQ driver cannot use
> > tx_done.
> >
> > CMDQ clients will flush many tasks into the CMDQ driver, and then
> > the CMDQ driver will apply these tasks to GCE HW "immediately".
> > These tasks, which are queued in GCE HW, may not execute immediately
> > since they may need to wait for event(s), e.g. vsync.
> >
> > However, the mailbox framework queues sent messages in a software
> > buffer and only sends the next message after the previous one is
> > done. This cannot fulfill CMDQ's requirement.
> >
> I understand
> a) GCE HW can internally queue many tasks in some 'FIFO'
> b) Execution of some task may have to wait until some external event
> occurs (like vsync)
> c) GCE does not generate irq/flag for each task executed (?)
>
> If so, maybe your tx_done should return 'true' as long as the GCE HW
> can accept tasks in its 'FIFO'. For the mailbox api, any task that is
> queued on GCE is assumed to be transmitted.
>
> > Let me quote some code from the mailbox framework. Please note the
> > "active_req" part.
> >
> > static void msg_submit(struct mbox_chan *chan)
> > {
> > 	...
> > 	if (!chan->msg_count || chan->active_req)
> > 		goto exit;
> > 	...
> > 	err = chan->mbox->ops->send_data(chan, data);
> > 	if (!err) {
> > 		chan->active_req = data;
> > 		chan->msg_count--;
> > 	}
> > 	...
> > }
> >
> > static void tx_tick(struct mbox_chan *chan, int r)
> > {
> > 	...
> > 	spin_lock_irqsave(&chan->lock, flags);
> > 	mssg = chan->active_req;
> > 	chan->active_req = NULL;
> > 	spin_unlock_irqrestore(&chan->lock, flags);
> > 	...
> > }
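
The serialization these two functions enforce (at most one in-flight message per channel, with the next one sent only from tx_tick) can be sketched in plain C. The struct and function names below mirror the mailbox core but are a simplified userspace model, not the kernel code itself:

```c
#include <assert.h>
#include <stddef.h>

/* Simplified model of the mailbox core's per-channel state. */
struct chan {
	void *active_req;   /* message currently "in flight" */
	void *buf[8];       /* software queue of pending messages */
	int msg_count;      /* messages waiting in buf */
	int sent;           /* how many messages reached the HW */
};

/* Mirrors msg_submit(): refuses to send while active_req is set. */
static void msg_submit(struct chan *c)
{
	if (!c->msg_count || c->active_req)
		return;                      /* previous message not done yet */
	c->active_req = c->buf[--c->msg_count];
	c->sent++;                           /* "send_data" succeeded */
}

/* Mirrors tx_tick(): clears active_req, then tries the next message. */
static void tx_tick(struct chan *c)
{
	c->active_req = NULL;
	msg_submit(c);
}
```

In this model a second msg_submit() is a no-op until tx_tick() runs, which is exactly the behavior HS says blocks CMDQ from keeping the GCE HW queue full.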
> >
> > The current workable CMDQ driver uses mbox_client_txdone() to avoid
> > this issue, and then uses its own callback functions to handle done
> > tasks.
> >
> > int cmdq_task_flush_async(struct cmdq_client *client,
> > 			  struct cmdq_task *task,
> > 			  cmdq_async_flush_cb cb, void *data)
> > {
> > 	...
> > 	mbox_send_message(client->chan, task);
> > 	/* We can send next task immediately, so just call txdone. */
> > 	mbox_client_txdone(client->chan, 0);
> > 	...
> > }
> >
> > Another solution is to use rx_callback; i.e., the CMDQ mailbox
> > controller calls mbox_chan_received_data() when a CMDQ task is done.
> > But this may violate the design of mailbox. What do you think?
> >
> If my point (c) above does not hold, maybe look at implementing the
> tx_done() callback and submitting the next task from the callback of
> the last completed one.
Hi Jassi,
For point (c), a GCE irq means either 1~n tasks are done, or 0~n tasks
are done plus 1 task error.
In the irq handler, we can determine which tasks are done from a status
register and the GCE program counter (pc).
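
The "which tasks are done" check HS describes could look roughly like the sketch below: walk the queued tasks in submission order and retire every task whose command buffer ends at or before the current GCE pc. All names here (gce_task, gce_irq_scan, the pc convention) are illustrative assumptions, not the actual driver API:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical representation of one task queued on GCE HW. */
struct gce_task {
	uint32_t buf_start;   /* first command address of this task */
	uint32_t buf_end;     /* one past the last command address */
	bool done;
};

/*
 * On irq, compare the GCE program counter against each queued task:
 * every task whose buffer ends at or before pc has fully executed.
 * Returns the number of tasks retired by this irq.
 */
static int gce_irq_scan(struct gce_task *tasks, int n, uint32_t pc)
{
	int retired = 0;

	for (int i = 0; i < n; i++) {
		if (!tasks[i].done && tasks[i].buf_end <= pc) {
			tasks[i].done = true;
			retired++;
		}
	}
	return retired;
}
```

This also matches the "1~n tasks done per irq" point: a single interrupt can retire any number of tasks, depending on how far the pc advanced.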
As I mentioned before, we cannot wait to submit the next task until the
previous task has called tx_done. We need to submit multiple tasks to
GCE HW immediately and queue them in GCE HW. Let me explain this
requirement with a mouse cursor example. The user may move the mouse
quickly between two vsyncs, so DRM may update display registers
frequently. For CMDQ, that means many tasks are flushed into the CMDQ
driver, and the CMDQ driver needs to process all of them in the next
vblank. Therefore, we cannot block any CMDQ task in a SW buffer.
CMDQ needs to call a callback function to notify clients which tasks
are done. In my previous e-mail, I mentioned that rx_callback may be an
alternative solution. However, it seems to violate the design of
mailbox. Therefore, I think mailbox may not have a good solution for
the CMDQ callback currently. IMHO, the better way is to use CMDQ's own
callback for now.
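
The "CMDQ self callback" HS argues for amounts to the driver keeping its own list of (task, callback, data) records and invoking each callback once the irq retires the task. A minimal userspace model (all names here are assumptions, not the driver's real structures) might be:

```c
#include <assert.h>
#include <stddef.h>

typedef void (*cmdq_async_flush_cb)(void *data);

/* Hypothetical per-task record kept by the CMDQ driver itself. */
struct cmdq_cb_entry {
	cmdq_async_flush_cb cb;   /* client callback */
	void *data;               /* client context */
	int pending;              /* 1 until the irq retires the task */
};

/* Invoke and clear the callbacks of all retired tasks. */
static int cmdq_run_callbacks(struct cmdq_cb_entry *list, int n)
{
	int called = 0;

	for (int i = 0; i < n; i++) {
		if (!list[i].pending && list[i].cb) {
			list[i].cb(list[i].data);
			list[i].cb = NULL;    /* call each callback once */
			called++;
		}
	}
	return called;
}

/* Example client callback: counts completions via its data pointer. */
static void count_done(void *data)
{
	(*(int *)data)++;
}
```

Because the driver owns this list, nothing about it constrains how many tasks sit queued on the GCE HW at once, which is the property the mailbox tx_done path cannot provide.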
Thanks,
HS