[PATCH 0/7] DMAENGINE: fixes and PrimeCells
jassi brar
jassisinghbrar at gmail.com
Sat May 8 23:48:16 EDT 2010
On Sun, May 9, 2010 at 7:24 AM, Dan Williams <dan.j.williams at intel.com> wrote:
> On Fri, May 7, 2010 at 7:37 PM, jassi brar <jassisinghbrar at gmail.com> wrote:
>> IMHO, a DMA api should be as quick as possible - callbacks done in IRQ context.
>> But since there may be clients that need to do sleepable stuff in
>
> None of the current clients sleep in the callback, it's done in
> soft-irq context. The only expectation is that hard-irqs are enabled
> during the callback just like timer callbacks. I also would like to
> see numbers to quantify the claims of slowness.
The clients evolve around the API, so they don't do what the API doesn't
allow. Any API should try to impose as few constraints as possible - you
never know what kind of clients are going to arise.
Let's say a protocol requires a 'quick' ACK (within a few usecs) on the
control bus after transferring a large packet on the data bus. All the
client needs is to be able to toggle some bit of the device controller
after the DMA is done, which can very well be done in IRQ context but may
be too late if the callback comes from a tasklet scheduled by the DMAC ISR.
The point being, a DMA API should be able to do callbacks from IRQ context
too - assuming, that is, the clients know what they are doing.
Also, I think it is possible to have an API that allows request submission
from within callbacks, which would be a very useful feature.
Of course, that again assumes the clients know what they can and can't do
(just like with the current DMA API or any other API).
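To illustrate the two-callback idea (just a rough sketch, not the existing
dmaengine API - the struct and function names here are made up): a
descriptor could carry both a 'quick' callback run straight from the DMAC
ISR and a 'lazy' one deferred to the driver's tasklet, and a client fills
in either or both.

#include <linux/dmaengine.h>
#include <linux/interrupt.h>

struct dma_tx_callbacks {
	dma_async_tx_callback	quick;	/* hard-irq context, must not sleep */
	dma_async_tx_callback	lazy;	/* tasklet context, as today */
	void			*param;
};

/* what a DMAC driver would do on transfer completion (sketch) */
static inline void dmac_xfer_done(struct dma_tx_callbacks *cb,
				  struct tasklet_struct *t)
{
	if (cb->quick)
		cb->quick(cb->param);	/* e.g. toggle the ACK bit right away */
	if (cb->lazy)
		tasklet_schedule(t);	/* 'lazy' runs from the tasklet later */
}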
>> callbacks, the API may do two callbacks - 'quick' in irq context and
>> 'lazy' from tasklets scheduled from the IRQ. Most clients will provide
>> either, while some may provide both callback functions.
>>
>> b) There seems to be no clear way of reporting failed transfers. The
>> device_tx_status call can return FAIL/SUCCESS, but the call is open
>> ended and can be performed without any time bound after tx_submit. It is
>> not very optimal for DMAC drivers to save descriptors of all failed
>> transactions until the channel is released.
>> IMHO, providing status checking by two mechanisms - cookies and dma-done
>> callbacks - is more a complication than a feature. Perhaps the dma engine
>> could provide a default callback, should the client not do so, and track
>> done/pending xfers for such requests?
>
> I agree the error handling was designed around mem-to-mem assumptions
> where failures are due to double-bit ECC errors and other rare events.
Well, I have never seen a DMA failure either, but a good API shouldn't
count upon h/w perfection.
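One way to avoid keeping failed descriptors around (again just a sketch of
what I mean, not the current API - the names are invented): hand an
error/success code to the dma-done callback itself, so the client learns
about a failure at the same point it learns about success.

#include <linux/printk.h>

enum dma_xfer_status {
	DMA_XFER_OK,
	DMA_XFER_ERROR,		/* bus error, abort, whatever the h/w reports */
};

typedef void (*dma_done_fn)(void *param, enum dma_xfer_status status);

/* client side */
static void my_dma_done(void *param, enum dma_xfer_status status)
{
	if (status != DMA_XFER_OK)
		pr_err("dma transfer failed, recovering\n");
	/* kick the next request, signal completion, etc. */
}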
>> c) Conceptually, the channels are tightly coupled with the DMACs; there
>> seems to be no way to schedule a channel among more than one DMAC at
>> runtime, that is, if more than one DMAC supports the same
>> channel/peripheral.
>> For example, Samsung's S5Pxxxx have many channels available on more than
>> one DMAC, but for this dma api we have to statically assign channels to
>> DMACs, which may result in a channel acquire request being rejected just
>> because the DMAC we chose for it is already fully busy while another
>> DMAC, which also supports the channel, is idling.
>> Unless we treat the same peripheral as, say, I2STX_viaDMAC1 and
>> I2STX_viaDMAC2 and allocate double resources for these "mutually
>> exclusive" channels.
>
> I am not understanding this example. If both DMACs are registered, the
> dma_filter function passed to dma_request_channel() can select between
> them, right?
Let me be precise. The I2S_Tx fifo (I2S peripheral/channel) can be reached
by two DMACs but, of course, the channel can only be active on exactly one
DMAC at a time.
So, it is desirable to be able to reach the peripheral via the second DMAC
should the first one be too busy to handle the request. Clearly this is a
runtime decision.
FWIHS, I can associate the channel with either of the DMACs, and if that
DMAC can't handle the I2S_Tx request (say, because all its h/w threads are
allocated to other requests), I can't play audio even if the other DMAC is
simply idling.
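For reference, this is roughly what the dma_filter route looks like (sketch
only; the "dmac1"/"dmac2" device names are made up). The filter can pick a
channel on either DMAC at dma_request_channel() time, but once the channel
is handed out the peripheral stays bound to that one DMAC - there is no
re-routing to the other controller at runtime if the first turns out to be
busy.

#include <linux/dmaengine.h>
#include <linux/device.h>
#include <linux/string.h>

static bool i2s_tx_filter(struct dma_chan *chan, void *param)
{
	const char *dev = dev_name(chan->device->dev);

	/* accept a channel on either of the two capable controllers */
	return !strcmp(dev, "dmac1") || !strcmp(dev, "dmac2");
}

static struct dma_chan *get_i2s_tx_chan(void)
{
	dma_cap_mask_t mask;

	dma_cap_zero(mask);
	dma_cap_set(DMA_SLAVE, mask);

	return dma_request_channel(mask, i2s_tx_filter, NULL);
}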
>>
>> d) Something like circular-linked-request is highly desirable for one of
>> the important DMA clients, i.e. audio.
>
> Is this a standing dma chain that periodically a client will say "go"
> to re-run those operations? Please enlighten me, I've never played
> with audio drivers.
Yes, quite similar. Only alsa drivers will say "go" just once at playback
start, and the submitted xfer requests (called periods) are repeatedly
transferred in a circular manner.
Just a quick snd_pcm_period_elapsed() is called in the dma-done callback
for each request (the requests are usually of the same length).
That way, the client neither has to re-submit requests nor needs to do
sleepable stuff (allocating memory for new requests and managing a local
state machine).
The minimum period size depends on the audio latency, which depends on the
ability to do dma-done callbacks asap.
This is another example where the clients would benefit from callbacks in
IRQ context, which here is also perfectly safe.
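Roughly, the ALSA side would reduce to something like this (a sketch
assuming some circular/cyclic submit operation exists; the struct and
function names around it are made up, snd_pcm_period_elapsed() is the real
ALSA call):

#include <sound/pcm.h>

struct audio_dma_ctx {
	struct snd_pcm_substream *substream;
};

/* dma-done callback, one call per period; trivial enough for IRQ context */
static void audio_period_done(void *param)
{
	struct audio_dma_ctx *ctx = param;

	/* no re-submission, no allocation - the h/w just wraps around */
	snd_pcm_period_elapsed(ctx->substream);
}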
>> e) There seems to be no ScatterGather support for Mem to Mem transfers.
>
> There has never been a use case, what did you have in mind? If
> multiple prep_memcpy commands are too inefficient we could always add
> another operation.
Just that I believe any API should be as exhaustive and generic as
possible. I see it possible for multimedia devices/drivers to evolve to
the point of needing such capabilities.
Also, the way the DMA API treats memcpy/memset, and assumes SG requests to
be equivalent to MEM<=>DEV requests, is not very impressive.
IMHO, any submitted request should be a list of xfers. And an xfer is a
'memset' with 'src_len' bytes from 'src_addr' to be copied 'n' times at
'dst_addr'.
Memcpy is just a special case of memset, where n := 1.
This covers most possible use cases while being more compact and
future-proof.
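Something like this (names invented here, just to show the idea):

#include <linux/list.h>
#include <linux/types.h>

struct dma_xfer {
	struct list_head node;		/* links xfers into one request */
	dma_addr_t	 src_addr;
	dma_addr_t	 dst_addr;
	size_t		 src_len;	/* bytes in the source pattern */
	unsigned int	 n;		/* repetitions; memcpy when n == 1 */
};

struct dma_request {
	struct list_head xfers;		/* scatter-gather comes for free */
};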