[PATCH 0/7] DMAENGINE: fixes and PrimeCells
dan.j.williams at intel.com
Sun May 9 03:47:20 EDT 2010
On Sat, May 8, 2010 at 8:48 PM, jassi brar <jassisinghbrar at gmail.com> wrote:
> On Sun, May 9, 2010 at 7:24 AM, Dan Williams <dan.j.williams at intel.com> wrote:
>> On Fri, May 7, 2010 at 7:37 PM, jassi brar <jassisinghbrar at gmail.com> wrote:
>>> IMHO, a DMA api should be as quick as possible - callbacks done in IRQ context.
>>> But since there may be clients that need to do sleepable stuff in
>> None of the current clients sleep in the callback, it's done in
>> soft-irq context. The only expectation is that hard-irqs are enabled
>> during the callback just like timer callbacks. I also would like to
>> see numbers to quantify the claims of slowness.
> The clients evolve around the API, so they don't do what the API doesn't
> allow. Any API should try to impose as few constraints as possible - you
> never know what kind of clients are going to arise.
Running a callback in hard-irq context definitely puts constraints on
the callback implementation to be as minimal as possible... and there
is nothing stopping you from doing that today with the existing
dmaengine interface: see idmac_interrupt.
> Let's say a protocol requires a 'quick' ACK (within a few usecs) on the
> control bus after xfer'ing a large packet on the data bus. All the client
> needs is to be able to toggle some bit of the device controller after the
> DMA is done, which can very well be done in IRQ context but may be too
> late if the callback is done from a tasklet scheduled from the DMAC ISR.
> The point being, a DMA API should be able to do callbacks from IRQ context
> too. That is, assuming the clients know what they do.
You are confusing async_tx constraints and dmaengine. If your driver
is providing the backend of an async_tx operation (currently only
md-raid acceleration) then md-raid can assume that the callback is
being performed in an irq-enabled non-sleepable context. If you are
not providing an async_tx backend service then those constraints are
lifted. I think I would like to make this explicit with a
CONFIG_DMA_SUPPORTS_ASYNC_TX option to clearly mark the intended use
model of the dma controller.
> Also, I think it is possible to have an API that allows request submission from
> callbacks, which will be a very useful feature.
> Of course, assuming the clients know what they can/can't do (just like current
> DMA API or any other API).
It's a driver-specific implementation detail whether it supports submission
from the callback. As a "general" rule clients should not assume that
all drivers support this, but in the architecture specific case you
know which driver you are talking to, so this should not be an issue.
>>> callbacks, the API may do two callbacks - 'quick' in irq context and
>>> 'lazy' from tasklets scheduled from the IRQ. Most clients will provide
>>> either, while some may provide both callback functions.
>>> b) There seems to be no clear way of reporting failed transfers. The
>>> client can get FAIL/SUCCESS, but the call is open-ended and can be
>>> performed without any time bound after tx_submit. It is not very
>>> optimal for DMAC drivers to save descriptors of all failed
>>> transactions until the channel is released.
>>> IMHO, provision of status checking by two mechanisms - cookie and
>>> dma-done callbacks - is complication more than a feature. Perhaps the
>>> dma engine could provide a default callback, should the client not
>>> provide one, and track done/pending xfers for such requests?
>> I agree the error handling was designed around mem-to-mem assumptions
>> where failures are due to double-bit ECC errors and other rare events.
> Well, I have never seen a DMA failure either, but a good API shouldn't
> count upon h/w perfection.
It doesn't count on perfection, it treats failures the same way the
cpu would react to an unhandled data abort, i.e. panic. I was thinking
of a case like sata where you might see dma errors on a daily basis.
>>> c) Conceptually, the channels are tightly coupled with the DMACs; there
>>> seems to be no way to schedule a channel among more than one DMAC at
>>> runtime, that is, if more than one DMAC supports the same
>>> channel/peripheral.
>>> For example, Samsung's S5Pxxxx have many channels available on more
>>> than one DMAC, but for this dma api we have to statically assign
>>> channels to DMACs, which may result in a channel acquire request being
>>> rejected just because the DMAC we chose for it is already fully busy
>>> while another DMAC, which also supports the channel, is idling.
>>> Unless we treat the same peripheral as, say, I2STX_viaDMAC1 and
>>> I2STX_viaDMAC2 and allocate double resources for these "mutually
>>> exclusive" channels.
>> I am not understanding this example. If both DMACs are registered, the
>> dma_filter function passed to dma_request_channel() can select between them.
> Let me be precise. The I2S_Tx fifo (I2S peripheral/channel) can be reached
> by two DMACs but, of course, the channel can only be active with exactly
> one DMAC.
> So, it is desirable to be able to reach the peripheral via the second DMAC
> should the first one be too busy to handle the request. Clearly this is a
> runtime decision.
> FWIHS, I can associate the channel with either of the DMACs, and if that
> DMAC can't handle the I2S_Tx request (say, due to all its h/w threads
> being allocated to other requests), I can't play audio even if the other
> DMAC might be simply idling.
Ah ok, you want load balancing between channels. In that case the 1:1
nature of dma_request_channel() is not the right interface. We would
need to develop something like an architecture specific implementation
of dma_find_channel() to allow dynamic channel allocation at runtime.
But at that point we will have written something that is very
architecture specific, how could we implement that in a generic api?
Basically, if the driver does not want to present resources to generic
clients, does not want to use any of the existing generic channel
allocation mechanisms, and has narrow platform-specific needs, then why
code to/extend a generic api?
For example the ppc440 dma driver had architecture specific allocation
requirements (see arch/powerpc/include/asm/async_tx.h), but it still
wanted to service generic clients.
>>> d) Something like circular-linked-request is highly desirable for one
>>> of the important DMA clients, i.e. audio.
>> Is this a standing dma chain that periodically a client will say "go"
>> to re-run those operations? Please enlighten me, I've never played
>> with audio drivers.
> Yes, quite similar. Only, alsa drivers will say "go" just once at playback
> start, and the submitted xfer requests (called periods) are repeatedly
> transferred in a circular manner.
> Just a quick snd_pcm_period_elapsed is called in the dma-done callback for
> each request (which are usually the same length).
> That way, the client neither has to re-submit requests nor needs to do
> sleepable stuff (allocating memory for new reqs and managing a local state
> machine).
> The minimum period size depends on audio latency, which depends on the
> ability to do dma-done callbacks asap.
> This is another example where the clients would benefit from callbacks
> from IRQ context, which is also perfectly safe.
Ok, thanks for the explanation.
>>> e) There seems to be no ScatterGather support for Mem to Mem transfers.
>> There has never been a use case; what did you have in mind? If
>> multiple prep_memcpy commands is too inefficient we could always add
>> another operation.
> Just that I believe any API should be as exhaustive and generic as possible.
> I see it possible for multimedia devices/drivers to evolve to start needing
> such capabilities.
> Also, the way the DMA API treats memcpy/memset and assumes SG reqs to be
> equivalent to MEM<=>DEV requests is not very impressive.
> IMHO, any submitted request should be a list of xfers. And an xfer is a
> 'memset' with 'src_len' bytes from 'src_addr' to be copied 'n' times
> at 'dst_addr'.
> Memcpy is just a special case of memset, where n := 1
> This covers most possible use cases while being more compact and future-proof.
No, memset is an operation that does not have a source address and
instead writes a pattern. As for the sg support for mem-to-mem
operations... like most things in Linux it was designed around its
users and none of the users at the time (md-raid, net-dma) required
scatter gather support.
Without seeing code it's hard to make a judgment on what can and cannot
fit in dmaengine, but it needs to be judged on what fits in a generic
api and the feasibility of forcing mem-to-mem, device-to-mem, and
device-to-device dma into one api. I am skeptical we can address all
those concerns, but we at least have something passably functional for
the first two. On the other hand, it's perfectly sane for subarchs
like pxa to have their own dma api. If at the end of the day all that
matters is $arch-specific-dma then why mess around with a generic api?