[PATCH v7 01/10] ARM: davinci: move private EDMA API to arm/common

Linus Walleij linus.walleij at linaro.org
Mon Feb 4 15:29:46 EST 2013


On Mon, Feb 4, 2013 at 8:22 PM, Cyril Chemparathy <cyril at ti.com> wrote:

> Based on our experience with fitting multiple subsystems on top of this
> DMA-Engine driver, I must say that the DMA-Engine interface has proven
> to be a less than ideal fit for the network driver use case.
>
> The first problem is that the DMA-Engine interface expects to "push"
> completed traffic up into the upper layer as a part of its callback.
> This doesn't fit cleanly with NAPI, which expects to "pull" completed
> traffic from below in the NAPI poll.  We've somehow kludged together a
> solution around this, but it isn't very elegant.

I cannot understand the actual technical problem from the above
paragraphs, though. dmaengine doesn't have a concept of pushing
or polling; it basically copies streams of words from A to B, where
A and B can each be a device or a buffer, nothing else.

The thing you're looking for sounds more like an adapter on top
of dmaengine, which can surely be constructed, say as a
drivers/dma/dmaengine-napi.c or whatever.
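
To make the pull model concrete, here is a rough sketch of such an
adapter, assuming a driver-private queue of completed buffers. The
edma_napi_* names and the done_queue are made up for illustration,
not an existing API: the dmaengine callback only parks the buffer and
schedules NAPI, and the poll routine pulls from the queue.

#include <linux/dmaengine.h>
#include <linux/netdevice.h>
#include <linux/skbuff.h>

struct edma_napi_priv {
        struct napi_struct napi;
        struct sk_buff_head done_queue; /* completed, not yet given to the stack */
};

struct edma_napi_desc {
        struct edma_napi_priv *priv;
        struct sk_buff *skb;
};

/* dmaengine completion callback: don't push into the stack here,
 * just park the buffer and kick NAPI */
static void edma_napi_dma_callback(void *param)
{
        struct edma_napi_desc *d = param;

        skb_queue_tail(&d->priv->done_queue, d->skb);
        napi_schedule(&d->priv->napi);
}

/* NAPI poll: pull completed buffers and hand them to the stack */
static int edma_napi_poll(struct napi_struct *napi, int budget)
{
        struct edma_napi_priv *priv =
                container_of(napi, struct edma_napi_priv, napi);
        int work = 0;

        while (work < budget) {
                struct sk_buff *skb = skb_dequeue(&priv->done_queue);

                if (!skb)
                        break;
                napi_gro_receive(napi, skb);
                work++;
        }

        if (work < budget)
                napi_complete(napi);

        return work;
}

Descriptor bookkeeping and refill are omitted; the point is only that
nothing is pushed into the stack from the completion callback itself.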

> The second problem is one of binding fixed DMA resources to fixed users.
>   AFAICT, the stock DMA-Engine mechanism works best when one DMA
> resource is as good as any other.

The filter function picks a channel for whatever reason. That reason
can be, well, whatever the driver cares about. Some engines have a
clever mechanism to select resources on the other end.
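
A minimal sketch of what I mean, assuming the selection criterion is a
channel number handed over in platform data (my_edma_filter and the
"wanted" number are made up for illustration):

#include <linux/dmaengine.h>

/* Claim only the channel our platform data told us to use; chan_id
 * stands in for whatever criterion the engine actually exposes. */
static bool my_edma_filter(struct dma_chan *chan, void *param)
{
        int wanted = *(int *)param;

        return chan->chan_id == wanted;
}

static struct dma_chan *my_get_rx_chan(int wanted)
{
        dma_cap_mask_t mask;

        dma_cap_zero(mask);
        dma_cap_set(DMA_SLAVE, mask);

        /* dma_request_channel() offers every registered channel to
         * the filter until it says yes (or runs out of channels) */
        return dma_request_channel(mask, my_edma_filter, &wanted);
}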

Then for tying devices to channels we have the dmaengine
DT branch:
http://git.infradead.org/users/vkoul/slave-dma.git/shortlog/refs/heads/topic/dmaengine_dt

This stuff didn't go into v3.8, but you can *surely* expect it
to be in v3.9.

Or are you referring to a multi-engine scenario? Say there are engines
A and B, and depending on circumstances A or B may be preferred
in some order (and permutations of that problem). That is a currently
identified shortcoming that we need help to address.

> To get over this problem, we've added
> support for named channels, and drivers specifically request for a DMA
> resource by name.  Again, this is less than ideal.

Jon Hunter has been working on a mechanism to look up DMA channels
from a struct device *, a dev_name() or a device tree node, for example,
just like we do with clocks or regulators.

Look at this patch from the dmaengine_dt branch:
http://git.infradead.org/users/vkoul/slave-dma.git/commitdiff/528499a7037ebec0636d928f88cd783c618df3c5

It looks up an optionally named channel for a certain
device.

It currently only supports device tree, but you are free to
patch in whatever mechanism you need there. Static tables in
platform data would work too; it's just that nobody has done it.

So go ahead and hack on dma_request_slave_channel().
(I would just branch off the DT branch.)
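
To illustrate, a consumer would then look roughly like this, assuming
the dmaengine_dt branch above; the "rx" name and my_probe() are just
examples:

#include <linux/dmaengine.h>
#include <linux/platform_device.h>

static int my_probe(struct platform_device *pdev)
{
        struct dma_chan *rx_chan;

        /* "rx" would match a dma-names entry in the device tree, or
         * whatever static table someone wires up for non-DT platforms */
        rx_chan = dma_request_slave_channel(&pdev->dev, "rx");
        if (!rx_chan)
                return -EPROBE_DEFER;

        /* slave config, descriptor prep, etc. go here */
        return 0;
}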

> We found that virtio devices offer a more elegant solution to this
> problem.  First, the virtqueue interface is a much better fit into NAPI
> (callback --> napi schedule, napi poll --> get_buf), and this eliminates
> the need for aforementioned kludges in the code.  Second, the virtio
> device infrastructure nicely uses the device model to solve the problem
> of binding DMA users to specific DMA resources.

Not that I understand the polling issue, but it sounds to me like
what Jon is doing is similar.

Surely the way to look up resources cannot be paramount in this
discussion; I think the real problem must be your specific networking
use case, so we need to drill into that.

Yours,
Linus Walleij


