[PATCH v12 11/31] documentation: iommu: add binding document of Exynos System MMU

Grant Grundler grundler at chromium.org
Thu May 1 09:42:24 PDT 2014


On Thu, May 1, 2014 at 6:29 AM, Arnd Bergmann <arnd at arndb.de> wrote:
...
>> GICv3 can discriminate between different MSI senders based on ID
>> signals on the bus.
>
> Any idea what this is good for? Do we have to use it? It probably doesn't
> fit very well into the way Linux handles MSIs today.

I can see this being used for diagnosing failures - e.g. on a hung
system it would leave evidence of whether or not a device actually
delivered an interrupt. I can't think of a reason why the Linux MSI
code would need to support this, though.

...
>> We are likely to get non-PCI MSIs in future SoC systems too, and there
>> are no standards governing how such systems should look.

Why look to the future when one can look at the past? :)

PA-RISC was designed in the 1980s to use "MSI" to generate all CPU
interrupts. This is as simple as it gets.

The concept is identical to MSI: a write to a routeable MMIO address
with a payload that identifies the interrupt source (or maps that
source through some "vector table"). Look at
arch/parisc/kernel/smp.c:ipi_send() for an example (note "p->hpa" is
"processor->host physical address").

Current and future products do the same thing but add more
"features" that make it more complicated. The basic transaction is
identical, though, and it needs to be routed like any other MMIO
transaction by every bridge (including IOMMUs).

> I wouldn't call that MSI though -- using the same term in the code
> can be rather confusing. There are existing SoCs that use message
> based interrupt notification. We are probably better off modeling
> those are regular irqchips in Linux and DT, given that they may
> not be bound by the same constraints as PCI MSI.

A PCI device is one "source" of MSI, and PCI defines how to initialize
an MSI source.  The target is not PCI (and not necessarily Intel).
Intel defines how MSI works on their chipsets/CPUs; others can do it
differently.
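
For the source side, a rough sketch of what programming a PCI MSI
capability looks like. Offsets follow the 32-bit-address MSI
capability layout; cfg_write16()/cfg_write32() are assumed
config-space accessors, not any particular kernel API:

    #include <stdint.h>

    #define MSI_CTRL   0x02   /* Message Control register      */
    #define MSI_ADDR   0x04   /* Message Address (lower 32 bits) */
    #define MSI_DATA   0x08   /* Message Data (32-bit variant)  */

    /* Assumed helpers: write to the function's config space. */
    void cfg_write16(int dev, int off, uint16_t val);
    void cfg_write32(int dev, int off, uint32_t val);

    void msi_setup(int dev, int msi_cap,
                   uint32_t target_addr, uint16_t vector)
    {
            /* The *source* side is PCI-defined: program where to
             * write (address) and what to write (payload). */
            cfg_write32(dev, msi_cap + MSI_ADDR, target_addr);
            cfg_write16(dev, msi_cap + MSI_DATA, vector);

            /* What target_addr decodes to is chipset/CPU specific -
             * Intel, ARM GIC, etc. each define their own target. */
            cfg_write16(dev, msi_cap + MSI_CTRL, 1 << 0); /* enable */
    }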

I'm perfectly OK with using "MSI" to refer to any "in-band interrupt message".

>> Who knows?  A management component of the GPU that is under exclusive
>> control of the host or hypervisor might be wired up to bypass the IOMMU
>> completely.

Does the Linux kernel need to know about a device/component that it
can't control? Either the kernel shouldn't be told about it, or it
should ignore the device/component.

>> Partly, yes.  The concept embodied by "dma-ranges" is correct, but the
>> topological relationship is not: the assumption that a master device
>> always masters onto its parent node doesn't work for non-tree-like
>> topologies.
>
> In almost all cases it will fit. When it doesn't, we can work around it by
> defining virtual address spaces the way that the PCI binding does. The only
> major exception that we know we have to handle is IOMMUs.

MMIO routing (and thus dma-ranges) is a "graph" (very comparable to
network routing). Some simple implementations (e.g. PCI) look like a
tree - but that's not the general case.

DMA cares about more than routing, though: cache coherency and
performance (bandwidth and latency) matter too.
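
To illustrate why the tree assumption is limiting, here's a toy C
model of "dma-ranges"-style translation, where each bus node remaps a
child address into its parent's space. The single parent pointer is
exactly the tree assumption that breaks down on graph-like fabrics.
All types here are invented for illustration:

    #include <stdint.h>
    #include <stddef.h>

    struct bus_node {
            struct bus_node *parent;  /* tree model: exactly one parent */
            uint64_t child_base;      /* dma-ranges: child bus address  */
            uint64_t parent_base;     /* dma-ranges: parent bus address */
            uint64_t size;            /* dma-ranges: window length      */
    };

    /* Translate a DMA address up to the root, applying each node's
     * dma-ranges entry in turn.  Returns ~0 if the address misses a
     * window along the way.
     */
    static uint64_t dma_to_root(struct bus_node *n, uint64_t addr)
    {
            while (n) {
                    if (addr < n->child_base ||
                        addr >= n->child_base + n->size)
                            return ~0ULL;
                    addr = addr - n->child_base + n->parent_base;
                    n = n->parent;
            }
            return addr;
    }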

cheers,
grant


