[PATCH v12 11/31] documentation: iommu: add binding document of Exynos System MMU
Dave.Martin at arm.com
Thu May 1 04:15:35 PDT 2014
On Tue, Apr 29, 2014 at 10:46:18PM +0200, Arnd Bergmann wrote:
> On Tuesday 29 April 2014 19:16:02 Dave Martin wrote:
> > For example, suppose devices can post MSIs to an interrupt controller
> > via a mailbox accessed through the IOMMU. Suppose also that the IOMMU
> > generates MSIs itself in order to signal management events or faults
> > to a host OS. Linux (as host) will need to configure the interrupt
> > controller separately for the IOMMU and for the IOMMU clients. This
> > means that Linux needs to know which IDs may travel to the interrupt
> > controller for which purpose, and they must be distinct.
> I don't understand. An MSI controller is just an address that acts
> as a DMA slave for a 4-byte inbound data packet. It has no way of
> knowing who is sending data, other than by the address or the data
> sent to it. Are you talking of something else?
Oops, looks like there are a few points I failed to respond to here...
I'm not an expert on PCI -- I'm prepared to believe it works that way.
GICv3 can discriminate between different MSI senders based on ID
signals on the bus.
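Purely as an illustrative sketch (the node names, addresses and the
0x100 stream ID below are invented; the binding details were still
under discussion at the time), a master whose MSI writes traverse the
IOMMU, alongside the IOMMU's own fault/event MSIs, might be described
along these lines:

```dts
/* Hypothetical example: stream ID 0x100 is made up. The point is that
 * the device's transactions (including its MSI writes) carry an ID via
 * the IOMMU, while the IOMMU signals its own events with a distinct
 * identity, and the interrupt controller must tell the two apart. */
smmu: iommu@2b400000 {
	compatible = "arm,smmu-v2";
	reg = <0x2b400000 0x10000>;
	#iommu-cells = <1>;
	msi-parent = <&its>;	/* IOMMU's own fault/event MSIs */
};

dma0: dma@7ff00000 {
	compatible = "vendor,example-dma";	/* invented */
	reg = <0x7ff00000 0x1000>;
	iommus = <&smmu 0x100>;	/* device traffic, incl. posted MSIs */
	msi-parent = <&its>;
};
```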
> > I'm not sure whether there is actually a SoC today that is MSI-capable
> > and contains an IOMMU, but all the components to build one are out
> > there today. GICv3 is also explicitly designed to support such
> > systems.
> A lot of SoCs have MSI integrated into the PCI root complex, which
> of course is pointless from MSI perspective, as well as implying that
> the MSI won't go through the IOMMU.
> We have briefly mentioned MSI in the review of the Samsung GH7 PCI
> support. It's possible that this one can either use the built-in
> MSI or the one in the GICv2m.
We are likely to get non-PCI MSIs in future SoC systems too, and there
are no standards governing how such systems should look.
> > In the future, it is likely that "HSA"-style GPUs and other high-
> > throughput virtualisable bus mastering devices will have capabilities
> > of this sort, but I don't think there's anything concrete yet.
> Wouldn't they just have IOMMUs with multiple contexts?
Who knows? A management component of the GPU that is under exclusive
control of the host or hypervisor might be wired up to bypass the
IOMMU. I'm not saying this kind of thing definitely will happen, but I
can't say confidently that it won't.
> > > how it might be wired up in hardware, but I don't know what it's good for,
> > > or who would actually do it.
> > >
> > > > > A variation would be to not use #iommu-cells at all, but provide a
> > > > > #address-cells / #size-cells pair in the IOMMU, and have a translation
> > > > > as we do for dma-ranges. This is probably most flexible.
> > > >
> > > > That would also allow us to describe ranges of master IDs, which we need for
> > > > things like PCI RCs on the ARM SMMU. Furthermore, basic transformations of
> > > > these ranges could also be described like this, although I think Dave (CC'd)
> > > > has some similar ideas in this area.
> > Ideally, we would reuse the ePAPR "ranges" concept and describe the way
> > sideband ID signals propagate down the bus hierarchy in a similar way.
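To illustrate the idea only (this property and its cell layout are
invented here by analogy with "ranges", not taken from any existing
binding), ID propagation at a bridge could be written as a
<child-id-base, parent, parent-id-base, length> mapping:

```dts
/* Invented syntax: master IDs 0x0-0xff emitted below this bridge are
 * translated to IOMMU stream IDs 0x400-0x4ff, in the same spirit as
 * an address "ranges" translation. */
pci: pci@40000000 {
	compatible = "vendor,example-pcie";	/* invented */
	reg = <0x40000000 0x10000>;
	iommu-map = <0x0 &smmu 0x400 0x100>;
};
```

This would also naturally cover things like PCI root complexes, where
a whole range of requester IDs maps onto a range of IOMMU stream IDs.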
> It would be 'dma-ranges'. Unfortunately that would imply that each DMA
> master is connected to only one IOMMU, which you say is not necessarily
> the case. The simpler case of a device that is only a master on a
> single IOMMU but can use multiple contexts would, however, work fine
> with dma-ranges.
Partly, yes. The concept embodied by "dma-ranges" is correct, but the
topological relationship is not: the assumption that a master device
always masters onto its parent node doesn't work for non-tree-like
topologies.
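For reference, the existing ePAPR-style dma-ranges construct only
expresses a translation along the parent chain (the values below are
illustrative):

```dts
/* Standard dma-ranges usage: DMA address 0x0 on the child bus maps to
 * parent address 0x80000000, with a 1 GiB window. */
soc {
	#address-cells = <1>;
	#size-cells = <1>;

	bus@0 {
		#address-cells = <1>;
		#size-cells = <1>;
		ranges;
		dma-ranges = <0x0 0x80000000 0x40000000>;
	};
};
```

Because the translation is applied at each parent node in turn, a
master that can emit transactions towards more than one IOMMU has no
single parent chain to hang such a description on.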