[PATCHv4 2/7] driver/core: Populate IOMMU'able devices in order

Hiroshi Doyu hdoyu at nvidia.com
Wed Nov 13 02:23:54 EST 2013


On Wed, 13 Nov 2013 00:34:20 +0100
Stephen Warren <swarren at wwwdotorg.org> wrote:

> On 11/11/2013 01:31 AM, Hiroshi Doyu wrote:
> > An "IOMMU device" on the bus is poplulated first, "IOMMU'able devices"
> > are done later.
> > 
> > With CONFIG_OF_IOMMU, the "#stream-id-cells" DT binding is used to
> > identify whether a device is IOMMU'able or not. If a device is
> > IOMMU'able, we defer populating that device until an IOMMU device
> > has been populated. Once an IOMMU device is populated,
> > "dev->bus->iommu_ops" is set on the bus. Then, those deferred
> > IOMMU'able devices are populated and configured as IOMMU'able with
> > the help of the already populated IOMMU device via
> > iommu_ops->add_device().
> 
> This looks fairly neat and clean.
> 
> I'm still worried about using #stream-id-cells in DT nodes though. While
> I do understand that the *Linux* device model currently only allows each
> struct device to be affected by a single IOMMU, I worry that encoding
> that same restriction into DT is a mistake. I'd far rather see a
> property like:
> 
> SMMU:
>     smmu: smmu@xxxxxx {
>         #smmu-cells = <1>;
>     }
> 
> Affected device:
>     smmus = <&smmu 1>;
>     (perhaps with smmu-names too)
> 
> That would allow the DT to represent basically arbitrary HW configurations.

True, and it could also solve the multi-IOMMU problem.

> The implementation of this patch would then be almost as trivial; you'd
> just need to walk the smmus property to find each phandle in turn, just
> like any other phandle+specifier property, and validate that the SMMU
> driver was already probe()d for each.
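
Right. As a rough sketch (not the actual patch; the driver-readiness
check via platform_device below is only illustrative), walking such a
property could look like:

   #include <linux/errno.h>
   #include <linux/of.h>
   #include <linux/of_platform.h>
   #include <linux/platform_device.h>

   /* Defer until every SMMU referenced by "smmus" has been probed */
   static int of_smmus_ready(struct device_node *np)
   {
           struct of_phandle_args args;
           int i = 0;

           /* Walk each phandle+specifier entry in "smmus" */
           while (!of_parse_phandle_with_args(np, "smmus", "#smmu-cells",
                                              i++, &args)) {
                   struct platform_device *pdev =
                           of_find_device_by_node(args.np);
                   bool ready = pdev && pdev->dev.driver;

                   of_node_put(args.np);
                   if (pdev)
                           put_device(&pdev->dev);
                   if (!ready)
                           return -EPROBE_DEFER;
           }
           return 0;
   }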

This seems almost the same as the previous v3 DT bindings. If we newly
introduce a 64-bit bitmap, the binding would look something like this:

   smmu: iommu@xxxxxx {
       #smmu-cells = <2>;
       ......
   };

   host1x {
           compatible = "nvidia,tegra30-host1x", "simple-bus";
           nvidia,memory-clients = <&smmu 0x??????? 0x???????>;
           ....
           gr3d {
                   compatible = "nvidia,tegra30-gr3d";
                   nvidia,memory-clients = <&smmu 0x??????? 0x???????>;
           };
   };

If a device is attached to multiple IOMMU H/Ws,

           gr3d {
                   compatible = "nvidia,tegra30-gr3d";
                   nvidia,memory-clients = <&smmu 0x??????? 0x??????? &gart 0x???????>;
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
           };

Then, the problem is the binding "name" and its "scope". The original
patch checks "#stream-id-cells" in the driver core because I assumed
that "#stream-id-cells" could globally indicate that a device can be
an IOMMU master.

We may be able to have some kind of callback function which checks
"#stream-id-cells" *in* the SMMU driver, e.g.
iommu_ops->is_iommu_master(), but such a function would need to be
registered on the bus at a very early stage. That cannot be done
through bus->iommu_ops, since bus->iommu_ops is only set after the
IOMMU itself is populated. A kind of chicken-or-egg problem.
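
To illustrate the ordering problem (is_iommu_master() here is the
hypothetical callback mentioned above, not an existing iommu_ops
member):

   #include <linux/device.h>
   #include <linux/iommu.h>

   /*
    * This cannot work as-is: dev->bus->iommu_ops is still NULL at
    * the point where IOMMU'able devices would have to be identified,
    * because it is only set once the IOMMU itself is populated.
    */
   static bool device_is_iommu_master(struct device *dev)
   {
           const struct iommu_ops *ops = dev->bus->iommu_ops;

           return ops && ops->is_iommu_master &&
                  ops->is_iommu_master(dev);
   }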

Having a global binding which indicates a device's IOMMU
master'ability would be quite convenient: for example, "iommu" and
"#iommu-cells", which could be checked without referring to any
driver-local data. The above DT would then be:

   smmu: iommu@xxxxxx {
       #iommu-cells = <2>;
       ^^^^^^^^^^^^^^^^^^
   };

   host1x {
           compatible = "nvidia,tegra30-host1x", "simple-bus";
           iommu = <&smmu 0x??????? 0x???????>;
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
           gr3d {
                   compatible = "nvidia,tegra30-gr3d";
                   iommu = <&smmu 0x??????? 0x???????>;
           };
   };
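
Such a global binding would let the driver core detect an IOMMU
master generically, e.g. (a minimal sketch; of_is_iommu_master() is a
made-up helper name):

   #include <linux/of.h>

   /* True if the node carries the generic "iommu" property */
   static bool of_is_iommu_master(struct device_node *np)
   {
           return of_find_property(np, "iommu", NULL) != NULL;
   }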

What do you think about having a global IOMMU DT binding?


