[PATCH 3/6] pci, thunder: Add PCIe host controller devicetree bindings

Robert Richter robert.richter at caviumnetworks.com
Wed Oct 8 01:49:27 PDT 2014


On 07.10.14 16:01:49, Liviu Dudau wrote:
> On Tue, Oct 07, 2014 at 03:27:44PM +0100, Robert Richter wrote:
> > On 24.09.14 18:06:04, Arnd Bergmann wrote:
> > > > +               compatible = "cavium,thunder-pcie";
> > > > +               device_type = "pci";
> > > > +               msi-parent = <&its>;
> > > > +               bus-range = <0 255>;
> > > > +               #size-cells = <2>;
> > > > +               #address-cells = <3>;
> > > > +               reg = <0x8480 0x00000000 0 0x10000000>;  /* Configuration space */
> > > > +               ranges = <0x03000000 0x8010 0x00000000 0x8010 0x00000000 0x70 0x00000000>, /* mem ranges */
> > > > +                       <0x03000000 0x8300 0x00000000 0x8300 0x00000000 0x80 0x00000000>,
> > > > +                       <0x03000000 0x87e0 0x00000000 0x87e0 0x00000000 0x01 0x00000000>;
> > > > +        };
> > > 
> > > If you claim the entire 0-255 bus range, I think you should also
> > > specify a domain, otherwise it's not predictable which domain you
> > > get.
> > 
> > Liviu's code assigns a unique id to the domain if missing, see
> > of_pci_get_domain_nr(). So I don't think we need to add a "pci-domain"
> > property here.
> 
> Not anymore! That function is gone in v12 onwards. What is in -next has
> a new function called of_get_pci_domain_nr() (slight name change) but
> that only gets the value set in the "linux,pci-domain" property of the
> device node. It is the choice of the host bridge driver to use that
> function or to use pci_get_new_domain_nr() which *does* generate a
> unique id every time it gets called.
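
For illustration, a bridge driver combining those two calls might look
like this (just a sketch; the helper name is made up):

----
#include <linux/of.h>
#include <linux/of_pci.h>
#include <linux/pci.h>

/* Hypothetical helper: prefer the "linux,pci-domain" property,
 * fall back to a dynamically allocated domain nr. */
static int hypothetical_get_domain(struct device_node *np)
{
	int domain = of_get_pci_domain_nr(np);

	if (domain < 0)
		domain = pci_get_new_domain_nr();

	return domain;
}
----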

I am a bit confused about which is the latest code base now. I was
looking at Bjorn's pci/host-generic branch, and there is a different
implementation:

----
/**
 * This function will try to obtain the host bridge domain number by
 * using of_alias_get_id() call with "pci-domain" as a stem.  If that
 * fails, a local allocator will be used.  The local allocator can
 * be requested to return a new domain_nr if the information is missing
 * from the device tree.
 *
 * @node: device tree node with the domain information
 * @allocate_if_missing: if DT lacks information about the domain nr,
 * allocate a new number.
 *
 * Returns the associated domain number from DT, or a new domain number
 * if DT information is missing and @allocate_if_missing is true.  If
 * @allocate_if_missing is false then the last allocated domain number
 * will be returned.
 */
int of_pci_get_domain_nr(struct device_node *node, bool allocate_if_missing)
{
        int domain;

        domain = atomic_read(&of_domain_nr);
        if (domain == -1) {
                /* first run, get max defined domain nr in device tree */
                domain = of_get_max_pci_domain_nr();
                /* then set the start value for allocator to be max + 1 */
                atomic_set(&of_domain_nr, domain + 1);
        }
        domain = of_alias_get_id(node, "pci-domain");
        if (domain == -ENODEV) {
                domain = atomic_read(&of_domain_nr);
                if (allocate_if_missing)
                        atomic_inc(&of_domain_nr);
        }

        return domain;
}
EXPORT_SYMBOL_GPL(of_pci_get_domain_nr);
----
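
For comparison, a caller on that branch would presumably look like this
(sketch only; the probe context is made up):

----
/* e.g. in a host bridge probe function (hypothetical): */
domain = of_pci_get_domain_nr(pdev->dev.of_node, true);
if (domain < 0)
	return domain;
----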

This also differs from v13. Please check.

It would be good to have a stable code base to work with, so I would
prefer incremental patches on top of Bjorn's pci/host-generic.

> > Liviu's DT implementation that assigns a unique number differs a bit
> > from ACPI which states: "If _SEG [aka domain number] does not exist,
> > OSPM assumes that all PCI bus segments are in PCI Segment Group 0."
> > 
> > Maybe of_pci_get_domain_nr() should be similar to ACPI. If there are
> > multiple root bridges, the "pci-domain" property could be forced
> > instead.
> 
> Indeed. But the enforcing is left as an exercise to the host bridge
> implementor for the moment.

Right, can be added later.
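
Something along these lines, presumably (untested sketch of an
ACPI-like default in of_pci_get_domain_nr()):

----
	/* Mimic ACPI: no "pci-domain" alias means segment group 0 */
	domain = of_alias_get_id(node, "pci-domain");
	if (domain == -ENODEV)
		domain = 0;
----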

Thanks,

-Robert


