[PATCH v3 1/5] dt-bindings: pci: Add Sophgo SG2042 PCIe host

Chen Wang unicorn_wang at outlook.com
Sat Jan 25 18:27:27 PST 2025


hello~

On 2025/1/23 6:21, Bjorn Helgaas wrote:
> On Wed, Jan 15, 2025 at 03:06:37PM +0800, Chen Wang wrote:
>> From: Chen Wang <unicorn_wang at outlook.com>
>>
>> Add binding for Sophgo SG2042 PCIe host controller.
>> +  sophgo,link-id:
>> +    $ref: /schemas/types.yaml#/definitions/uint32
>> +    description: |
>> +      SG2042 uses Cadence IP, every IP is composed of 2 cores (called link0
>> +      & link1 as Cadence's term). Each core corresponds to a host bridge,
>> +      and each host bridge has only one root port. Their configuration
>> +      registers are completely independent. SG2042 integrates two Cadence IPs,
>> +      so there can actually be up to four host bridges. "sophgo,link-id" is
>> +      used to identify which core/link the PCIe host bridge node corresponds to.
> IIUC, the registers of Cadence IP 1 and IP 2 are completely
> independent, and if you describe both of them, you would have separate
> "pcie at 62000000" stanzas with separate 'reg' and 'ranges' properties.

To be precise, each of the two cores of a Cadence IP has its own set 
of configuration registers, that is, the configuration of each core is 
completely independent. This is also what I meant in the binding by 
"Each core corresponds to a host bridge, and each host bridge has only 
one root port. Their configuration registers are completely 
independent." Maybe "Their" is a bit unclear here; my original 
intention was to refer to the cores. I can improve this description in 
the next version.

>  From the driver, it does not look like the registers for Link0 and
> Link1 are independent, since the driver claims the
> "sophgo,sg2042-pcie-host", which includes two Cores, and it tests
> pcie->link_id to select the correct register address and bit mask.
In the driver code, one "sophgo,sg2042-pcie-host" corresponds to one 
core, not two. So, as you can see in patch 4 of this patchset [1], 
three PCIe host-bridge nodes are defined, pcie_rc0 ~ pcie_rc2, each 
corresponding to one core.

[1]:https://lore.kernel.org/linux-riscv/4a1f23e5426bfb56cad9c07f90d4efaad5eab976.1736923025.git.unicorn_wang@outlook.com/


I also need to explain that link0 and link1 are indeed completely 
independent as far as PCIe processing is concerned. However, when 
Sophgo implemented the internal MSI controller for PCIe, the design 
was not clean: the registers handling MSI are not separated per link 
but are mixed together, which is what I referred to as 
cdns_pcie0_ctrl/cdns_pcie1_ctrl. In these two register files added by 
Sophgo (which only involve MSI processing), taking the second Cadence 
IP as an example, some registers control the MSI controller of 
pcie_rc1 (corresponding to link0) and some control the MSI controller 
of pcie_rc2 (corresponding to link1). In more complicated cases, some 
bits in a single register control pcie_rc1 while other bits control 
pcie_rc2. This is why I have to add the link-id property: when 
handling the MSI controller related to a PCIe host bridge, we need to 
know whether it corresponds to link0 or link1, so that we can find the 
corresponding register, or even the corresponding bits of a register, 
in cdns_pcieX_ctrl.
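
To illustrate, here is a minimal sketch of how a driver could combine 
"sophgo,link-id" with the shared syscon to touch only the bits that 
belong to its link. The register offset, bit masks and function name 
here are hypothetical, for illustration only, not the actual SG2042 
register layout:

#include <linux/bits.h>
#include <linux/err.h>
#include <linux/mfd/syscon.h>
#include <linux/of.h>
#include <linux/regmap.h>
#include <linux/types.h>

/* Hypothetical register offset and per-link masks, for illustration. */
#define PCIE_CTRL_MSI_EN_REG	0x100
#define PCIE_CTRL_MSI_EN_LINK0	BIT(0)
#define PCIE_CTRL_MSI_EN_LINK1	BIT(1)

static int sg2042_pcie_msi_enable(struct device_node *np)
{
	struct regmap *syscon;
	u32 link_id = 0;
	u32 mask;
	int ret;

	/* Both links of one Cadence IP share the same ctrl syscon. */
	syscon = syscon_regmap_lookup_by_phandle(np,
						 "sophgo,syscon-pcie-ctrl");
	if (IS_ERR(syscon))
		return PTR_ERR(syscon);

	ret = of_property_read_u32(np, "sophgo,link-id", &link_id);
	if (ret)
		return ret;

	/* Pick the bits that belong to this host bridge's link. */
	mask = link_id ? PCIE_CTRL_MSI_EN_LINK1 : PCIE_CTRL_MSI_EN_LINK0;

	return regmap_update_bits(syscon, PCIE_CTRL_MSI_EN_REG, mask, mask);
}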


> "sophgo,link-id" corresponds to Cadence documentation, but I think it
> is somewhat misleading in the binding because a PCIe "Link" refers to
> the downstream side of a Root Port.  If we use "link-id" to identify
> either Core0 or Core1 of a Cadence IP, it sort of bakes in the
> idea that there can never be more than one Root Port per Core.
The fact is that for the Cadence IP used by SG2042, only one root port 
is supported per core.
>
> Since each Core is the root of a separate PCI hierarchy, it seems like
> maybe there should be a stanza for the Core so there's a place where
> per-hierarchy things like "linux,pci-domain" properties could go,
> e.g.,
>
>    pcie at 62000000 {		// IP 1, single-link mode
>      compatible = "sophgo,sg2042-pcie-host";
>      reg = <...>;
>      ranges = <...>;
>
>      core0 {
>        sophgo,core-id = <0>;
>        linux,pci-domain = <0>;
>
>        port {
>          num-lanes = <4>;	// all lanes
>        };
>      };
>    };
>
>    pcie at 82000000 {		// IP 2, dual-link mode
>      compatible = "sophgo,sg2042-pcie-host";
>      reg = <...>;
>      ranges = <...>;
>
>      core0 {
>        sophgo,core-id = <0>;
>        linux,pci-domain = <1>;
>
>        port {
>          num-lanes = <2>;	// half of lanes
>        };
>      };
>
>      core1 {
>        sophgo,core-id = <1>;
>        linux,pci-domain = <2>;
>
>        port {
>          num-lanes = <2>;	// half of lanes
>        };
>      };
>    };

Based on the above analysis, I think introducing a three-layer 
structure (pcie - core - port) looks a bit too complicated for the 
Cadence IP. In fact, what started this discussion was the question of 
whether some properties should be placed under the host bridge or 
under the root port. I suggest that adding a root port layer on top of 
the existing patch may be enough. What do you think?

e.g.,

pcie_rc0: pcie at 7060000000 {
     compatible = "sophgo,sg2042-pcie-host";
     ...... // host bridge level properties
     sophgo,link-id = <0>;
     port {
         // port level properties
         vendor-id = <0x1f1c>;
         device-id = <0x2042>;
         num-lanes = <4>;
     };
};

pcie_rc1: pcie at 7062000000 {
     compatible = "sophgo,sg2042-pcie-host";
     ...... // host bridge level properties
     sophgo,link-id = <0>;
     port {
         // port level properties
         vendor-id = <0x1f1c>;
         device-id = <0x2042>;
         num-lanes = <2>;
     };
};

pcie_rc2: pcie at 7062800000 {
     compatible = "sophgo,sg2042-pcie-host";
     ...... // host bridge level properties
     sophgo,link-id = <1>;
     port {
         // port level properties
         vendor-id = <0x1f1c>;
         device-id = <0x2042>;
         num-lanes = <2>;
     };
};
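
For the driver, parsing such a "port" subnode could look roughly like 
the sketch below. The structure and helper names are assumptions based 
on the example above, not existing driver code:

#include <linux/errno.h>
#include <linux/of.h>
#include <linux/types.h>

/* Hypothetical container for the port-level properties above. */
struct sg2042_port_props {
	u32 vendor_id;
	u32 device_id;
	u32 num_lanes;
};

static int sg2042_pcie_parse_port(struct device_node *host_np,
				  struct sg2042_port_props *props)
{
	struct device_node *port_np;
	int ret;

	/* The single root port sits in a "port" child of the host bridge. */
	port_np = of_get_child_by_name(host_np, "port");
	if (!port_np)
		return -ENODEV;

	ret = of_property_read_u32(port_np, "vendor-id", &props->vendor_id);
	if (!ret)
		ret = of_property_read_u32(port_np, "device-id",
					   &props->device_id);
	if (!ret)
		ret = of_property_read_u32(port_np, "num-lanes",
					   &props->num_lanes);

	of_node_put(port_np);
	return ret;
}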

[......]

>> +examples:
>> +  - |
>> +    #include <dt-bindings/interrupt-controller/irq.h>
>> +
>> +    pcie at 62000000 {
>> +      compatible = "sophgo,sg2042-pcie-host";
>> +      device_type = "pci";
>> +      reg = <0x62000000  0x00800000>,
>> +            <0x48000000  0x00001000>;
>> +      reg-names = "reg", "cfg";
>> +      #address-cells = <3>;
>> +      #size-cells = <2>;
>> +      ranges = <0x81000000 0 0x00000000 0xde000000 0 0x00010000>,
>> +               <0x82000000 0 0xd0400000 0xd0400000 0 0x0d000000>;
>> +      bus-range = <0x00 0xff>;
>> +      vendor-id = <0x1f1c>;
>> +      device-id = <0x2042>;
>> +      cdns,no-bar-match-nbits = <48>;
>> +      sophgo,link-id = <0>;
>> +      sophgo,syscon-pcie-ctrl = <&cdns_pcie1_ctrl>;
>> +      msi-parent = <&msi_pcie>;
>> +      msi_pcie: msi {
>> +        compatible = "sophgo,sg2042-pcie-msi";
>> +        msi-controller;
>> +        interrupt-parent = <&intc>;
>> +        interrupts = <123 IRQ_TYPE_LEVEL_HIGH>;
>> +        interrupt-names = "msi";
>> +      };
>> +    };
> It would be helpful for me if the example showed how both link-id 0
> and link-id 1 would be used (or whatever they end up being named).
> I assume both have to be somewhere in the same pcie at 62000000 device to
> make this work.
>
> Bjorn


