[RFC v1] PCIe support for the Armada 370 and Armada XP SoCs

Thierry Reding thierry.reding at avionic-design.de
Sun Dec 16 07:33:40 EST 2012


On Fri, Dec 14, 2012 at 10:27:29AM -0700, Jason Gunthorpe wrote:
> On Fri, Dec 14, 2012 at 04:10:45PM +0100, Thierry Reding wrote:
> > > So I tried this today and it breaks horribly. There's some internal
> > > abort or something. I don't have access to the hardware right now and
> > > forgot to save the log output, but I can follow up in the morning. Also
> > > up until the abort, bus 0000:00.0 was identified as the virtual switch
> > > within the FPGA that's connected to port 0, so that would indicate that
> > > it isn't in fact compliant and neither root port is reachable via the
> > > regular mapping.
> > 
> > So here's the output of the crash when removing the special cases that I
> > promised:
> > 
> > [    2.662948] tegra-pcie 80003000.pcie-controller: PCI host bridge to bus 0000:00
> > [    2.670271] pci_bus 0000:00: root bus resource [io  0x82000000-0x8200ffff]
> > [    2.687624] pci_bus 0000:00: root bus resource [mem 0x81000000-0xa7ffffff]
> > [    2.696002] pci_bus 0000:00: root bus resource [mem 0xb0000000-0xb7ffffff pref]
> > [    2.708361] pci_bus 0000:00: root bus resource [bus 00-ff]
> > [    2.746728] pci 0000:00:00.0: [1556:4711] type 01 class 0x060400
> 
> This is your 
> 
>         02:00.0 PCI bridge: Avionic Design GmbH FPGA PCIe PCI-to-PCI (P2P) Bridge
> 
> Device, right?

No, the 1556:4711 is the PLDA bridge. Basically it's the top-level
entity in the FPGA that handles the link layer from the Tegra SoC to the
FPGA.

> Just looking at the driver a bit, and your results, it looks to me
> like the config space for the internal devices is separate from the
> register to send config packets to the bus(es).
> 
> So, it looks like what I suggested earlier is the trouble, you are
> missing the host bridge configuration
> 
> If you change tegra_pcie_read/write_conf to be more like:
> 
> static int tegra_pcie_read_conf(struct pci_bus *bus, unsigned int devfn,
>                                 int where, int size, u32 *val)
> {
> 	void __iomem *addr;
> 
> 	/* Check the host bridge bus, all the PCI-to-PCI bridge ports
> 	   live here. */
> 	if (bus->number == 0) {
> 		if (PCI_SLOT(devfn) >= 0x10 &&
> 		    PCI_SLOT(devfn) < 0x10 + tegra_pcie.num_ports &&
> 		    PCI_FUNC(devfn) == 0) {
> 			/* route the access to the root port's own
> 			   configuration registers */
> 			addr = tegra_pcie.port[PCI_SLOT(devfn) - 0x10].base +
> 			       (where & ~0x3);
> 		} else {
> 			*val = 0xffffffff;
> 			return PCIBIOS_DEVICE_NOT_FOUND;
> 		}
> 	} else {
> 		/* ... compute addr for downstream buses as before ... */
> 	}
> 
> 	/* ... read from addr, mask to size, return PCIBIOS_SUCCESSFUL ... */
> }
> 
> Ie route access for 00:1N.0 to the configuration space on bridge port N

Is there any particular reason why you chose 0x10 as the base slot for
the bridge ports? With the latest DT support patches this mapping should
be even simpler, since we now associate an index with each port and the
port array is gone.
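
Something along these lines, perhaps (completely untested, and the
structure and field names are only placeholders for whatever the DT
patches end up with, i.e. a list of per-port structures carrying their
DT index and an ioremapped register base; the base slot is whatever we
settle on, I've used slot = index + 1 here just as an example):

static void __iomem *tegra_pcie_bus0_conf_addr(struct tegra_pcie *pcie,
					       unsigned int devfn, int where)
{
	struct tegra_pcie_port *port;

	/* only function 0 of each root port slot is valid */
	if (PCI_FUNC(devfn) != 0)
		return NULL;

	/* route bus 0, slot N to the root port with index N - 1 and
	   return its register window so the caller can complete the
	   configuration access there */
	list_for_each_entry(port, &pcie->ports, list)
		if (port->index == PCI_SLOT(devfn) - 1)
			return port->base + (where & ~0x3);

	/* no root port at this slot */
	return NULL;
}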

> Also, you need to change the PCI core binding to report as only one
> controller, and probably some other minor fixups related to that.

Yes, that's precisely what I've been doing. I currently have an ugly
TEGRA_PCIE_SINGLE_BUS define that lets me switch between a one bus per
bridge port and a one bus per controller configuration, and the above
looks like the missing piece to make the latter work. If I'm not
mistaken it should also solve another issue I've been seeing with
matching OF nodes to PCI devices: the buses as instantiated in the
current implementation are root buses without a device attached, and
the bridge ports appear as devices on those buses, which is really
confusing and messes up any kind of mapping to OF nodes in the DT.
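
Roughly speaking (and glossing over the details), the define just
switches how many controllers get registered with the ARM PCI code,
along the lines of the following sketch, reusing the existing
setup/scan/map_irq callbacks:

static struct hw_pci tegra_pcie_hw __initdata = {
#ifdef TEGRA_PCIE_SINGLE_BUS
	/* one root bus for the whole controller */
	.nr_controllers	= 1,
#else
	/* one root bus per root port */
	.nr_controllers	= 2,
#endif
	.setup		= tegra_pcie_setup,
	.scan		= tegra_pcie_scan_bus,
	.map_irq	= tegra_pcie_map_irq,
};

pci_common_init(&tegra_pcie_hw);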

> Then you are a bit closer. You should see both root port bridges
> appear in your lspci.. IIRC the host bridge device is not essential to
> discovery working on Linux.

The second port will probably still not appear, at least not with the
latest DT support patches, since it won't be registered unless it is
enabled. Even when enabled it won't be registered unless a link is
available, which it isn't in any of the setups I currently have. I'll
need to check with our hardware engineers whether we can hook something
up to the second port.

>         00:00.0 PCI bridge: NVIDIA Corporation Device 0bf0 (rev a0) (prog-if 00 [Normal decode])
>                 Capabilities: [60] HyperTransport: MSI Mapping Enable- Fixed-
> 
> Heh, I wonder WTF that does on an ARM system! On a HT system that takes
> care of mapping PCIe format MSI to HT format MSI..

That's probably part of what Stephen mentioned, with this IP being
derived from a desktop variant where HyperTransport is actually
available.

You've provided a lot of very useful information in this thread and I
have a number of things I can try out now to make this work in a more
compliant way. Thanks!

Thierry