PCI memory mapping
Renaud Barbier
Renaud.Barbier at ametek.com
Wed Apr 9 03:00:12 PDT 2025
For information, I had a look at the Linux (6.11.3) PCI driver. The PCI bus probing (PCIe in my case) in drivers/pci/probe.c is limited by the function below, as permitted by the PCIe specification:
static int only_one_child(struct pci_bus *bus)
{
	struct pci_dev *bridge = bus->self;

	/*
	 * Systems with unusual topologies set PCI_SCAN_ALL_PCIE_DEVS so
	 * we scan for all possible devices, not just Device 0.
	 */
	if (pci_has_flag(PCI_SCAN_ALL_PCIE_DEVS))
		return 0;

	/*
	 * A PCIe Downstream Port normally leads to a Link with only Device
	 * 0 on it (PCIe spec r3.1, sec 7.3.1). As an optimization, scan
	 * only for Device 0 in that situation.
	 */
	if (bridge && pci_is_pcie(bridge) && pcie_downstream_port(bridge))
		return 1;

	return 0;
}
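For completeness: platforms with unusual topologies opt out of this optimization by setting the flag before the bus is scanned. A minimal sketch, assuming a platform init hook (the hook name is hypothetical; pci_add_flags() and PCI_SCAN_ALL_PCIE_DEVS are the real kernel interfaces):

#include <linux/init.h>
#include <linux/pci.h>

/* Hypothetical platform hook: must run before the PCI bus is scanned */
static void __init my_platform_pci_init(void)
{
	/* Scan all devfns behind PCIe downstream ports, not just Device 0 */
	pci_add_flags(PCI_SCAN_ALL_PCIE_DEVS);
}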
int pci_scan_slot(struct pci_bus *bus, int devfn)
{
	struct pci_dev *dev;
	int fn = 0, nr = 0;

~	if (only_one_child(bus) && (devfn > 0)) {
+		pr_err("XXX: %s one child devfn = %d\n", __func__, devfn);
		return 0; /* Already scanned the entire slot */
+	}
	...
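To relate this to the log in the quoted message below: devfn packs the device and function numbers, so the ghost entry at 01:f8 decodes to device 31, function 0. A minimal standalone sketch using the standard macros from include/uapi/linux/pci.h:

#include <stdio.h>

#define PCI_SLOT(devfn)	(((devfn) >> 3) & 0x1f)
#define PCI_FUNC(devfn)	((devfn) & 0x07)

int main(void)
{
	unsigned int devfn = 0xf8;	/* the "01:f8" entry from the log */

	/* Prints "device 31, function 0": a devfn > 0 that Linux's
	 * pci_scan_slot() would skip on a PCIe downstream port thanks
	 * to the only_one_child() check above. */
	printf("device %u, function %u\n", PCI_SLOT(devfn), PCI_FUNC(devfn));
	return 0;
}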
> -----Original Message-----
> From: Lucas Stach <l.stach at pengutronix.de>
> Sent: 08 April 2025 10:54
> To: Renaud Barbier <Renaud.Barbier at ametek.com>; Barebox List
> <barebox at lists.infradead.org>
> Subject: Re: PCI memory mapping
>
> Hi Renaud,
>
> On Monday, 2025-04-07 at 14:55 +0000, Renaud Barbier wrote:
> > Hello,
> > Barebox version: 2024-09
> >
> > I am porting the Linux PCIe driver for a Broadcom Cortex-A9 (ARMv7) chip.
> > So far I am able to detect the bridge and the NVMe device attached to it:
> >
> > pci: pci_scan_bus for bus 0
> > pci: last_io = 0x00000000, last_mem = 0x20000000, last_mem_pref =
> > 0x00000000
> > pci: class = 00000604, hdr_type = 00000001
> > pci: 00:00 [14e4:b170]
> >
> > pci: pci_scan_bus for bus 1
> > pci: last_io = 0x00000000, last_mem = 0x20000000, last_mem_pref =
> > 0x00000000
> > pci: class = 00000108, hdr_type = 00000000
> > pci: 01:00 [126f:2263]
> > ERROR: pci: last_mem = 0x20000000, 16384
> > pci: pbar0: mask=ffffc004 NP-MEM 16384 bytes ...
> > pci: class = 00000108, hdr_type = 00000000
> > pci: 01:f8 [126f:2263]
> > ERROR: pci: last_mem = 0x2007c000, 16384
> > pci: pbar0: mask=ffffc004 NP-MEM 16384 bytes
> > pci: pci_scan_bus returning with max=02
> > pci: bridge NP limit at 0x20100000
> >
> I highly doubt that your NVMe device actually occupies all those BDF
> addresses.
>
> Either your host driver isn't properly reporting bus timeouts on the
> PCI_VENDOR_ID config space access, which makes it appear to the topology
> walk as if there are multiple devices on the bus, or, more likely given
> the symptoms you report, your host driver doesn't properly set up the DF
> part of the BDF for the config space requests. In that case the first
> device on the bus may correctly answer all the config space requests,
> which again makes it appear as if there are multiple devices, but the
> endpoint will actually get configured with the BAR setup from the last
> "device" on the bus. If you then try to access the MMIO space of the
> first device, there is no EP configured to handle the request, causing
> a bus abort.
>
> Regards,
> Lucas
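For anyone hitting the same symptom, a minimal sketch of where the device and function bits have to end up, using an ECAM-style config-space layout purely for illustration (the actual Broadcom host controller uses its own config addressing scheme, so the offsets below are not its real layout):

/* ECAM-style mapping: bus in bits 27:20, device in 19:15, function in
 * 14:12. devfn already packs device and function, so it lands at bit 12. */
static void __iomem *map_cfg(void __iomem *cfg_base, unsigned int busnr,
			     unsigned int devfn, int where)
{
	/* If the (devfn << 12) term is dropped or masked out, Device 0
	 * answers every config request on the bus and every devfn looks
	 * like another copy of the same endpoint - the symptom above. */
	return cfg_base + ((busnr << 20) | (devfn << 12) | (where & 0xfff));
}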