PCI memory mapping

Lucas Stach l.stach at pengutronix.de
Tue Apr 8 02:54:09 PDT 2025


Hi Renaud,

On Monday, 2025-04-07 at 14:55 +0000, Renaud Barbier wrote:
> Hello,
> Barebox version: 2024-09
> 
> I am porting the Linux PCIe driver for a Broadcom Cortex-A9 (ARMv7) chip.
> So far I am able to detect the bridge and the NVMe device attached to it:
> 
> pci: pci_scan_bus for bus 0
> pci:  last_io = 0x00000000, last_mem = 0x20000000, last_mem_pref = 0x00000000
> pci: class = 00000604, hdr_type = 00000001
> pci: 00:00 [14e4:b170]
> 
> pci: pci_scan_bus for bus 1
> pci:  last_io = 0x00000000, last_mem = 0x20000000, last_mem_pref = 0x00000000
> pci: class = 00000108, hdr_type = 00000000
> pci: 01:00 [126f:2263]
> ERROR: pci: last_mem = 0x20000000, 16384
> pci: pbar0: mask=ffffc004 NP-MEM 16384 bytes
> ...
> pci: class = 00000108, hdr_type = 00000000
> pci: 01:f8 [126f:2263]
> ERROR: pci: last_mem = 0x2007c000, 16384
> pci: pbar0: mask=ffffc004 NP-MEM 16384 bytes
> pci: pci_scan_bus returning with max=02
> pci: bridge NP limit at 0x20100000
> 
I highly doubt that your NVMe device actually occupies all those BDF
(bus/device/function) addresses.

Either your host driver isn't properly reporting bus timeouts on the
PCI_VENDOR_ID config space access, which makes it appear to the
topology walk as if there are multiple devices on the bus, or, more
likely given the symptoms you report, your host driver doesn't
properly set up the DF part of the BDF for the config space requests.
In that case the first device on the bus answers all of the config
space requests, which again makes it look like there are multiple
devices, but the endpoint actually ends up configured with the BAR
setup of the last "device" on the bus. If you then try to access the
MMIO space of the first device, there is no endpoint configured to
handle the request, causing a bus abort.
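
To make that concrete, here is a minimal sketch of what such a config
space read callback looks like in the Linux-style pci_ops form. The
register offsets, struct brcm_pcie, to_brcm_pcie() and
brcm_pcie_cfg_timed_out() are made up for illustration, not taken from
your driver; the comments mark the two spots that matter for the
failure modes above:

#define CFG_INDEX	0x9000		/* hypothetical offsets */
#define CFG_DATA	0x8000

static u32 cfg_index(int busnr, u32 devfn, int reg)
{
	/*
	 * ECAM-style encoding: bus[27:20] dev[19:15] fn[14:12]
	 * reg[11:0]. If the PCI_SLOT()/PCI_FUNC() bits are dropped
	 * here, every devfn on the bus selects the same endpoint.
	 */
	return (busnr << 20) | (PCI_SLOT(devfn) << 15) |
	       (PCI_FUNC(devfn) << 12) | (reg & 0xffc);
}

static int brcm_pcie_rd_conf(struct pci_bus *bus, u32 devfn,
			     int where, int size, u32 *val)
{
	struct brcm_pcie *pcie = to_brcm_pcie(bus);
	u32 data;

	writel(cfg_index(bus->number, devfn, where),
	       pcie->base + CFG_INDEX);
	data = readl(pcie->base + CFG_DATA);

	/*
	 * A timed-out access must not hand stale data back to the
	 * scan: report all-ones so the topology walk treats this
	 * BDF as empty.
	 */
	if (brcm_pcie_cfg_timed_out(pcie)) {
		*val = 0xffffffff;
		return PCIBIOS_DEVICE_NOT_FOUND;
	}

	*val = data >> (8 * (where & 3));
	if (size == 1)
		*val &= 0xff;
	else if (size == 2)
		*val &= 0xffff;

	return PCIBIOS_SUCCESSFUL;
}

With the DF bits missing from that encoding, all 256 devfn probes on
bus 1 land on the same endpoint, which would explain the identical
126f:2263 hits from 01:00 all the way to 01:f8 in your log.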

Regards,
Lucas


