PCIE on LS1021A
Renaud Barbier
Renaud.Barbier at ametek.com
Tue Feb 3 05:06:06 PST 2026
I have a patch ready.
As I was not familiar with the difficulty of supporting and debugging the MMU code and needed a quick workaround, I duplicated the file mmu_32.c into mmu_lpae.c.
Given the many similarities between the two files, I am pretty sure you were expecting an update of mmu_32.c with LPAE support instead.
Please advise whether you would like to receive the patch now or prefer to wait a bit longer.
Cheers,
Renaud
> -----Original Message-----
> From: Ahmad Fatoum <a.fatoum at pengutronix.de>
> Sent: 02 February 2026 10:14
> To: Renaud Barbier <Renaud.Barbier at ametek.com>; Barebox List
> <barebox at lists.infradead.org>
> Cc: Lucas Stach <lst at pengutronix.de>
> Subject: Re: PCIE on LS1021A
>
> ***NOTICE*** This came from an external source. Use caution when
> replying, clicking links, or opening attachments.
>
> Hello Renaud,
>
> On 2/2/26 10:57 AM, Renaud Barbier wrote:
> > From the head of next, I got the MMU with LPAE support to work.
> > I can prepare a patch for the MMU LPAE support and later a patch for
> > the LS1021A PCIE support.
> > I have not tested the code on QEMU yet.
> >
> > Do you require the code to be tested in QEMU before I send it?
>
> We'll want to test the LPAE case in CI, so it doesn't bitrot over time.
>
> I can help with the QEMU integration. For v1, just make sure that a
> single user-visible CONFIG_ARM_LPAE option enables it, and that when it
> is disabled, behavior is unmodified.
>
> Cheers,
> Ahmad
>
> >
> >> -----Original Message-----
> >> From: barebox <barebox-bounces at lists.infradead.org> On Behalf Of
> >> Renaud Barbier
> >> Sent: 28 January 2026 16:40
> >> To: Ahmad Fatoum <a.fatoum at pengutronix.de>; Barebox List
> >> <barebox at lists.infradead.org>
> >> Cc: Lucas Stach <lst at pengutronix.de>
> >> Subject: RE: PCIE on LS1021A
> >>
> >> Just to let you know, I was developing from barebox 2024.09, as this
> >> was a requirement for our product.
> >> I have started to port the LPAE support over to the next branch.
> >> Barebox is booting but currently failing to probe the PCIe NVME device.
> >>
> >> A bit more debugging and hopefully, I can get something soon.
> >>
> >>> -----Original Message-----
> >>> From: Ahmad Fatoum <a.fatoum at pengutronix.de>
> >>> Sent: 20 January 2026 13:41
> >>> To: Renaud Barbier <Renaud.Barbier at ametek.com>; Barebox List
> >>> <barebox at lists.infradead.org>
> >>> Cc: Lucas Stach <lst at pengutronix.de>
> >>> Subject: Re: PCIE on LS1021A
> >>>
> >>> Hello Renaud,
> >>>
> >>> On 1/13/26 7:26 PM, Renaud Barbier wrote:
> >>>> Changing the NVME to the PCIe2 bus and fixing a few things in the
> >>>> MMU support, I am now able to detect the NVME:
> >>>>
> >>>> nvme pci-126f:2263.0: serial: A012410180629000000
> >>>> nvme pci-126f:2263.0: model: SM681GEF AGS
> >>>> nvme pci-126f:2263.0: firmware: TFX7GB
> >>>>
> >>>> barebox:/ ls /dev/nvme0n1
> >>>> barebox:/ ls /dev/nvme0n1*
> >>>> /dev/nvme0n1 /dev/nvme0n1.0
> >>>> /dev/nvme0n1.1 /dev/nvme0n1.2
> >>>> /dev/nvme0n1.3 /dev/nvme0n1.4
> >>>> ...
> >>>>
> >>>> Thanks to the following remapping:
> >>>>
> >>>> /* PCIe1 config and memory area remapping */
> >>>> map_io_sections(0x4000000000ULL, IOMEM(0x24000000), 192 << 20); /* PCIe1 conf space */
> >>>> //map_io_sections(0x4040000000ULL, IOMEM(0x40000000), 128 << 20); /* PCIe1 mem space */
> >>>>
> >>>> /* PCIe2 config and memory area remapping */
> >>>> map_io_sections(0x4800000000ULL, IOMEM(0x34000000), 192 << 20); /* PCIe2 conf space */
> >>>> map_io_sections(0x4840000000ULL, IOMEM(0x50000000), 128 << 20); /* PCIe2 mem space */
> >>>>
> >>>> For some reason, I had to comment out the remapping of the PCIe1
> >>>> MEM space as the system hangs just after detecting the NVME device.
> >>>> The PCIe1 device node is not even enabled.
> >>>> If you have a clue, let me know.
> >>>
> >>> I don't have an idea off the top of my head, sorry.
> >>> If you have something roughly working, it would be good if you could
> >>> check it works with qemu-system-arm -M virt,highmem=on and send an
> >>> initial patch series?
> >>>
> >>> Cheers,
> >>> Ahmad
> >>>
> >>>>
> >>>> Cheers,
> >>>> Renaud
> >>>>
> >>>>
> >>>>
> >>>>
> >>>>> -----Original Message-----
> >>>>> From: barebox <barebox-bounces at lists.infradead.org> On Behalf Of
> >>>>> Renaud Barbier
> >>>>> Sent: 07 January 2026 09:44
> >>>>> To: Ahmad Fatoum <a.fatoum at pengutronix.de>; Barebox List
> >>>>> <barebox at lists.infradead.org>
> >>>>> Cc: Lucas Stach <lst at pengutronix.de>
> >>>>> Subject: RE: PCIE on LS1021A
> >>>>>
> >>>>> Based on your information and on U-Boot, I have started to work on
> >>>>> the LPAE support. So far it is full of debugging code and hacks.
> >>>>>
> >>>>> It is based on the mmu_32.c file. As I have failed to get the
> >>>>> three MMU table levels working, at present I am using only two,
> >>>>> as in U-Boot.
> >>>>> The 64-bit PCI space is remapped with:
> >>>>> map_io_sections(0x4000000000ULL, IOMEM(0x24000000UL), 192 << 20);
> >>>>>
> >>>>> To detect the NVME device, the virtual address 0x24000000 is
> >>>>> hard-coded into the functions dw_pcie_[wr|rd]_other_conf of
> >>>>> drivers/pci/pcie-designware-host.c as follows:
> >>>>> if (bus->primary == pp->root_bus_nr) {
> >>>>>         type = PCIE_ATU_TYPE_CFG0;
> >>>>>         cpu_addr = pp->cfg0_base;
> >>>>>         cfg_size = pp->cfg0_size;
> >>>>>         pp->va_cfg0_base = IOMEM(0x24000000); /* XXX */
> >>>>>         va_cfg_base = pp->va_cfg0_base;
> >>>>>
> >>>>> What is the method to pass the address to the driver?
> >>>>>
> >>>>> And I get the following:
> >>>>> layerscape-pcie 3400000.pcie@3400000.of: host bridge /soc/pcie@3400000 ranges:
> >>>>> layerscape-pcie 3400000.pcie@3400000.of: Parsing ranges property...
> >>>>> layerscape-pcie 3400000.pcie@3400000.of: IO 0x4000010000..0x400001ffff -> 0x0000000000
> >>>>> layerscape-pcie 3400000.pcie@3400000.of: MEM 0x4040000000..0x407fffffff -> 0x0040000000
> >>>>>
> >>>>> ERROR: io_bus_addr = 0x0, io_base = 0x4000010000
> >>>>> ERROR: mem_bus_addr = 0x4040000000 -> based on the Linux output,
> >>>>> the mem_bus_addr should be 0x4000.0000, as above, to be programmed
> >>>>> in the ATU target register.
> >>>>> ERROR: mem_base = 0x4040000000, offset = 0x0
> >>>>>
> >>>>> ERROR: layerscape-pcie 3400000.pcie@3400000.of: iATU unroll: disabled
> >>>>> pci: pci_scan_bus for bus 0
> >>>>> pci: last_io = 0x00010000, last_mem = 0x40000000, last_mem_pref = 0x00000000
> >>>>> pci: class = 00000604, hdr_type = 00000001
> >>>>> pci: 00:00 [1957:0e0a]
> >>>>> pci: pci_scan_bus for bus 1
> >>>>> pci: last_io = 0x00010000, last_mem = 0x40000000, last_mem_pref = 0x00000000
> >>>>>
> >>>>> pci: class = 00000108, hdr_type = 00000000
> >>>>> pci: 01:00 [126f:2263] -> NVME device found
> >>>>> pci: pbar0: mask=ffffc004 NP-MEM 16384 bytes
> >>>>> ERROR: pci: &&& sub = 0x2263, 0x126f kind = NP-MEM &&&
> >>>>> ERROR: pci: &&& write BAR 0x10 = 0x40000000 &&&
> >>>>> ...
> >>>>> pci: pci_scan_bus returning with max=02
> >>>>> pci: bridge NP limit at 0x40100000
> >>>>> pci: bridge IO limit at 0x00010000
> >>>>> pci: pbar0: mask=ff000000 NP-MEM 16777216 bytes
> >>>>> pci: pbar1: mask=fc000000 NP-MEM 67108864 bytes
> >>>>> pci: pci_scan_bus returning with max=02
> >>>>> ERROR: nvme pci-126f:2263.0: enabling bus mastering
> >>>>>
> >>>>> Then, the system hangs on the readl() instruction quoted below:
> >>>>> ERROR: nvme_pci_enable : 0x4000001c -> fails to access the NVME
> >>>>> CSTS register. It does not matter whether mem_bus_addr is set to
> >>>>> 0x4000.0000 to program the ATU to translate the address
> >>>>> 0x40.4000.0000 to 0x4000.0000.
> >>>>> if (readl(dev->bar + NVME_REG_CSTS) == -1)
> >>>>>
> >>>>> 0x4000.0000 is also the quadSPI memory area, so I guess I should
> >>>>> remap that access too.
> >>>>>
> >>>>> Unhappily, my work is now at a stop as there is a hardware failure
> >>>>> on my system.
> >>>>>
> >>>>> Note: the MMU may not be set up properly, as the out-of-band
> >>>>> interface fails with a TX timeout. I can reach the prompt after
> >>>>> the NVME probing has failed.
> >>>>>
> >>>>>
> >>>>>
> >>>>
> >>>
> >>> --
> >>> Pengutronix e.K.                 |                             |
> >>> Steuerwalder Str. 21             | http://www.pengutronix.de/  |
> >>> 31137 Hildesheim, Germany        | Phone: +49-5121-206917-0    |
> >>> Amtsgericht Hildesheim, HRA 2686 | Fax:   +49-5121-206917-5555 |
> >
>
> --
> Pengutronix e.K.                 |                             |
> Steuerwalder Str. 21             | http://www.pengutronix.de/  |
> 31137 Hildesheim, Germany        | Phone: +49-5121-206917-0    |
> Amtsgericht Hildesheim, HRA 2686 | Fax:   +49-5121-206917-5555 |