[PATCH v8 3/9] pci: Introduce pci_register_io_range() helper function.
arnd at arndb.de
Tue Jul 8 00:00:44 PDT 2014
On Tuesday 08 July 2014, Bjorn Helgaas wrote:
> On Tue, Jul 01, 2014 at 07:43:28PM +0100, Liviu Dudau wrote:
> > +static LIST_HEAD(io_range_list);
> > +
> > +/*
> > + * Record the PCI IO range (expressed as CPU physical address + size).
> > + * Return a negative value if an error has occurred, zero otherwise
> > + */
> > +int __weak pci_register_io_range(phys_addr_t addr, resource_size_t size)
> I don't understand the interface here. What's the mapping from CPU
> physical address to bus I/O port? For example, I have the following
> machine in mind:
> HWP0002:00: PCI Root Bridge (domain 0000 [bus 00-1b])
> HWP0002:00: memory-mapped IO port space [mem 0xf8010000000-0xf8010000fff]
> HWP0002:00: host bridge window [io 0x0000-0x0fff]
> HWP0002:09: PCI Root Bridge (domain 0001 [bus 00-1b])
> HWP0002:09: memory-mapped IO port space [mem 0xf8110000000-0xf8110000fff]
> HWP0002:09: host bridge window [io 0x1000000-0x1000fff] (PCI address [0x0-0xfff])
> The CPU physical memory [mem 0xf8010000000-0xf8010000fff] is translated by
> the bridge to I/O ports 0x0000-0x0fff on PCI bus 0000:00. Drivers use,
> e.g., "inb(0)" to access it.
> Similarly, [mem 0xf8110000000-0xf8110000fff] is translated by the second
> bridge to I/O ports 0x0000-0x0fff on PCI bus 0001:00. Drivers use
> "inb(0x1000000)" to access it.
I guess you are thinking of the IA64 model here where you keep the virtual
I/O port numbers in a per-bus lookup table that gets accessed for each
inb() call. I've thought about this some more, and I believe there are good
reasons for sticking with the model used on arm32 and powerpc for the
generic OF implementation.
The idea is that there is a single virtual memory range for all I/O port
mappings and we use the MMU to do the translation rather than computing
it manually in the inb() implementation. The main advantage is that all
functions used in device drivers to (potentially) access I/O ports
become trivial this way, which helps for code size and in some cases
(e.g. SoC-internal registers with a low latency) it may even be a
performance advantage.
What this scheme gives you is a set of functions that literally do:

/* architecture specific virtual address */
#define PCI_IOBASE ((void __iomem *)0xabcd00000000000)
static inline u32 inl(unsigned long port)
{
	return readl(PCI_IOBASE + port);
}
static inline void __iomem *ioport_map(unsigned long port, unsigned int nr)
{
	return PCI_IOBASE + port;
}
static inline unsigned int ioread32(void __iomem *p)
{
	return readl(p);
}
Since we want this to work on 32-bit machines, the virtual I/O space has
to be rather tightly packed, so Liviu's algorithm just picks the next
available address for each new I/O space.
> pci_register_io_range() seems sort of like it's intended to track the
> memory-mapped IO port spaces, e.g., [mem 0xf8010000000-0xf8010000fff].
> But I would think you'd want to keep track of at least the base port
> number on the PCI bus, too. Or is that why it's weak?
The PCI bus start address only gets factored in when the window is registered
with the PCI core in patch 8/9, where we go over all ranges doing
+ pci_add_resource_offset(resources, res,
+ res->start - range.pci_addr);
With Liviu's patch, this can be done in exactly the same way for both
MMIO and PIO spaces.
> Here's what these look like in /proc/iomem and /proc/ioports (note that
> there are two resource structs for each memory-mapped IO port space: one
> IORESOURCE_MEM for the memory-mapped area (used only by the host bridge
> driver), and one IORESOURCE_IO for the I/O port space (this becomes the
> parent of a region used by a regular device driver):
> PCI Bus 0000:00 I/O Ports 00000000-00000fff
> PCI Bus 0001:00 I/O Ports 01000000-01000fff
> 00000000-00000fff : PCI Bus 0000:00
> 01000000-01000fff : PCI Bus 0001:00
The only difference I'd expect here is that the ranges would be packed
more tightly, so the last two lines instead read
00000000-00000fff : PCI Bus 0000:00
00001000-00001fff : PCI Bus 0001:00
In practice we'd probably have 64KB per host controller, and each of them
would be a separate domain. I think we normally don't register the
IORESOURCE_MEM resource, but I agree it's a good idea and we should
always do that.