[PATCH 5/9] PCI: host: brcmstb: add dma-ranges for inbound traffic
Christoph Hellwig
hch at lst.de
Thu Oct 19 02:16:44 PDT 2017
On Wed, Oct 18, 2017 at 10:41:17AM -0400, Jim Quinlan wrote:
> That's what brcm_to_{pci,cpu} are for -- they keep a list of the
> dma-ranges given in the PCIe DT node, and translate from system memory
> addresses to pci-space addresses, and vice versa. As long as people
> are using the DMA API it should work. It works for all of the ARM,
> ARM64, and MIPS Broadcom systems I've tested, using eight different EP
> devices. Note that I am not thrilled to be advocating this mechanism
> but it seemed the best alternative.
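In other words the driver ends up doing a table walk along these
lines (a minimal sketch of what the quoted description implies; the
struct layout and everything except the brcm_to_pci() name are my
invention, not the actual patch code):

#include <linux/types.h>

struct brcm_dma_range {
        u64 cpu_addr;   /* CPU physical base from dma-ranges */
        u64 pci_addr;   /* PCI bus base from dma-ranges */
        u64 size;
};

static struct brcm_dma_range brcm_ranges[6];    /* parsed from the DT */
static int brcm_nranges;

/* cpu -> pci direction; brcm_to_cpu() would be the mirror image */
static u64 brcm_to_pci(u64 cpu)
{
        int i;

        for (i = 0; i < brcm_nranges; i++) {
                struct brcm_dma_range *r = &brcm_ranges[i];

                if (cpu >= r->cpu_addr && cpu - r->cpu_addr < r->size)
                        return r->pci_addr + (cpu - r->cpu_addr);
        }
        return ~0ULL;   /* no matching range */
}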
Say we are using your original example ranges:
memc0-a@[        0....3fffffff] <=> pci@[        0....3fffffff]
memc0-b@[100000000...13fffffff] <=> pci@[ 40000000....7fffffff]
memc1-a@[ 40000000....7fffffff] <=> pci@[ 80000000....bfffffff]
memc1-b@[300000000...33fffffff] <=> pci@[ c0000000....ffffffff]
memc2-a@[ 80000000....bfffffff] <=> pci@[100000000...13fffffff]
memc2-b@[c00000000...c3fffffff] <=> pci@[140000000...17fffffff]
and now you get a DMA mapping request for a contiguous buffer
spanning physical addresses 3fffff00 to 4000000f, which crosses two
of your ranges. How is this going to work?
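Spelled out with the table above (all numbers hex; this is just
arithmetic on the quoted ranges):

        phys 3fffff00..3fffffff -> memc0-a -> pci 3fffff00..3fffffff
        phys 40000000..4000000f -> memc1-a -> pci 80000000..8000000f

One contiguous 0x110-byte buffer lands in two disjoint PCI windows,
yet dma_map_single() has to hand back a single dma_addr_t for the
whole thing.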
> I would prefer that the same code work for all three architectures.
> What I would like from ARM/ARM64 is the ability to override
> phys_to_dma() and dma_to_phys(); I thought the chances of that being
> accepted would be slim. But you are right, I should ask the
> maintainers.
It is still better than trying to stack DMA ops, which is a recipe
for problems down the road.
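For the record, the override being asked for would amount to
something of this shape, assuming ARM/ARM64 grew a hook for
platform-supplied helpers (the hook itself is the hypothetical part;
the table walk is the sketch from earlier in this mail):

#include <linux/dma-mapping.h>

/* Assumes an arch hook that lets the platform supply this helper;
 * brcm_to_pci() is the sketch above.  dma_to_phys() would be the
 * symmetric pci -> cpu lookup. */
dma_addr_t phys_to_dma(struct device *dev, phys_addr_t paddr)
{
        return (dma_addr_t)brcm_to_pci(paddr);
}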