[PATCH 5/9] PCI: host: brcmstb: add dma-ranges for inbound traffic

Jim Quinlan jim2101024 at gmail.com
Fri Oct 20 08:27:41 PDT 2017


On Fri, Oct 20, 2017 at 10:57 AM, Christoph Hellwig <hch at lst.de> wrote:
> On Fri, Oct 20, 2017 at 10:41:56AM -0400, Jim Quinlan wrote:
>> I am not sure I understand your comment -- the size of the request
>> shouldn't be a factor.  Let's look at your example of the DMA request
>> of 0x3fffff00 to 0x4000000f (physical memory).  Let's say it is for 15
>> pages.  If we block out the last page [0x3ffff000..0x3fffffff] from
>> what is available, there is no 15-page span that can happen across the
>> 0x40000000 boundary.  For SG, there can be no merge that connects a
>> page from one region to another region.  Can you give an example of
>> the scenario you are thinking of?
>
> What prevents a merge from, say, the regions of
> 0....3fffffff and 40000000....7fffffff?

Huh? [0x3ffff000...0x3fffffff] is not available to be used. Drawing from
the original example, we now have to tell Linux that these are our
effective memory regions:

      memc0-a@[        0....3fffefff] <=> pci@[        0....3fffefff]
      memc0-b@[100000000...13fffefff] <=> pci@[ 40000000....7fffefff]
      memc1-a@[ 40000000....7fffefff] <=> pci@[ 80000000....bfffefff]
      memc1-b@[300000000...33fffefff] <=> pci@[ c0000000....ffffefff]
      memc2-a@[ 80000000....bfffefff] <=> pci@[100000000...13fffefff]
      memc2-b@[c00000000...c3fffffff] <=> pci@[140000000...17fffffff]

This leaves a one-page gap between physical memory regions that would
normally be contiguous. One cannot have a DMA allocation that spans any
two regions.  This is a drastic step, but I don't see an alternative.
Perhaps I am missing what you are saying...
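
To make this concrete, here is a minimal user-space C sketch of the
trimmed windows and of the check they are meant to enforce.  It is not
taken from the patch: the table only copies the first rows of the
mapping above, a 4 KB page size is assumed (so each 1 GB window shrinks
to 0x3ffff000 bytes), and the names (struct inbound_win,
span_in_one_win(), cpu_to_pci()) are hypothetical, purely for
illustration.

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* One inbound window: CPU (MEMC) base, PCI bus base, usable size. */
struct inbound_win {
        uint64_t cpu_start;
        uint64_t pci_start;
        uint64_t size;
};

/*
 * First three rows of the mapping above, with the last 4 KB page of
 * each window withheld (0x40000000 - 0x1000 = 0x3ffff000 bytes).
 */
static const struct inbound_win wins[] = {
        { 0x000000000ULL, 0x00000000ULL, 0x3ffff000ULL }, /* memc0-a */
        { 0x100000000ULL, 0x40000000ULL, 0x3ffff000ULL }, /* memc0-b */
        { 0x040000000ULL, 0x80000000ULL, 0x3ffff000ULL }, /* memc1-a */
        /* ... remaining windows elided ... */
};

/*
 * A physical span [start, start + len) is usable for DMA only if it
 * sits entirely inside one window.  Since the last page of every
 * window is unusable, no run of usable pages -- and so no SG merge of
 * such pages -- can straddle two windows.
 */
static bool span_in_one_win(uint64_t start, uint64_t len)
{
        size_t i;

        for (i = 0; i < sizeof(wins) / sizeof(wins[0]); i++)
                if (start >= wins[i].cpu_start &&
                    start + len <= wins[i].cpu_start + wins[i].size)
                        return true;
        return false;
}

/*
 * CPU physical to PCI bus address, following the table above; only
 * meaningful for addresses that fall inside a window.
 */
static uint64_t cpu_to_pci(uint64_t addr)
{
        size_t i;

        for (i = 0; i < sizeof(wins) / sizeof(wins[0]); i++)
                if (addr >= wins[i].cpu_start &&
                    addr < wins[i].cpu_start + wins[i].size)
                        return addr - wins[i].cpu_start + wins[i].pci_start;
        return ~0ULL;   /* not reachable for inbound traffic */
}

int main(void)
{
        /* The 15-page example starting just below the 1 GB boundary: */
        printf("%d\n", span_in_one_win(0x3fffff00ULL, 15 * 0x1000ULL)); /* 0 */
        /* The same 15 pages, but wholly inside the memc0-a window:    */
        printf("%d\n", span_in_one_win(0x20000000ULL, 15 * 0x1000ULL)); /* 1 */
        /* memc0-b base translates to PCI 0x40000000 per the table:    */
        printf("%llx\n", (unsigned long long)cpu_to_pci(0x100000000ULL));
        return 0;
}

The first printf shows the point of the trimming: the 15-page span from
the quoted example fails the check, because its usable pages can never
reach across 0x40000000 into the next window.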


