dma-pool fixes
Nicolas Saenz Julienne
nsaenzjulienne at suse.de
Fri Jul 31 06:47:12 EDT 2020
Hi Amit,
On Wed, 2020-07-29 at 17:52 +0530, Amit Pundir wrote:
> On Wed, 29 Jul 2020 at 16:15, Nicolas Saenz Julienne
> <nsaenzjulienne at suse.de> wrote:
> > On Tue, 2020-07-28 at 17:30 +0200, Christoph Hellwig wrote:
> > > On Tue, Jul 28, 2020 at 06:18:41PM +0530, Amit Pundir wrote:
> > > > > Oh well, this leaves me confused again. It looks like your setup
> > > > > really needs a CMA in zone normal for the dma or dma32 pool.
> > > >
> > > > Anything I should look up in the downstream kernel/dts?
> > >
> > > I don't have a good idea right now. Nicolas, can you think of something
> > > else?
> >
> > To summarise, the device is:
> > - Using the dma-direct code path.
> > - Requesting ZONE_DMA memory, yet failing whenever the memory it is
> > given actually falls in ZONE_DMA. The only memory it accepts comes
> > from CMA, which sits at the top of the 4GB boundary (see the sketch
> > just below).
> >
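To make "requesting ZONE_DMA memory" concrete: dma-direct derives its GFP
zone from the device's mask. A paraphrased sketch of the v5.8-era helper in
kernel/dma/direct.c (simplified, not the verbatim source):

/*
 * Paraphrased sketch: a mask whose physical limit falls below
 * zone_dma_bits (30 bits, i.e. 1GB, on arm64 back then) makes
 * dma-direct allocate with GFP_DMA; below 32 bits, GFP_DMA32.
 */
static gfp_t dma_direct_optimal_gfp_mask(struct device *dev, u64 dma_mask,
		u64 *phys_limit)
{
	u64 dma_limit = min_not_zero(dma_mask, dev->bus_dma_limit);

	*phys_limit = dma_to_phys(dev, dma_limit);
	if (*phys_limit <= DMA_BIT_MASK(zone_dma_bits))
		return GFP_DMA;
	if (*phys_limit <= DMA_BIT_MASK(32))
		return GFP_DMA32;
	return 0;
}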
> > My wild guess is that we may be abusing an iommu identity mapping setup by
> > firmware.
> >
> > That said, what would help me is to identify the troublesome device.
> > Amit, could you try adding this patch on top of Christoph's modified
> > series (so the board boots)? Ultimately, DMA atomic allocations are not
> > that common, so we should only get a few hits:
>
> Hi, still not hitting dma_alloc_from_pool().
Sorry for insisting, but the fact that we never hit the atomic path makes the
issue even harder to understand.
> I hit the following direct alloc path only once, at startup:
>
> dma_alloc_coherent()
>  -> dma_alloc_attrs()
>   -> dma_is_direct() -> dma_direct_alloc()
>    -> dma_direct_alloc_pages()
>     -> dma_should_alloc_from_pool() #returns FALSE from here
>
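For reference, the check that returns false there gates the atomic pools to
non-blocking allocations that need remapped or unencrypted memory. A
paraphrased sketch of the v5.8-era helper in kernel/dma/direct.c (not
verbatim):

/*
 * Paraphrased sketch of dma_should_alloc_from_pool() from that era:
 * blocking allocations (GFP_KERNEL and friends) never use the pool,
 * which is why a dma_alloc_coherent() from process context skips it.
 */
static bool dma_should_alloc_from_pool(struct device *dev, gfp_t gfp,
				       unsigned long attrs)
{
	if (!IS_ENABLED(CONFIG_DMA_COHERENT_POOL))
		return false;
	if (gfpflags_allow_blocking(gfp))
		return false;
	if (force_dma_unencrypted(dev))
		return true;
	if (!IS_ENABLED(CONFIG_DMA_DIRECT_REMAP))
		return false;
	if (dma_alloc_need_uncached(dev, attrs))
		return true;
	return false;
}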
> After that, I'm hitting the following iommu dma alloc path all the time:
>
> dma_alloc_coherent()
>  -> dma_alloc_attrs()
>   -> (ops->alloc) -> iommu_dma_alloc()
>    -> iommu_dma_alloc_remap() #always returns from here
>
> So dma_alloc_from_pool() is not getting called at all in either of the
> above cases.
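That is consistent with the dispatch in dma_alloc_attrs(): once a device has
IOMMU DMA ops attached, the direct path (and with it any dma_alloc_from_pool()
call from there) is never reached. Roughly, paraphrasing the v5.8-era
kernel/dma/mapping.c:

void *dma_alloc_attrs(struct device *dev, size_t size, dma_addr_t *dma_handle,
		gfp_t flag, unsigned long attrs)
{
	const struct dma_map_ops *ops = get_dma_ops(dev);
	void *cpu_addr;

	WARN_ON_ONCE(!dev->coherent_dma_mask);

	if (dma_alloc_from_dev_coherent(dev, size, dma_handle, &cpu_addr))
		return cpu_addr;

	/* let the implementation decide on the zone to allocate from: */
	flag &= ~(__GFP_DMA | __GFP_DMA32 | __GFP_HIGHMEM);

	if (dma_is_direct(ops))
		cpu_addr = dma_direct_alloc(dev, size, dma_handle, flag, attrs);
	else if (ops->alloc)
		cpu_addr = ops->alloc(dev, size, dma_handle, flag, attrs);
	else
		return NULL;

	debug_dma_alloc_coherent(dev, size, *dma_handle, cpu_addr);
	return cpu_addr;
}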
OK, so let's see who's doing what and with what constraints:
diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
index 4959f5df21bd..d28b3e4b91d3 100644
--- a/drivers/iommu/dma-iommu.c
+++ b/drivers/iommu/dma-iommu.c
@@ -594,6 +594,9 @@ static void *iommu_dma_alloc_remap(struct device *dev, size_t size,
 	dma_addr_t iova;
 	void *vaddr;
 
+	dev_info(dev, "%s, bus_dma_limit %llx, dma_mask %llx, coherent_dma_mask %llx, in irq %d, size %zu, gfp %x, attrs %lx\n",
+		 __func__, dev->bus_dma_limit, *dev->dma_mask, dev->coherent_dma_mask, in_interrupt(), size, gfp, attrs);
+
 	*dma_handle = DMA_MAPPING_ERROR;
 
 	if (unlikely(iommu_dma_deferred_attach(dev, domain)))
diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
index bb0041e99659..e5474e709e7b 100644
--- a/kernel/dma/direct.c
+++ b/kernel/dma/direct.c
@@ -160,6 +160,9 @@ void *dma_direct_alloc_pages(struct device *dev, size_t size,
 
 	size = PAGE_ALIGN(size);
 
+	dev_info(dev, "%s, bus_dma_limit %llx, dma_mask %llx, coherent_dma_mask %llx, in irq %d, size %zu, gfp %x, attrs %lx\n",
+		 __func__, dev->bus_dma_limit, *dev->dma_mask, dev->coherent_dma_mask, in_interrupt(), size, gfp, attrs);
+
 	if (dma_should_alloc_from_pool(dev, gfp, attrs)) {
 		ret = dma_alloc_from_pool(dev, size, &page, gfp);
 		if (!ret)
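With both hunks applied, every coherent allocation should log the device name
together with its bus_dma_limit, DMA masks, calling context, size, GFP flags
and attrs, which ought to be enough to spot which device is insisting on the
CMA memory and under what constraints.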