[PATCH v2 2/2] treewide: Add the __GFP_PACKED flag to several non-DMA kmalloc() allocations

Catalin Marinas catalin.marinas at arm.com
Wed Nov 2 04:05:54 PDT 2022


On Tue, Nov 01, 2022 at 12:10:51PM -0700, Isaac Manjarres wrote:
> On Tue, Nov 01, 2022 at 06:39:40PM +0100, Christoph Hellwig wrote:
> > On Tue, Nov 01, 2022 at 05:32:14PM +0000, Catalin Marinas wrote:
> > > There's also the case of low-end phones with all RAM below 4GB and arm64
> > > doesn't allocate the swiotlb. Not sure those vendors would go with a
> > > recent kernel anyway.
> > > 
> > > So the need for swiotlb now changes from 32-bit DMA to any DMA
> > > (non-coherent but we can't tell upfront when booting, devices may be
> > > initialised pretty late).
> 
> Not only low-end phones, but there are other form-factors that can fall
> into this category and are also memory constrained (e.g. wearable
> devices), so the memory headroom impact from enabling SWIOTLB might be
> non-negligible for all of these devices. I also think it's feasible for
> those devices to use recent kernels.

Another option I had in mind is to disable this bouncing if there's no
swiotlb buffer, so kmalloc() would keep returning objects aligned to
ARCH_DMA_MINALIGN (or to the typically smaller cache_line_size()),
at least until we find a lighter way to do the bouncing. Those devices
would keep working as before.
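
A rough sketch of the idea, with a hypothetical helper name (the check
on io_tlb_default_mem is only illustrative):

	/*
	 * Only let the kmalloc() caches drop below the DMA-safe alignment
	 * when a swiotlb buffer is available to bounce small, unaligned
	 * DMA buffers. Without swiotlb, keep the current behaviour.
	 */
	static inline unsigned int kmalloc_min_align(void)
	{
		if (!io_tlb_default_mem.nslabs)
			return cache_line_size();	/* no bouncing possible */

		return __alignof__(unsigned long long);	/* swiotlb bounces the rest */
	}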

> > Yes.  The other option would be to use the dma coherent pool for the
> > bouncing, which must be present on non-coherent systems anyway.  But
> > it would require us to write a new set of bounce buffering routines.
> 
> I think in addition to having to write new bounce buffering routines,
> this approach still suffers the same problem as SWIOTLB, which is that
> the memory for SWIOTLB and/or the dma coherent pool is not reclaimable,
> even when it is not used.

The dma coherent pool at least has the advantage that its size can be
increased at run-time, so we could start with a small one. It can't be
shrunk, though if that's really needed I guess support could be added.

We'd also skip some cache maintenance here since the coherent pool is
mapped as non-cacheable already. But to Christoph's point, it does
require some reworking of the current bouncing code.
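
Very roughly, the map side of such a scheme could look like the sketch
below. This is not the current code; small_buf_phys_ok is a made-up
suitability callback and error handling is omitted:

	struct page *page;
	void *vaddr;

	/* bounce a small kmalloc() buffer through the atomic coherent pool */
	page = dma_alloc_from_pool(dev, size, &vaddr, GFP_ATOMIC,
				   small_buf_phys_ok);
	if (page) {
		if (dir != DMA_FROM_DEVICE)
			memcpy(vaddr, kbuf, size);
		dma_addr = phys_to_dma(dev, page_to_phys(page));
		/* no extra cache maintenance: the pool is mapped non-cacheable */
	}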

> There's not enough context in the DMA mapping routines to know if we need
> an atomic allocation, so if we used kmalloc(), instead of SWIOTLB, to
> dynamically allocate memory, it would always have to use GFP_ATOMIC.

I've seen the expression below in a couple of places in the kernel,
though IIUC in_atomic() doesn't always detect atomic contexts:

	gfpflags = (in_atomic() || irqs_disabled()) ? GFP_ATOMIC : GFP_KERNEL;

> But what about having a pool that has a small amount of memory and is
> composed of several objects that can be used for small DMA transfers?
> If the amount of memory in the pool starts falling below a certain
> threshold, there can be a worker thread--so that we don't have to use
> GFP_ATOMIC--that can add more memory to the pool?

If the allocation rate is high, it may still end up calling the slab
allocator directly with GFP_ATOMIC.
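
Roughly, with hypothetical pool helpers just to show the fallback:

	buf = small_pool_get(size);			/* pre-allocated objects */
	if (!buf)
		buf = kmalloc(size, GFP_ATOMIC);	/* pool drained by a burst */

	if (small_pool_below_threshold())
		queue_work(system_wq, &small_pool_refill_work);	/* refills with GFP_KERNEL */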

The main downside of any memory pool is identifying the original pool in
dma_unmap_*(). We have a simple is_swiotlb_buffer() check looking just
at the bounce buffer boundaries. For the coherent pool we have the more
complex dma_free_from_pool().
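
For reference, the swiotlb case in the unmap path is roughly this
(simplified from dma_direct_unmap_page()):

	phys_addr_t paddr = dma_to_phys(dev, dma_addr);

	/* a simple boundary check against the swiotlb buffer */
	if (unlikely(is_swiotlb_buffer(dev, paddr)))
		swiotlb_tbl_unmap_single(dev, paddr, size, dir, attrs);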

With a kmem_cache-based allocator (whether it's behind a mempool or
not), we'd need something like virt_to_cache() and a check that the
object came from our DMA cache. I'm not a big fan of digging into the
slab internals for this. An alternative could be an xarray to remember
the bounced dma_addr.
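
Something along these lines (just a sketch of the xarray idea, error
handling omitted):

	static DEFINE_XARRAY(dma_bounce_buffers);

	/* map time, after copying the data into the 'bounce' buffer */
	xa_store(&dma_bounce_buffers, dma_addr, bounce, GFP_ATOMIC);

	/* unmap time */
	bounce = xa_erase(&dma_bounce_buffers, dma_addr);
	if (bounce) {
		/* copy back to the original buffer for DMA_FROM_DEVICE */
		kfree(bounce);
	}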

Anyway, I propose that we try the swiotlb first and look at optimising
it from there, initially using the dma coherent pool.

-- 
Catalin


