[PATCH 0/2] iommu: Allow passing custom allocators to pgtable drivers

Rob Clark robdclark at gmail.com
Mon Oct 23 14:02:10 PDT 2023


On Wed, Sep 20, 2023 at 6:12 AM Steven Price <steven.price at arm.com> wrote:
>
> On 09/08/2023 13:17, Boris Brezillon wrote:
> > Hello,
> >
> > This patchset is an attempt at making page table allocation
> > customizable. This is useful to some GPU drivers for various reasons
> > (a rough sketch of the proposed hooks follows the list):
> >
> > - speed up future page table allocations by managing a pool of free
> >   pages
> > - batch page table allocation instead of allocating one page at a time
> > - pre-reserve pages for page tables needed for map/unmap operations and
> >   return the unused page tables to some pool
> >
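> > Concretely, the plan is to let the io-pgtable user pass optional
> > alloc/free hooks through struct io_pgtable_cfg, roughly like this
> > (patch 1 has the exact form, which may differ in detail):
> >
> >     struct io_pgtable_cfg {
> >             /* ... existing fields omitted ... */
> >
> >             /* Optional allocator for page-table pages. @cookie is
> >              * the opaque pointer passed at io-pgtable creation
> >              * time, @size the amount of memory needed, @gfp the
> >              * allocation flags. */
> >             void *(*alloc)(void *cookie, size_t size, gfp_t gfp);
> >
> >             /* Returns memory obtained from @alloc. */
> >             void (*free)(void *cookie, void *pages, size_t size);
> >     };
> >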
> > The first and last reasons are particularly important for GPU drivers
> > wanting to implement asynchronous VM_BIND. Asynchronous VM_BIND requires
> > that any page table needed for a map/unmap operation to succeed be
> > allocated at VM_BIND job creation time. At job creation time, we
> > don't know what the VM will look like when we get to execute the
> > map/unmap, and can't guess how many page tables we will need. Because
> > of that, we have to over-provision page tables for the worst-case
> > scenario (an empty page table tree), which means we will allocate and
> > free a lot of pages. Having a pool of free pages is crucial if we
> > want to speed up VM_BIND requests.
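> >
> > For example, for a 4-level, 4K-granule LPAE page table, the
> > worst-case bound for a single map could be computed with something
> > like the hypothetical helper below (not part of this series): one
> > table per level for each chunk of VA space the range spans.
> >
> >     /* Upper bound on the number of page-table pages needed to
> >      * map [iova, iova + size - 1] when the tree is empty (4K
> >      * granule, 9 VA bits per level, levels 1-3 need pages). */
> >     static u32 worst_case_pt_pages(u64 iova, u64 size)
> >     {
> >             u64 start = iova, end = iova + size - 1;
> >             u32 lvl, count = 0;
> >
> >             for (lvl = 1; lvl <= 3; lvl++) {
> >                     /* lvl 3 tables cover 2M, lvl 2 1G, lvl 1 512G */
> >                     u32 shift = 12 + 9 * (4 - lvl);
> >
> >                     count += (end >> shift) - (start >> shift) + 1;
> >             }
> >
> >             return count;
> >     }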
> >
> > A real example of how such custom allocators can be used is available
> > here[1]. v2 of the Panthor driver is approaching submission, and I
> > figured I'd try to upstream the dependencies separately, which is
> > why I'm submitting this series now, even though the user of this new
> > API will come afterwards. If you'd prefer to have those patches
> > submitted along with the Panthor driver, let me know.
> >
> > This approach has been discussed with Robin, and is hopefully not too
> > far from what he had in mind.
>
> The alternative would be to embed a cache of pages into the IOMMU
> framework; however, kmem_cache sadly doesn't seem to support the
> concept of a 'reserve' of pages that we need. mempools could be a
> solution, but the mempool would need to be created by the IOMMU
> framework, as the alloc/free functions are specified when the pool is
> created. So it would be a much bigger change (to drivers/iommu).
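>
> To make the mempool point concrete (names made up, just a sketch):
> the alloc/free callbacks are fixed when the pool is created, so the
> creation would have to happen inside the IOMMU core:
>
>     #include <linux/mempool.h>
>     #include <linux/gfp.h>
>     #include <linux/errno.h>
>
>     static void *pt_page_alloc(gfp_t gfp, void *pool_data)
>     {
>             return (void *)__get_free_page(gfp);
>     }
>
>     static void pt_page_free(void *element, void *pool_data)
>     {
>             free_page((unsigned long)element);
>     }
>
>     static mempool_t *pt_pool;
>
>     static int pt_pool_init(void)
>     {
>             /* 16 pages is an arbitrary reserve size. */
>             pt_pool = mempool_create(16, pt_page_alloc,
>                                      pt_page_free, NULL);
>             return pt_pool ? 0 : -ENOMEM;
>     }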
>
> So, given that so far it's just Panthor, this seems like the right
> approach for now. When/if other drivers want the same functionality,
> it might make sense to revisit the idea of doing the caching within
> the IOMMU framework.

I have some plans to use this for drm/msm as well, but the reasons
and requirements are basically the same as for panthor.  I think I
prefer the custom allocator approach, rather than tying this to the
IOMMU framework.  (But of course custom allocators don't prevent the
iommu driver from doing its own caching.)
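
Something along these lines on the driver side (all names made up,
just to illustrate; assumes the cookie passed at pgtable creation
points at the driver's pool):

    struct pt_pool {
            spinlock_t lock;
            struct list_head pages; /* free PAGE_SIZE pages */
    };

    static void *pt_alloc(void *cookie, size_t size, gfp_t gfp)
    {
            struct pt_pool *pool = cookie;
            struct page *p = NULL;

            /* Only single pages are pooled; larger allocations
             * fall through to the page allocator. */
            if (size == PAGE_SIZE) {
                    spin_lock(&pool->lock);
                    p = list_first_entry_or_null(&pool->pages,
                                                 struct page, lru);
                    if (p)
                            list_del(&p->lru);
                    spin_unlock(&pool->lock);
            }

            if (!p)
                    p = alloc_pages(gfp, get_order(size));

            return p ? page_address(p) : NULL;
    }

    static void pt_free(void *cookie, void *pages, size_t size)
    {
            struct pt_pool *pool = cookie;
            struct page *p = virt_to_page(pages);

            if (size != PAGE_SIZE) {
                    __free_pages(p, get_order(size));
                    return;
            }

            spin_lock(&pool->lock);
            list_add(&p->lru, &pool->pages);
            spin_unlock(&pool->lock);
    }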

BR,
-R

> Robin: Does this approach sound sensible?
>
> FWIW:
>
> Reviewed-by: Steven Price <steven.price at arm.com>
>
> Steve
>
> > Regards,
> >
> > Boris
> >
> > [1]https://gitlab.freedesktop.org/panfrost/linux/-/blob/panthor/drivers/gpu/drm/panthor/panthor_mmu.c#L441
> >
> > Boris Brezillon (2):
> >   iommu: Allow passing custom allocators to pgtable drivers
> >   iommu: Extend LPAE page table format to support custom allocators
> >
> >  drivers/iommu/io-pgtable-arm.c | 50 +++++++++++++++++++++++-----------
> >  drivers/iommu/io-pgtable.c     | 31 +++++++++++++++++++++
> >  include/linux/io-pgtable.h     | 21 ++++++++++++++
> >  3 files changed, 86 insertions(+), 16 deletions(-)
> >
>


