[PATCH v2 0/2] iommu: Allow passing custom allocators to pgtable drivers

Jason Gunthorpe jgg at nvidia.com
Fri Nov 10 11:42:15 PST 2023


On Fri, Nov 10, 2023 at 08:16:52PM +0100, Boris Brezillon wrote:
> On Fri, 10 Nov 2023 12:12:29 -0400
> Jason Gunthorpe <jgg at nvidia.com> wrote:
> 
> > On Fri, Nov 10, 2023 at 04:48:09PM +0100, Boris Brezillon wrote:
> > 
> > > > Shouldn't improving the allocator in the io page table be done
> > > > generically?  
> > > 
> > > While most of it could be made generic, the pre-reservation is a bit
> > > special for VM_BIND: we need to pre-reserve page tables without knowing
> > > the state of the page table tree (over-reservation), because page table
> > > updates are executed asynchronously (the state of the VM when we
> > > prepare the request might differ from its state when we execute it). We
> > > also need to make sure no other pre-reservation requests steal pages
> > > from the pool of pages we reserved for requests that have not been
> > > executed yet.
> > > 
> > > I'm not saying this is impossible to implement, but it sounds too
> > > specific for a generic io-pgtable cache.  
> > 
> > It is quite easy, and indeed much better to do it internally.
> > 
> > struct page allocations, like the ones the io page table uses, give
> > the caller a few pointers' worth of private data inside the struct
> > page itself.
> 
> Ah, right. I didn't even consider that, given how volatile page fields
> are (not even sure which ones we're allowed to use for private data
> tbh).

It is much more orderly now, e.g. look at the slab and net folio
conversions.

> > You can put a per-page refcounter in that data to count how many
> > callers have reserved the page. Add a new "allocate VA" API to
> > allocate and install page table levels that cover a VA range in the
> > radix tree and increment the refcounts on all the impacted struct
> > pages.
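
To make that concrete, an untested sketch (the iopt_* names are
invented and error handling is elided):

#include <linux/mm.h>	/* set_page_private()/page_private() */

static void iopt_page_get(struct page *p)
{
	/* page->private counts how many pending requests have reserved
	 * this page table page; updates are serialized by the caller. */
	set_page_private(p, page_private(p) + 1);
}

static bool iopt_page_put(struct page *p)
{
	set_page_private(p, page_private(p) - 1);
	return page_private(p) == 0;	/* last reservation dropped */
}

/*
 * "Allocate VA": walk [iova, iova + size), install any missing page
 * table levels, and take a reservation on every struct page touched,
 * so that later map/unmap in this range is non-allocating/non-freeing.
 */
int iopt_reserve_iova(struct io_pgtable *iop, unsigned long iova,
		      size_t size);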
> 
> I like the general idea, but it starts to get tricky when:
> 
> 1. you have a page table format supporting a dynamic number of levels.
> For instance, on ARM MMUs, you can get rid of the last level if you
> have portions of your buffer that are physically contiguous and aligned
> on the upper PTE granularity (and the VA is aligned too, of course).
> I'm assuming we want to optimize memory consumption by merging
> physically contiguous regions in that case. If we accept keeping a
> static granularity, there should be no issue.

If the last level(s) get chopped you'd have to stick the pages into a
linked list instead of freeing them, yes.
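
Roughly, with the same invented names as above (page->lru is free to
reuse here since page table pages are never on the LRU):

static void iopt_retire_table(struct io_pgtable *iop, struct page *p)
{
	/* A block mapping replaced this level. If pending requests
	 * still hold reservations on the page, park it on a list and
	 * free it once the last reservation drops. */
	if (page_private(p))
		list_add(&p->lru, &iop->retired_tables);
	else
		__free_page(p);
}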

> 2. your future MMU requests are unordered. That's the case for
> VM_BIND if you have multiple async queues, or if you want to
> fast-track synchronous requests.

Don't really understand this?
 
> In that case, I guess we can keep the leaf page tables around until all
> pending requests have been executed, and get rid of them if we have no
> remaining users at the end.

I assumed you preallocated an IOVA window at some point and then the
BIND is just changing the mapping. The IOVA allocation would pin down
all the radix tree memory so that any map in the preallocated IOVA
range cannot fail.
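
On the driver side that would look something like this (hypothetical
names, not the existing io-pgtable ops):

	/* VM_BIND prepare: pin the radix tree for the whole window.
	 * This is the only step that may allocate memory or fail. */
	ret = iopt_reserve_iova(iop, vma->iova, vma->size);
	if (ret)
		return ret;

	/* VM_BIND execute, possibly async: guaranteed non-allocating,
	 * so it cannot fail with -ENOMEM. */
	iopt_map_pages(iop, vma->iova, pfns, npages, prot);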

> > Now you can be guaranteed that future maps in that VA range will be
> > fully non-allocating, and future unmaps will be fully non-freeing.
> 
> You mean fully non-freeing if there are no known remaining users to
> come, right?

unmap of allocated IOVA would be non-freeing. Freeing would happen on
allocate.
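
I.e., continuing the sketch, the next reservation pass would sweep the
parked tables:

	struct page *p, *tmp;

	/* Called from the "allocate VA" path: free any parked tables
	 * whose reservations have drained since they were retired. */
	list_for_each_entry_safe(p, tmp, &iop->retired_tables, lru) {
		if (!page_private(p)) {
			list_del(&p->lru);
			__free_page(p);
		}
	}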

> > A new domain API to prepare all the ioptes more efficiently would be a
> > great general improvement!
> 
> If there are incentives to get this caching mechanism up and running,
> I'm happy to help in any way you think would be useful, but I'd really
> like to have a temporary solution until the generic one is ready.
> Given custom allocators seem to be useful for other use cases, I'm
> tempted to get it merged, and I'll happily port panthor to the new
> caching system when it's ready.

My experience with GPU land is that these hacky temporary things become
permanent and then a total pain for everyone else :( By the time
someone comes to fix it you will be gone and nobody will be willing to
help make changes to the GPU driver.

Jason


