[RFC 3/3] mm: iommu: The Virtual Contiguous Memory Manager

Zach Pfeffer zpfeffer at codeaurora.org
Fri Jul 2 02:17:34 EDT 2010


Andi Kleen wrote:
>> The VCMM provides a more abstract, global view with finer-grained
>> control of each mapping a user wants to create. For instance, the
>> semantics of iommu_map preclude its use in setting up just the IOMMU
>> side of a mapping. With a one-sided map, two IOMMU devices can be
> 
> Hmm? dma_map_* does not change any CPU mappings. It only sets up
> DMA mapping(s).

Sure, but I was saying that iommu_map() doesn't just set up the IOMMU
mappings; it sets up both the IOMMU and kernel buffer mappings.

> 
>> Additionally, the current IOMMU interface does not allow users to
>> associate one page table with multiple IOMMUs unless the user explicitly
> 
> That assumes that all the IOMMUs on the system support the same page table
> format, right?

Actually no. Since the VCMM abstracts a page table as a Virtual
Contiguous Region (VCM), a VCM can be associated with any device,
regardless of that device's page-table format.

> 
> As I understand your approach would help if you have different
> IOMMus with an different low level interface, which just 
> happen to have the same pte format. Is that very likely?
> 
> I would assume if you have lots of copies of the same IOMMU
> in the system then you could just use a single driver with multiple
> instances that share some state for all of them.  That model
> would fit in the current interfaces. There's no reason multiple
> instances couldn't share the same allocation data structure.
> 
> And if you have lots of truly different IOMMUs then they likely
> won't be able to share PTEs at the hardware level anyways, because
> the formats are too different.

See the description of VCMs above.

> 
>> The VCMM takes the long view. Its designed for a future in which the
>> number of IOMMUs will go up and the ways in which these IOMMUs are
>> composed will vary from system to system, and may vary at
>> runtime. Already, there are ~20 different IOMMU map implementations in
>> the kernel. Had the Linux kernel had the VCMM, many of those
>> implementations could have leveraged the mapping and topology management
>> of a VCMM, while focusing on a few key hardware specific functions (map
>> this physical address, program the page table base register).
> 
> The standard Linux approach to such a problem is to write
> a library that drivers can use for common functionality, not put a middle 
> layer in between. Libraries are much more flexible than layers.

That's true up to the point where the middle layer becomes useful
enough to be worth it. The VM is a middle layer; you could make the
same argument about it: "the mapping code isn't too hard, just map in
the memory that you need and be done with it." But the VM middle layer
provides a clean separation between page frames and pages which turns
out to be infinitely useful. The VCMM is built in the same spirit. It
says things like, "mapping is a global problem; I'm going to abstract
entire virtual spaces and allow people arbitrary chunk-size
allocation, and I'm not going to care that one device maps this buffer
physically while another device sees it through a virtual mapping."

> 
> That said I'm not sure there's all that much duplicated code anyways.
> A lot of the code is always IOMMU specific. The only piece
> which might be shareable is the mapping allocation, but I don't
> think that's very much of a typical driver
> 
> In my old pci-gart driver the allocation was all only a few lines of code, 
> although given it was somewhat dumb in this regard because it only managed a 
> small remapping window.

I agree that it's not a lot of code, and that this layer may be a bit
heavy, but I'd like to focus on whether a global mapping view is
useful and, if so, whether something like the graph management that
the VCMM provides is generally useful.

-- 
Sent by an employee of the Qualcomm Innovation Center, Inc.
The Qualcomm Innovation Center, Inc. is a member of the Code Aurora Forum.
