[RFC PATCH v2 00/58] KVM: Arm SMMUv3 driver for pKVM
Jason Gunthorpe
jgg at ziepe.ca
Thu Dec 12 11:41:19 PST 2024
On Thu, Dec 12, 2024 at 06:03:24PM +0000, Mostafa Saleh wrote:
> This series adds a hypervisor driver for the Arm SMMUv3 IOMMU. Since the
> hypervisor part of pKVM (called nVHE here) is minimal, moving the whole
> host SMMU driver into nVHE isn't really an option. It is too large and
> complex and requires infrastructure from all over the kernel. We add a
> reduced nVHE driver that deals with populating the SMMU tables and the
> command queue, and the host driver still deals with probing and some
> initialization.
The cover letter doesn't explain why anyone needs page tables in the
guest at all.
If you are able to implement nested support then you can boot the
guest with no-iommu and an effective identity translation through a
hypervisor-controlled S2, i.e. no guest map/unmap, and great DMA
performance.
I thought the point of doing the paravirt here was to allow dynamic
pinning of the guest memory? This is the primary downside with nested.
The entire guest memory has to be pinned down at guest boot.
> 1. Paravirtual I/O page tables
> This is the solution implemented in this series. The host creates
> IOVA->HPA mappings with two hypercalls map_pages() and unmap_pages(), and
> the hypervisor populates the page tables. Page tables are abstracted into
> IOMMU domains, which allow multiple devices to share the same address
> space. Another four hypercalls, alloc_domain(), attach_dev(), detach_dev()
> and free_domain(), manage the domains; the semantics of those hypercalls
> are almost identical to the IOMMU ops, which makes the kernel driver part
> simpler.
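To make the shape of that interface concrete, the host-side map op
would be roughly the following (a sketch only; kvm_iommu_call() and
the domain handle are hypothetical stand-ins for the series' actual
hypercall plumbing):

/*
 * Illustrative host-side iommu_ops callback that forwards to the
 * hypervisor. The hypervisor end validates ownership of the pages
 * and populates its private page tables.
 */
struct pkvm_iommu_domain {
	struct iommu_domain domain;
	unsigned long id;		/* handle shared with the hypervisor */
};

static int pkvm_iommu_map_pages(struct iommu_domain *domain,
				unsigned long iova, phys_addr_t paddr,
				size_t pgsize, size_t pgcount, int prot,
				gfp_t gfp, size_t *mapped)
{
	struct pkvm_iommu_domain *p =
		container_of(domain, struct pkvm_iommu_domain, domain);

	/* hypothetical wrapper around the map_pages() hypercall */
	return kvm_iommu_call(map_pages, p->id, iova, paddr, pgsize,
			      pgcount, prot, mapped);
}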
That is re-inventing virtio-iommu. I don't really understand why this
series is hacking up arm-smmuv3 so much; that is not, and should not
be, a paravirt driver. Why not create a clean new pkvm-specific driver
for the paravirt path? Or find a way to re-use parts of virtio-iommu?
Shouldn't other arch versions of pkvm be able to re-use the same guest
iommu driver?
> b- Locking: The io-pgtable-arm is lockless under some guarantees of how
> the IOMMU code behaves. However with pKVM, the kernel is not trusted
> and a malicious kernel can issue concurrent requests causing memory
> corruption or UAF, so it has to be locked in the hypervisor.
? I don't get it, the hypervisor page table has to be private to the
hypervisor. It is not that io-pgtable-arm is lockless, it is that it
relies on a particular kind of caller-supplied locking. pkvm's calls
into its private io-pgtable-arm would need pkvm-specific locking that
makes sense for it. Where does a malicious guest kernel get into this?
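To be concrete about that split (a sketch, assuming pKVM's nVHE
hyp_spinlock primitives; the domain structure and wrapper are
illustrative):

/*
 * io-pgtable-arm itself stays lockless; the hypervisor supplies the
 * caller-side serialization it requires, since the host kernel can
 * no longer be trusted to do so.
 */
struct hyp_iommu_domain {
	hyp_spinlock_t lock;
	struct io_pgtable_ops *ops;
};

static int hyp_domain_map(struct hyp_iommu_domain *d, unsigned long iova,
			  phys_addr_t paddr, size_t pgsize, size_t pgcount,
			  int prot, size_t *mapped)
{
	int ret;

	hyp_spin_lock(&d->lock);
	ret = d->ops->map_pages(d->ops, iova, paddr, pgsize, pgcount,
				prot, 0 /* no GFP flags in hyp */, mapped);
	hyp_spin_unlock(&d->lock);
	return ret;
}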
> 2. Nested SMMUv3 translation (with emulation)
> Another approach is to rely on nested translation support which is
> optional in SMMUv3; that requires an architecturally accurate emulation
> of SMMUv3, which can be complicated, including cmdq emulation.
The confidential compute folks are going in this direction.
> The trade off between the 2 approaches can be roughly summarised as:
> Paravirtualization:
> - Compatible with more HW (and IOMMUs).
> - Better DMA performance due to shorter table walks/less TLB pressure.
> - Needs extra complexity to squeeze the last bit of optimization (around
> unmap and map_sg).
It has better straight-line DMA performance if the DMAs are all
static, but generally much, much worse performance if the DMAs are
dynamically mapped, as you have to trap so much stuff.
The other negative is there is no way to get SVA support with
para-virtualization.
The positive is you don't have to pin the VM's memory.
> Nested Emulation
> - Faster map_pages (not sure about unmap because it requires cmdq
> emulation for TLB invalidation if DVM is not used).
If you can do nested then you can run in identity mode, and then you
don't have any performance downside. It is a complete win.
Even if you do non-identity, nested is still likely faster for changing
translation than paravirt approaches. A single cmdq range invalidate
should be about the same broad overhead as a single paravirt call to
unmap, except the invalidates can be batched under load.
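As a sketch of that batching point (the structures and encoding
helpers below are hypothetical, loosely following the SMMUv3 command
layout, not the driver's actual cmdq code):

struct smmu_cmd { u64 word[2]; };	/* one 16-byte SMMUv3 command */
struct ipa_range { u64 start, size; };	/* illustrative */

static void invalidate_ranges(struct smmu_cmd *batch,
			      const struct ipa_range *r, int n, u16 vmid)
{
	int i;

	/* one CMD_TLBI_S2_IPA per range... */
	for (i = 0; i < n; i++)
		batch[i] = encode_tlbi_s2_ipa(vmid, r[i].start, r[i].size);
	/* ...but a single CMD_SYNC and a single wait cover them all */
	batch[n] = encode_cmd_sync();
	cmdq_publish(batch, n + 1);
}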
Things like vCMDQ eliminate this overhead entirely; to my mind that is
the future direction of this HW, as you obviously need to optimize
invalidation in HW...
> - Needs extra complexity for architecturally emulating SMMUv3.
Lots of people have now done this; it is not really so bad. In
exchange you get a fully architected feature set, better performance,
and are ready for HW optimizations.
> - Add IDENTITY_DOMAIN support, I already have some patches for that, but
> didn’t want to complicate this series; I can send them separately.
This seems kind of pointless to me. If you can tolerate identity (i.e.
pin all memory) then do nested, and maybe don't even bother with a
guest iommu.
If you want most of the guest memory to be swappable/movable/whatever
then paravirt is the only choice, and you really don't want the guest
to have any identity support at all.
Really, I think you'd want to have both options; there is no "best"
here. It depends on what people want to use the VM for.
My advice for merging would be to start with the pkvm side setting up
a fully pinned S2, with no guest driver: nesting without emulating
smmuv3. Basically you get protected identity DMA support. I think that
would be a much less sprawling patch series. From there it would be
well positioned to add both smmuv3 emulation and a paravirt iommu
flow.
Jason