[RFC PATCH 00/45] KVM: Arm SMMUv3 driver for pKVM
Tian, Kevin
kevin.tian at intel.com
Wed Feb 1 23:07:55 PST 2023
> From: Jean-Philippe Brucker <jean-philippe at linaro.org>
> Sent: Wednesday, February 1, 2023 8:53 PM
>
> 3. Private I/O page tables
>
> A flexible alternative uses private page tables in the SMMU, entirely
> disconnected from the CPU page tables. With this the SMMU can implement a
> reduced set of features, even shed a stage of translation. This also
> provides a virtual I/O address space to the host, which allows more
> efficient memory allocation for large buffers, and for devices with
> limited addressing abilities.
>
> This is the solution implemented in this series. The host creates
> IOVA->HPA mappings with two hypercalls, map_pages() and unmap_pages(), and
> the hypervisor populates the page tables. Page tables are abstracted into
> IOMMU domains, which allow multiple devices to share the same address
> space. Another four hypercalls, alloc_domain(), attach_dev(), detach_dev()
> and free_domain(), manage the domains.
>
Out of curiosity, does virtio-iommu fit this usage? If yes, then there is no
need to add specific enlightenment to existing IOMMU drivers. If not, is
that because, as mentioned at the start, a full-fledged IOMMU driver doesn't
fit in nVHE, so lots of the SMMU driver logic has to be kept in the host?
Anyway, I just want to check your thoughts on the possibility.
By the way, some of my colleagues are porting pKVM to the Intel platform. I
believe they will post their work shortly, and it might require some common
framework in the pKVM hypervisor (IOMMU domains, hypercalls, etc.), like
what we have in the host IOMMU subsystem. CCing them in case they have any
early thoughts to throw in. 😊
Thanks
Kevin
More information about the linux-arm-kernel mailing list