[PATCH v2 04/19] iommufd: Allow pt_id to carry viommu_id for IOMMU_HWPT_ALLOC

Tian, Kevin kevin.tian at intel.com
Thu Sep 26 19:23:16 PDT 2024


> From: Nicolin Chen <nicolinc at nvidia.com>
> Sent: Friday, September 27, 2024 9:26 AM
> 
> On Fri, Sep 27, 2024 at 12:43:16AM +0000, Tian, Kevin wrote:
> > > From: Nicolin Chen <nicolinc at nvidia.com>
> > > Sent: Friday, September 27, 2024 4:11 AM
> > >
> > > On Thu, Sep 26, 2024 at 04:50:46PM +0800, Yi Liu wrote:
> > > > On 2024/8/28 00:59, Nicolin Chen wrote:
> > > > > Now a VIOMMU can wrap a shareable nested parent HWPT. So, it can
> > > > > act like a nested parent HWPT to allocate a nested HWPT.
> > > > >
> > > > > Support that in the IOMMU_HWPT_ALLOC ioctl handler, and update
> > > > > its kdoc.
> > > > >
> > > > > Also, associate a viommu to an allocating nested HWPT.
> > > >
> > > > It's still not quite clear to me what the vIOMMU obj stands for.
> > > > Here, it is a wrapper of the S2 HWPT, IIUC. But in the cover letter,
> > > > a vIOMMU obj can be instantiated per the vIOMMU units in a VM.
> > >
> > > Yea, the implementation in this version is merely a wrapper. I
> > > gave a general introduction of vIOMMU in the other reply, and I
> > > will put something similar in the next version of the series,
> > > so the idea will be bigger than just a wrapper.
> > >
> > > > Does it mean each vIOMMU of a VM can only have
> > > > one S2 HWPT?
> > >
> > > Giving some examples here:
> > >  - If a VM has 1 vIOMMU, there will be 1 vIOMMU object in the
> > >    kernel holding one S2 HWPT.
> > >  - If a VM has 2 vIOMMUs, there will be 2 vIOMMU objects in the
> > >    kernel that can hold two different S2 HWPTs, or share one S2
> > >    HWPT (saving memory).
> > >
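To spell out the second case above in ioctl terms, a rough sketch
(uAPI names as this series proposes them; iommufd, ioas_id and
dev_id0/dev_id1 are placeholders):

#include <sys/ioctl.h>
#include <linux/iommufd.h>

static int alloc_two_viommus_sharing_s2(int iommufd, __u32 ioas_id,
					__u32 dev_id0, __u32 dev_id1)
{
	/* One nested parent (S2) HWPT, allocated once */
	struct iommu_hwpt_alloc s2 = {
		.size = sizeof(s2),
		.flags = IOMMU_HWPT_ALLOC_NEST_PARENT,
		.dev_id = dev_id0,
		.pt_id = ioas_id,
	};

	if (ioctl(iommufd, IOMMU_HWPT_ALLOC, &s2))
		return -1;

	/* Two vIOMMU objects wrapping the same S2 HWPT */
	struct iommu_viommu_alloc v0 = {
		.size = sizeof(v0),
		.type = IOMMU_VIOMMU_TYPE_ARM_SMMUV3,
		.dev_id = dev_id0,		/* a device behind vIOMMU 0 */
		.hwpt_id = s2.out_hwpt_id,
	};
	struct iommu_viommu_alloc v1 = {
		.size = sizeof(v1),
		.type = IOMMU_VIOMMU_TYPE_ARM_SMMUV3,
		.dev_id = dev_id1,		/* a device behind vIOMMU 1 */
		.hwpt_id = s2.out_hwpt_id,	/* shared S2, saving memory */
	};

	if (ioctl(iommufd, IOMMU_VIOMMU_ALLOC, &v0) ||
	    ioctl(iommufd, IOMMU_VIOMMU_ALLOC, &v1))
		return -1;
	return 0;
}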
> >
> > this is not consistent with previous discussion.
> >
> > even for 1 vIOMMU per VM there could be multiple vIOMMU objects
> > created in the kernel, in case the devices connected to the VM-visible
> > vIOMMU sit behind different physical SMMUs.
> >
> > we don't expect one vIOMMU object to span multiple physical ones.
> 
> I think it's consistent; we just had different perspectives on a
> virtual IOMMU instance in the VM: Jason's suggested design for a
> VM is to have a 1-to-1 mapping between virtual IOMMU instances and
> physical IOMMU instances. So, one vIOMMU is backed by one pIOMMU
> only, i.e. one vIOMMU object in the kernel.
> 
> Your case seems to be the model where a VM has one giant virtual
> IOMMU instance backed by multiple physical IOMMUs, in which case
> all the passthrough devices, regardless of their associated pIOMMUs,
> are connected to this shared virtual IOMMU. And yes, this shared
> virtual IOMMU can have multiple vIOMMU objects.

yes.

sorry, I should not have used "inconsistent" in my last reply. It's more
about completeness, i.e. what the design allows. 😊
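To spell it out: with one guest-visible vIOMMU, the VMM would keep one
vIOMMU object per physical SMMU and pick the right one per device,
roughly like below. phys_iommu_of() and MAX_PIOMMUS are hypothetical
stand-ins for however the VMM tracks the physical topology (e.g. via
sysfs); this is just a sketch of the idea, not actual VMM code:

#include <sys/ioctl.h>
#include <linux/iommufd.h>

#define MAX_PIOMMUS	8	/* arbitrary, for this sketch only */

/* Hypothetical: however the VMM learns which physical SMMU a
 * device sits behind */
extern int phys_iommu_of(__u32 dev_id);

/* One vIOMMU object per physical SMMU; 0 means "not allocated yet"
 * (a placeholder convention for this sketch) */
static __u32 viommu_ids[MAX_PIOMMUS];

static __u32 viommu_for_device(int iommufd, __u32 dev_id,
			       __u32 s2_hwpt_id)
{
	int p = phys_iommu_of(dev_id);

	if (!viommu_ids[p]) {
		struct iommu_viommu_alloc cmd = {
			.size = sizeof(cmd),
			.type = IOMMU_VIOMMU_TYPE_ARM_SMMUV3,
			.dev_id = dev_id,
			.hwpt_id = s2_hwpt_id,
		};

		if (ioctl(iommufd, IOMMU_VIOMMU_ALLOC, &cmd))
			return 0;
		viommu_ids[p] = cmd.out_viommu_id;
	}
	return viommu_ids[p];
}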

> 
> Regarding these two models, I had listed their pros/cons at (2):
> https://lore.kernel.org/qemu-devel/cover.1719361174.git.nicolinc at nvidia.com/
> 
> (Not 100% sure, but) VT-d might not have something like vCMDQ, so it
> can stay in the shared model to simplify certain things. Though, I
> feel it may face a similar situation, like mapping multiple physical
> MMIO regions to a single virtual region (undoable!), if some day
> Intel adds a similar HW-accelerated feature?
> 

yes, if VT-d gains such HW acceleration then it'd be similar to SMMU.
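
Either way, the userspace-visible change of this patch, if I read it
right, is only that pt_id may now carry a vIOMMU object ID. A rough
sketch (dev_id and viommu_id are placeholders; the SMMUv3 data type is
from the companion SMMU series):

#include <stdint.h>
#include <sys/ioctl.h>
#include <linux/iommufd.h>

static int alloc_nested_via_viommu(int iommufd, __u32 dev_id,
				   __u32 viommu_id)
{
	struct iommu_hwpt_arm_smmuv3 data = {}; /* vSTE from the guest */
	struct iommu_hwpt_alloc cmd = {
		.size = sizeof(cmd),
		.dev_id = dev_id,
		.pt_id = viommu_id,	/* was: a nested parent hwpt_id */
		.data_type = IOMMU_HWPT_DATA_ARM_SMMUV3,
		.data_len = sizeof(data),
		.data_uptr = (uintptr_t)&data,
	};

	if (ioctl(iommufd, IOMMU_HWPT_ALLOC, &cmd))
		return -1;
	/* cmd.out_hwpt_id is the nested (S1) HWPT, now associated
	 * with the vIOMMU object */
	return (int)cmd.out_hwpt_id;
}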

