[PATCH RFCv1 08/14] iommufd: Add IOMMU_VIOMMU_SET_DEV_ID ioctl

Tian, Kevin kevin.tian at intel.com
Fri May 24 00:13:23 PDT 2024


> From: Nicolin Chen <nicolinc at nvidia.com>
> Sent: Friday, May 24, 2024 1:40 PM
> 
> On Thu, May 23, 2024 at 06:42:56AM +0000, Tian, Kevin wrote:
> > btw there is a check in the following code:
> >
> > +       if (viommu->iommu_dev != idev->dev->iommu->iommu_dev) {
> > +               rc = -EINVAL;
> > +               goto out_put_viommu;
> > +       }
> >
> > I vaguely remember an earlier discussion about multiple vSMMU instances
> > following the physical SMMU topology, but don't quite recall the exact
> > reason.
> >
> > What is the actual technical obstacle that prohibits putting
> > multiple VCMDQs from different SMMUs into one vIOMMU instance?
> 
> Because VCMDQ passthrough involves a direct mmap of a HW MMIO
> page into the guest-level MMIO region. The MMIO page provides
> read/write access to the queue's head and tail indexes.
> 
> With a single pSMMU and a single vSMMU, it's simply a 1:1 mapping.
> 
> With multiple pSMMUs and a single vSMMU, the single vSMMU will see
> one guest-level MMIO region backed by multiple physical pages.
> Since we are talking about MMIO, there is technically no way to
> route an access to the MMIO page corresponding to the device, not
> to mention that we don't really want the VMM to be involved, i.e.
> no VM exits, when using VCMDQ.

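To restate the 1:1 case in code: a minimal userspace sketch, where
VCMDQ_MMIO_OFFSET and the use of the iommufd fd are invented for
illustration and are not the actual uAPI:

#include <stddef.h>
#include <sys/mman.h>

#define VCMDQ_MMIO_OFFSET 0	/* hypothetical per-vIOMMU offset */

/*
 * One vIOMMU : one pSMMU : one MMIO page -- a plain 1:1 map that
 * gives the guest direct access to the queue's head/tail page,
 * with no VM exits on the fast path.
 */
static void *map_vcmdq_page(int iommufd, size_t pgsize)
{
	return mmap(NULL, pgsize, PROT_READ | PROT_WRITE,
		    MAP_SHARED, iommufd, VCMDQ_MMIO_OFFSET);
}

With multiple pSMMUs behind one guest region there is no equivalent
single map, which is the obstacle described above.
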
Can a vSMMU report support for multiple CMDQs, so there would be
several MMIO regions, each mapped to a different backend VCMDQ?

But I guess even if that were supported, there is still a problem
describing the association between assigned devices and the CMDQs of
the single vIOMMU instance. On bare metal, multiple CMDQs are mostly
a load-balancing feature, so there is no fixed association between a
CMDQ and a device. But a fixed association does exist if we mix
VCMDQs from multiple pSMMUs in a single vIOMMU instance, and the
spec may lack a way to describe it (see the sketch below).
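
To make that concrete, a purely illustrative sketch (the struct and
its fields are invented, not part of any spec or uAPI) of the fixed
association that would arise:

/*
 * Each assigned device would be pinned to the one VCMDQ backed by
 * its own pSMMU, so no load-balancing across queues is possible,
 * and the SMMUv3 spec has no construct to convey such a table.
 */
struct vcmdq_binding {			/* hypothetical */
	unsigned int sid;		/* device's stream ID */
	unsigned int vcmdq_idx;		/* the only queue it may use */
};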

> 
> So, there must be some kind of multi-instanced carrier to hold
> those MMIO pages, with devices behind different pSMMUs attached
> to their corresponding carriers. And today we have VIOMMU as that
> carrier.
> 
> One step back: even without the VCMDQ feature, a multi-pSMMU setup
> will have multiple viommus (with our latest design) added to the
> viommu list of a single vSMMU. Yet the vSMMU in this case always
> traps the regular SMMU CMDQ, so it can do viommu selection or even
> broadcast (if it has to).
> 
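
To paraphrase that trap path in code -- a rough sketch with invented
types and helpers (struct cmd, viommu_issue_cmd(), the lookup table,
etc. are illustrative, not the actual smmuv3 implementation):

#include <stdbool.h>

#define MAX_PSMMUS	4	/* arbitrary for the sketch */
#define MAX_SIDS	64	/* arbitrary for the sketch */

struct viommu;			/* opaque per-pSMMU instance */

struct cmd {
	bool sid_valid;		/* per-device vs. global command */
	unsigned int sid;
};

struct vsmmu {
	struct viommu *viommus[MAX_PSMMUS];	/* one per pSMMU */
	unsigned int nr_viommus;
	unsigned int sid_to_viommu[MAX_SIDS];	/* fixed at attach time */
};

void viommu_issue_cmd(struct viommu *viommu, const struct cmd *cmd);

static void vsmmu_handle_cmd(struct vsmmu *s, const struct cmd *cmd)
{
	if (cmd->sid_valid) {
		/* e.g. CMD_CFGI_STE: route to the device's pSMMU */
		viommu_issue_cmd(s->viommus[s->sid_to_viommu[cmd->sid]],
				 cmd);
	} else {
		/* e.g. CMD_TLBI_NSNH_ALL: broadcast to every instance */
		for (unsigned int i = 0; i < s->nr_viommus; i++)
			viommu_issue_cmd(s->viommus[i], cmd);
	}
}

This selection is only possible because the CMDQ is trapped; with
VCMDQ passthrough there is no software hook to do it.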

I'm curious to learn the real reason for that design. Is it because
you want to do some load-balancing between viommus, or due to other
constraints in the kernel smmuv3 driver, which e.g. cannot support a
viommu spanning multiple pSMMUs?


