[PATCH RFCv1 00/14] Add Tegra241 (Grace) CMDQV Support (part 2/2)
Tian, Kevin
kevin.tian at intel.com
Wed May 22 16:43:51 PDT 2024
> From: Jason Gunthorpe <jgg at nvidia.com>
> Sent: Thursday, May 23, 2024 7:29 AM
>
> On Wed, May 22, 2024 at 12:47:19PM -0700, Nicolin Chen wrote:
> > On Wed, May 22, 2024 at 01:48:18PM -0300, Jason Gunthorpe wrote:
> > > On Wed, May 22, 2024 at 08:40:00AM +0000, Tian, Kevin wrote:
> > > > > From: Nicolin Chen <nicolinc at nvidia.com>
> > > > > Sent: Saturday, April 13, 2024 11:47 AM
> > > > >
> > > > > This is an experimental RFC series for VIOMMU infrastructure,
> > > > > using NVIDIA Tegra241 (Grace) CMDQV as a test instance.
> > > > >
> > > > > VIOMMU obj is used to represent a virtual interface (iommu)
> > > > > backed by an underlying IOMMU's HW-accelerated feature for
> > > > > virtualization: for example, NVIDIA's VINTF (v-interface for
> > > > > CMDQV) and AMD's vIOMMU.
> > > > >
> > > > > VQUEUE obj is used to represent a virtual command queue (buffer)
> > > > > backed by an underlying IOMMU command queue, passed through for
> > > > > VMs to use directly: for example, NVIDIA's Virtual Command Queue
> > > > > and AMD's Command Buffer.
> > > > >
> > > >
> > > > Is VCMDQ more accurate? AMD also supports fault queue passthrough,
> > > > so VQUEUE sounds broader than a cmd queue...
> > >
> > > Is there a reason VQUEUE couldn't handle the fault/etc queues too? The
> > > only difference is direction, there is still a doorbell/etc.
No reason. The description made it specific to a cmd queue, which gave
me the impression that we may want to create a separate fault queue.
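
Just to illustrate (purely a sketch of mine with made-up names, not
the uAPI from this series): a single VQUEUE allocation could carry a
queue type, so SMMU's Event/PRI queues or AMD's fault queue would
reuse the same object as the cmd queue:

enum iommu_vqueue_type {
	IOMMU_VQUEUE_TYPE_CMD = 0,	/* guest -> HW, e.g. VCMDQ */
	IOMMU_VQUEUE_TYPE_FAULT = 1,	/* HW -> guest, e.g. event/PRI */
};

struct iommu_vqueue_alloc {
	__u32 size;		/* sizeof(struct iommu_vqueue_alloc) */
	__u32 flags;
	__u32 viommu_id;	/* parent VIOMMU object */
	__u32 type;		/* enum iommu_vqueue_type */
	__u64 base_addr;	/* guest PA of the queue ring */
	__u64 length;		/* ring size in bytes */
	__u32 out_vqueue_id;	/* allocated object ID, returned */
	__u32 __reserved;
};

Then the only per-type difference is the direction, as you said: which
side produces entries and which side rings the doorbell.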
> >
> > Yea, SMMU also has an Event Queue and a PRI queue. Though I haven't
> > had time to sit down and look at Baolu's work closely, the uAPI
> > seems to be a unified one for all IOMMUs. And though I have no
> > intention of arguing against that design, maybe there could be an
> > alternative in a somewhat HW-specific language, as we do for
> > invalidation? Or is it not worth it?
>
> I was thinking it's not worth it; I expect a gain here is to do as AMD
> has done and make the HW DMA the queues directly to guest memory.
>
> IMHO the primary issue with the queues is DoS, as having any queue
> shared across VMs is dangerous in that way. Allowing each VIOMMU to
> have its own private queue and its own flow control helps with that.
>
And also a shorter delivery path with less data copying?
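
To make the DoS point concrete (again only my sketch, with a similarly
made-up iommu_viommu_alloc; not the posted uAPI), the per-VM flow could
look like:

struct iommu_viommu_alloc viommu = {
	.size = sizeof(viommu),
	.dev_id = dev_id,	/* a device behind the physical IOMMU */
};
ioctl(iommufd, IOMMU_VIOMMU_ALLOC, &viommu);

struct iommu_vqueue_alloc vqueue = {
	.size = sizeof(vqueue),
	.viommu_id = viommu.out_viommu_id,
	.type = IOMMU_VQUEUE_TYPE_CMD,
	.base_addr = guest_ring_gpa,	/* HW DMAs this ring directly */
	.length = ring_bytes,
};
ioctl(iommufd, IOMMU_VQUEUE_ALLOC, &vqueue);

so each VM owns its own ring and doorbell, and a guest flooding its
queue only backpressures itself.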