[PATCH RFCv1 00/14] Add Tegra241 (Grace) CMDQV Support (part 2/2)

Nicolin Chen nicolinc at nvidia.com
Wed May 22 20:09:12 PDT 2024


On Wed, May 22, 2024 at 11:43:51PM +0000, Tian, Kevin wrote:
> > From: Jason Gunthorpe <jgg at nvidia.com>
> > Sent: Thursday, May 23, 2024 7:29 AM
> > On Wed, May 22, 2024 at 12:47:19PM -0700, Nicolin Chen wrote:
> > > Yeah, SMMU also has an Event Queue and a PRI Queue. Though I
> > > haven't had time to sit down and look at Baolu's work closely,
> > > the uAPI seems to be a unified one for all IOMMUs. I have no
> > > intention of arguing against that design, but could there be an
> > > alternative in somewhat HW-specific language, as we do for
> > > invalidation? Or is it not worth it?
> >
> > I was thinking it's not worth it; the gain I expect here is to do as
> > AMD has done and have the HW DMA the queues directly to guest memory.
> >
> > IMHO the primary issue with the queues is DOS, as having any shared
> > queue across VMs is dangerous in that way. Allowing each VIOMMU to
> > have its own private queue and own flow control helps with that.
> >
> 
> and also a shorter delivery path with less data copying?

Should I interpret that as a yes to fault reporting via VQUEUE?

Only AMD's HW can DMA the events directly to the guest queue
memory. Everyone else needs a backward translation of (at least)
a physical dev ID to a virtual dev ID. By the way, this is now
doable in the kernel with the ongoing vdev_id design. So the
kernel could then write to guest memory directly to report events?
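
To make that concrete, here is a minimal sketch, in kernel-style C,
of what such a backward translation could look like: keep a
per-vIOMMU xarray keyed by physical dev ID, and patch the dev ID
field of each event record before handing it to the guest. All the
names below (viommu, viommu_vdev_id, EVT_DEV_ID_MASK, the record
layout) are made up for illustration and are not the actual vdev_id
design or the SMMU event format:

/*
 * Hypothetical sketch only -- struct names, the xarray keyed by
 * physical dev ID, and the event-record layout are assumptions.
 */
#include <linux/bitfield.h>
#include <linux/bits.h>
#include <linux/types.h>
#include <linux/xarray.h>

#define EVT_DEV_ID_MASK		GENMASK_ULL(31, 0)	/* placeholder field */

struct viommu_vdev_id {
	u32 virt_dev_id;		/* guest-visible dev ID */
};

struct viommu {
	struct xarray vdev_ids;		/* keyed by physical dev ID */
};

/* Rewrite the physical dev ID in an event record to the guest's view. */
static int viommu_virtualize_event(struct viommu *viommu, u64 *evt)
{
	u32 phys_id = FIELD_GET(EVT_DEV_ID_MASK, evt[0]);
	struct viommu_vdev_id *vdev;

	vdev = xa_load(&viommu->vdev_ids, phys_id);
	if (!vdev)
		return -ENOENT;		/* device not assigned to this VM */

	evt[0] &= ~EVT_DEV_ID_MASK;
	evt[0] |= FIELD_PREP(EVT_DEV_ID_MASK, (u64)vdev->virt_dev_id);
	return 0;
}

With something along those lines, the remaining step would just be
copying the fixed-up record into the guest-owned queue memory and
bumping the producer index, rather than delivering each event through
a separate per-event uAPI round trip.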

Thanks
Nicolin
