[PATCH v2 17/19] iommu/arm-smmu-v3: Add arm_smmu_viommu_cache_invalidate

Tian, Kevin kevin.tian at intel.com
Wed Sep 18 01:10:52 PDT 2024


> From: Jason Gunthorpe <jgg at nvidia.com>
> Sent: Saturday, September 14, 2024 10:51 PM
> 
> On Fri, Sep 13, 2024 at 02:33:59AM +0000, Tian, Kevin wrote:
> > > From: Jason Gunthorpe <jgg at nvidia.com>
> > > Sent: Thursday, September 12, 2024 7:08 AM
> > >
> > > On Wed, Sep 11, 2024 at 08:13:01AM +0000, Tian, Kevin wrote:
> > >
> > > > Probably there is a good reason e.g. simplification or better
> > > > alignment with the hw accel stuff. But it's not explained clearly so far.
> > >
> > > Probably the most concrete thing is if you have a direct assignment
> > > invalidation queue (ie DMA'd directly by HW) then it only applies to a
> > > single pIOMMU and invalidation commands placed there are unavoidably
> > > limited in scope.
> > >
> > > This creates a representation problem: if we have a vIOMMU that spans
> > > many pIOMMUs but an invalidation covers only some subset, how do we
> > > model that? Just saying the vIOMMU is linked to the pIOMMU solves this
> > > nicely.
> > >
> >
> > yes that is a good reason.
> >
> > btw do we expect the VMM to try-and-fail when deciding whether a
> > new vIOMMU object is required when creating a new vdev?
> 
> I think there was some suggestion the getinfo could return this, but
> also I think qemu needs to have a command line that matches physical
> so maybe it needs some sysfs?
> 

My impression was that Qemu is moving away from directly accessing
sysfs (that is e.g. the reason behind letting Libvirt pass an already-opened
cdev fd to Qemu). So getinfo probably makes more sense...


