[RFC PATCH 5/5] nvme-vfio: Add a document for the NVMe device
Jason Gunthorpe
jgg at ziepe.ca
Tue Dec 6 05:52:54 PST 2022
On Tue, Dec 06, 2022 at 02:09:01PM +0100, Christoph Hellwig wrote:
> On Tue, Dec 06, 2022 at 09:05:05AM -0400, Jason Gunthorpe wrote:
> > In this case Intel has a real PCI SRIOV VF to expose to the guest,
> > with a full VF RID.
>
> RID?
"Requester ID" - PCI SIG term that in Linux basically means you get to
assign an iommu_domain to the vfio device.
Compare that to an mdev, where many vfio devices share the same RID
and cannot have their own iommu_domains without using PASID.
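As a rough kernel-side sketch (schematic only, not code from this
series), a full RID is what lets VFIO do something like:

#include <linux/iommu.h>

/*
 * Schematic: with a full VF RID the device can get its own dedicated
 * iommu_domain. An mdev shares its parent's RID, so a per-device
 * attach like this is not possible without PASID.
 */
static int attach_dedicated_domain(struct device *dev,
				   struct iommu_domain **out)
{
	struct iommu_domain *domain;
	int ret;

	domain = iommu_domain_alloc(dev->bus);	/* one domain per RID */
	if (!domain)
		return -ENOMEM;

	ret = iommu_attach_device(domain, dev);
	if (ret) {
		iommu_domain_free(domain);
		return ret;
	}

	*out = domain;
	return 0;
}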
> > The proper VFIO abstraction is the variant PCI
> > driver as this series does. We want to use the variant PCI drivers
> > because they properly encapsulate all the PCI behaviors (MSI, config
> > space, regions, reset, etc) without requiring re-implementation of this
> > in mdev drivers.
>
> I don't think the code in this series has any chance of actually
> working. There is a lot of state associated with a NVMe subsystem,
> controller and namespace, such as the serial number, subsystem NQN,
> namespace unique identifiers, Get/Set features state, pending AENs,
> log page content. Just migrating from one device to another without
> capturing all this has no chance of actually working.
From what I understood, this series basically allows two Intel devices
to pass a big opaque blob of data. Intel didn't document what is in
that blob, so I assume it captures everything you mention above.
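Purely as illustration (these field names are hypothetical, not
something Intel has published), the header of such a blob would need
to cover at least the state you list:

struct nvme_mig_blob_hdr {
	u32  magic;
	u32  version;
	char subsysnqn[256];	/* subsystem NQN */
	char serial[20];	/* controller serial number */
	u32  num_namespaces;	/* namespace unique identifiers follow */
	/*
	 * ... then Get/Set Features state, pending AENs, log page
	 * content, and the live queue/interrupt state ...
	 */
};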
At least, that is the approach we have taken with mlx5. Every single
bit of device state is serialized into the blob and when the device
resumes it is indistinguishable from the original. Otherwise it is a
bug.
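For reference, this is roughly how userspace pulls that opaque blob
out through the v2 migration uAPI that mlx5 implements (a simplified
sketch, error handling omitted):

#include <linux/vfio.h>
#include <sys/ioctl.h>
#include <unistd.h>

/* Move the device to STOP_COPY and stream the opaque blob to out_fd */
static void save_device_state(int device_fd, int out_fd)
{
	char buf[sizeof(struct vfio_device_feature) +
		 sizeof(struct vfio_device_feature_mig_state)];
	struct vfio_device_feature *feature = (void *)buf;
	struct vfio_device_feature_mig_state *mig =
		(void *)feature->data;
	char chunk[4096];
	ssize_t n;

	feature->argsz = sizeof(buf);
	feature->flags = VFIO_DEVICE_FEATURE_SET |
			 VFIO_DEVICE_FEATURE_MIG_DEVICE_STATE;
	mig->device_state = VFIO_DEVICE_STATE_STOP_COPY;
	ioctl(device_fd, VFIO_DEVICE_FEATURE, feature);

	/* The driver serializes every bit of device state into this fd */
	while ((n = read(mig->data_fd, chunk, sizeof(chunk))) > 0)
		write(out_fd, chunk, n);
	close(mig->data_fd);
}

Resume is the mirror image: set VFIO_DEVICE_STATE_RESUMING on the
destination device and write the same bytes back into its data_fd.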
Jason