[RFC PATCH 1/2] hw/nvme: add mi device

Stefan Hajnoczi stefanha at redhat.com
Tue Jul 13 02:37:23 PDT 2021


On Tue, Jul 13, 2021 at 06:30:28AM +0100, Christoph Hellwig wrote:
> On Mon, Jul 12, 2021 at 12:03:27PM +0100, Stefan Hajnoczi wrote:
> > Why did you decide to implement -device nvme-mi as a device on
> > TYPE_NVME_BUS? If the NVMe spec somehow requires this then I'm surprised
> > that there's no NVMe bus interface (callbacks). It seems like this could
> > just as easily be a property of an NVMe controller -device
> > nvme,mi=on|off or -device nvme-subsys,mi=on|off? I'm probably just not
> > familiar enough with MI and NVMe architecture...
> 
> I'm too far away from qemu these days to understand what TYPE_NVME_BUS
> is.  But NVMe-MI has three possible transports:
> 
>  1) out of band through smbus.  This seems something that could be
>     trivially modelled in qemu
>  2) out of band over MCTP / PCIe VDM.
>  3) in band using NVMe admin commands that pass through MI commands

Thanks for explaining!

Common NVMe-MI code can be shared by -device nvme-mi-smbus, in-band NVMe
MI commands (part of -device nvme), a vsock transport, etc. This patch
has nvme_mi_admin_command() as the entry point to common MI code, so not
much needs to be done to achieve this.
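To illustrate, here is a rough, untested sketch of how separate transport
frontends could funnel into that shared entry point. The
nvme_mi_admin_command() signature and the vsock handler are assumptions for
illustration only, not code from this patch:

/*
 * Untested sketch: several transport frontends sharing one MI entry point.
 * The nvme_mi_admin_command() signature and the vsock handler are assumed.
 */
#include <stdint.h>
#include <stddef.h>
#include <string.h>
#include <unistd.h>

typedef struct NvmeMiCtrl NvmeMiCtrl;

/* Transport-independent MI core (assumed signature). */
static int nvme_mi_admin_command(NvmeMiCtrl *mi, const uint8_t *req,
                                 size_t req_len, uint8_t *rsp,
                                 size_t *rsp_len)
{
    /* Placeholder: real code would parse the MI message and build a reply. */
    (void)mi;
    size_t n = req_len < *rsp_len ? req_len : *rsp_len;
    memcpy(rsp, req, n);
    *rsp_len = n;
    return 0;
}

/* Hypothetical vsock frontend: read a request, dispatch, write the reply. */
static void nvme_mi_vsock_handle(NvmeMiCtrl *mi, int fd)
{
    uint8_t req[4096], rsp[4096];
    size_t rsp_len = sizeof(rsp);
    ssize_t n = read(fd, req, sizeof(req));

    if (n > 0 &&
        nvme_mi_admin_command(mi, req, (size_t)n, rsp, &rsp_len) == 0) {
        write(fd, rsp, rsp_len);
    }
}

An smbus frontend or the in-band admin command path would call the same
nvme_mi_admin_command() with its own framing around it.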

My question about -device nvme-mi was prompted by the fact that this
"device" doesn't implement any bus interface (callbacks). The bus
effectively just serves as an owner of the device; the guest does not
access the device via the bus. So I'm not sure a -device is appropriate
here: it's an unusual device.

If the device is kept, please name it -device nvme-mi-vsock so it's
clear this is the NVMe-MI vsock transport. I think the device could be
dropped and instead a -device nvme,mi-vsock=on|off property could be
added to enable the MI vsock transport on a specific NVMe controller.
This raises the question of whether the port number should be
configurable so multiple vsock Management Endpoints can coexist.
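Concretely, I'm imagining something along these lines in hw/nvme/ctrl.c
(property and field names are only suggestions, not from this patch):

static Property nvme_props[] = {
    /* ... existing nvme controller properties ... */
    DEFINE_PROP_BOOL("mi-vsock", NvmeCtrl, params.mi_vsock, false),
    DEFINE_PROP_UINT32("mi-vsock-port", NvmeCtrl, params.mi_vsock_port, 0),
    DEFINE_PROP_END_OF_LIST(),
};

so the command line could look like:

  -device nvme,drive=nvme0,serial=deadbeef,mi-vsock=on,mi-vsock-port=12345

with the port chosen per controller so multiple vsock Management Endpoints
can coexist (the values above are arbitrary examples).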

I don't have time to explore the architectural model, but here's the
link in case anyone wants to think through all the options for NVMe MI
Management Endpoints and how QEMU should model them:
"1.4 NVM Subsystem Architectural Model"
https://nvmexpress.org/wp-content/uploads/NVM-Express-Management-Interface-1.2-2021.06.02-Ratified.pdf

Stefan