[RFC PATCH 1/2] hw/nvme: add mi device
Stefan Hajnoczi
stefanha at redhat.com
Mon Jul 12 04:03:27 PDT 2021
On Fri, Jul 09, 2021 at 07:25:45PM +0530, Padmakar Kalghatgi wrote:
> The enclosed patch contains the implementation of certain
> commands of the NVMe-MI specification. The MI commands are useful
> to manage/configure/monitor the device. Even though the MI commands
> can be sent via the in-band NVMe-MI Send/Receive commands, the idea here
> is to emulate the sideband interface for MI.
>
> Since the NVMe-MI specification deals with communicating
> with the NVMe subsystem via a sideband interface, this
> QEMU implementation uses virtio-vsock for the
> sideband communication; the guest VM needs to connect
> to the specific CID of the vsock on the QEMU host.
>
> One needs to pass the following options at launch to
> create the nvme-mi device, specify the CID, and set up the vsock:
> -device nvme-mi,bus=<nvme bus number>
> -device vhost-vsock-pci,guest-cid=<vsock cid>
>
> The following commands were tested with nvme-cli by connecting
> to the CID of the vsock as shown above and using socket
> send/receive calls to issue the commands and get the responses.
>
> We are planning to push the changes for nvme-cli as well to test the
> MI functionality.
Is the purpose of this feature (-device nvme-mi) testing MI with QEMU's
NVMe implementation?
My understanding is that instead of inventing an out-of-band interface
in the form of a new paravirtualized device, you decided to use vsock to
send MI commands from the guest to QEMU?
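(For what it's worth, my mental model of the guest side is simply an
AF_VSOCK stream socket connecting to the listener in QEMU, roughly like
the sketch below. The CID and port values are placeholders I made up,
not values taken from the patch.)

/* Guest-side sketch: connect to the MI listener over vsock. */
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <linux/vm_sockets.h>

static int nvme_mi_vsock_connect(unsigned int cid, unsigned int port)
{
    struct sockaddr_vm addr;
    int fd = socket(AF_VSOCK, SOCK_STREAM, 0);

    if (fd < 0) {
        return -1;
    }

    memset(&addr, 0, sizeof(addr));
    addr.svm_family = AF_VSOCK;
    addr.svm_cid = cid;       /* e.g. VMADDR_CID_HOST to reach the host side */
    addr.svm_port = port;     /* MI service port chosen by the device */

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        close(fd);
        return -1;
    }

    /* send() the MI request message and recv() the response on fd */
    return fd;
}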
> As the connection can be established by the guest VM at any point,
> we have created a thread which is looking for a connection request.
> Please suggest if there is a native/better way to handle this.
QEMU has an event-driven architecture and uses threads sparingly. When
it uses threads it uses qemu_thread_create() instead of
pthread_create(), but I suggest using qemu_set_fd_handler() or a
coroutine with QIOChannel to integrate into the QEMU event loop instead.
I didn't see any thread synchronization, so I'm not sure if accessing
NVMe state from the MI thread is safe. Changing the code to use QEMU's
event loop can solve that problem since there's no separate thread.
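For example, here is a minimal sketch of driving the listener from the
main loop with qemu_set_fd_handler(); the struct and callback names are
mine, not from the patch:

/* Illustrative only: hook the vsock listener into QEMU's main loop so
 * that accept/read happen as fd callbacks instead of in a thread. */
#include "qemu/osdep.h"
#include "qemu/main-loop.h"
#include "qemu/sockets.h"

typedef struct NvmeMiCtrl {
    int listen_fd;              /* non-blocking listening socket */
    int conn_fd;                /* accepted connection, -1 if none */
} NvmeMiCtrl;

static void nvme_mi_conn_read(void *opaque)
{
    NvmeMiCtrl *mi = opaque;
    uint8_t buf[4096];
    ssize_t len;

    len = recv(mi->conn_fd, buf, sizeof(buf), 0);
    if (len <= 0) {
        qemu_set_fd_handler(mi->conn_fd, NULL, NULL, NULL);
        close(mi->conn_fd);
        mi->conn_fd = -1;
        return;
    }

    /* parse the MI request in buf[0..len) and send the response; this
     * runs in the main loop, so NVMe state can be accessed without
     * extra locking */
}

static void nvme_mi_accept(void *opaque)
{
    NvmeMiCtrl *mi = opaque;

    mi->conn_fd = qemu_accept(mi->listen_fd, NULL, NULL);
    if (mi->conn_fd < 0) {
        return;
    }
    qemu_set_fd_handler(mi->conn_fd, nvme_mi_conn_read, NULL, mi);
}

static void nvme_mi_start(NvmeMiCtrl *mi)
{
    /* called from realize instead of spawning a thread */
    qemu_set_fd_handler(mi->listen_fd, nvme_mi_accept, NULL, mi);
}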
> This module makes use of the NvmeCtrl structure of the nvme module
> to fetch relevant information about the NVMe device, which is used in
> some of the MI commands. Even though certain commands might require
> modifications to the nvme module, for now we have refrained from
> making changes to it.
Why did you decide to implement -device nvme-mi as a device on
TYPE_NVME_BUS? If the NVMe spec somehow requires this then I'm surprised
that there's no NVMe bus interface (callbacks). It seems like this could
just as easily be a property of an NVMe controller (-device
nvme,mi=on|off) or of the subsystem (-device nvme-subsys,mi=on|off)? I'm
probably just not familiar enough with MI and the NVMe architecture...
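To illustrate what I mean, a rough sketch of the property approach (the
params.mi field is hypothetical and would need to be added):

/* Hypothetical: an "mi" property on the existing controller device,
 * assuming a new bool field in NvmeParams (it does not exist today). */
#include "qemu/osdep.h"
#include "hw/qdev-properties.h"
#include "nvme.h"

static Property nvme_props[] = {
    /* ... existing nvme controller properties ... */
    DEFINE_PROP_BOOL("mi", NvmeCtrl, params.mi, false),
    DEFINE_PROP_END_OF_LIST(),
};

/* and in the realize function, only set up the MI endpoint when
 * n->params.mi is true */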
Stefan