[PATCH 4/4] nvme: check that EUI/GUID/UUID are globally unique

Luis Chamberlain <mcgrof@kernel.org>
Thu Apr 7 18:04:59 PDT 2022


On Thu, Feb 24, 2022 at 08:28:45PM +0100, Christoph Hellwig wrote:
> Add a check to verify that the unique identifiers are unique globally
> in addition to the existing check that verifies that they are unique
> inside a single subsystem.
> 
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> ---
>  drivers/nvme/host/core.c | 38 ++++++++++++++++++++++++++++++++++++--
>  1 file changed, 36 insertions(+), 2 deletions(-)
> 
> diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
> index cece987ba1691..f8084ded69e50 100644
> --- a/drivers/nvme/host/core.c
> +++ b/drivers/nvme/host/core.c
> @@ -3810,12 +3810,45 @@ static struct nvme_ns_head *nvme_alloc_ns_head(struct nvme_ctrl *ctrl,
>  	return ERR_PTR(ret);
>  }
>  
> +static int nvme_global_check_duplicate_ids(struct nvme_subsystem *this,
> +		struct nvme_ns_ids *ids)
> +{
> +	struct nvme_subsystem *s;
> +	int ret = 0;
> +
> +	/*
> +	 * Note that this check is racy as we try to avoid holding the global
> +	 * lock over the whole ns_head creation.  But it is only intended as
> +	 * a sanity check anyway.
> +	 */
> +	mutex_lock(&nvme_subsystems_lock);
> +	list_for_each_entry(s, &nvme_subsystems, entry) {
> +		if (s == this)
> +			continue;
> +		mutex_lock(&s->lock);
> +		ret = nvme_subsys_check_duplicate_ids(s, ids);
> +		mutex_unlock(&s->lock);
> +		if (ret)
> +			break;
> +	}
> +	mutex_unlock(&nvme_subsystems_lock);
> +
> +	return ret;
> +}
> +
>  static int nvme_init_ns_head(struct nvme_ns *ns, unsigned nsid,
>  		struct nvme_ns_ids *ids, bool is_shared)
>  {
>  	struct nvme_ctrl *ctrl = ns->ctrl;
>  	struct nvme_ns_head *head = NULL;
> -	int ret = 0;
> +	int ret;
> +
> +	ret = nvme_global_check_duplicate_ids(ctrl->subsys, ids);
> +	if (ret) {
> +		dev_err(ctrl->device,
> +			"globally duplicate IDs for nsid %d\n", nsid);

So what sort of mechanisms would break in nvme if we don't enforce
this (as was the case before this patch)? The reason I ask is that
tons of instantiated qemu guests seem to have used the default and
don't set a UUID, in which case one is auto-generated as per the
documentation [0]. However, I suspect the auto-generation may just
produce... a single value:

root@kdevops ~ # cat /sys/devices/pci0000:00/0000:00:08.0/nvme/nvme0/nvme0n1/uuid
00000001-0000-0000-0000-000000000000
root@kdevops ~ # cat /sys/devices/pci0000:00/0000:00:0b.0/nvme/nvme3/nvme3n1/uuid
00000001-0000-0000-0000-000000000000
root@kdevops ~ # cat /sys/devices/pci0000:00/0000:00:0a.0/nvme/nvme2/nvme2n1/uuid
00000001-0000-0000-0000-000000000000

This means I now have to go and manually untangle these IDs on my
guests.
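
For anyone else stuck doing the same, a quick sanity check along these
lines (assuming the usual sysfs layout, and that namespaces without a
given identifier simply don't expose that attribute) prints any
identifier reported by more than one namespace on the host:

  # print any uuid/nguid/eui value shared by two or more namespaces
  for id in uuid nguid eui; do
          echo "== $id =="
          cat /sys/class/nvme/nvme*/nvme*n*/$id 2>/dev/null | sort | uniq -d
  done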

Printing a warning would only make sense if this really isn't critical.
But as it stands this patch can even prevent boot, since filesystems
listed in /etc/fstab now fail to mount.
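
If this does stay fatal, the workaround on the qemu side is presumably
to stop relying on the default and assign each namespace its own
identifier explicitly. Per the docs in [0] the nvme-ns device takes
uuid/eui64 parameters, so something like this should do it (the ids
and the UUID value below are made up for illustration):

  -drive file=nvm0.img,if=none,id=nvm0
  -device nvme,id=nvme-ctrl-0,serial=deadbeef
  -device nvme-ns,drive=nvm0,uuid=8e7c1b6a-0b34-4c4f-9b3e-2f1a5d6e7f80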

[0] https://qemu-project.gitlab.io/qemu/system/devices/nvme.html

  Luis


