[PATCH 4/4] nvme: check that EUI/GUID/UUID are globally unique

Klaus Jensen <its at irrelevant.dk>
Fri Apr 8 00:19:04 PDT 2022



On Fri, Apr 8, 2022, at 07:29, Christoph Hellwig wrote:
> On Thu, Apr 07, 2022 at 06:04:59PM -0700, Luis Chamberlain wrote:
>> On Thu, Feb 24, 2022 at 08:28:45PM +0100, Christoph Hellwig wrote:
>> > Add a check to verify that the unique identifiers are unique globally
>> > in addition to the existing check that verifies that they are unique
>> > inside a single subsystem.
>> > 
>> > Signed-off-by: Christoph Hellwig <hch at lst.de>
>> > ---
>> >  drivers/nvme/host/core.c | 38 ++++++++++++++++++++++++++++++++++++--
>> >  1 file changed, 36 insertions(+), 2 deletions(-)
>> > 
>> > diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
>> > index cece987ba1691..f8084ded69e50 100644
>> > --- a/drivers/nvme/host/core.c
>> > +++ b/drivers/nvme/host/core.c
>> > @@ -3810,12 +3810,45 @@ static struct nvme_ns_head *nvme_alloc_ns_head(struct nvme_ctrl *ctrl,
>> >  	return ERR_PTR(ret);
>> >  }
>> >  
>> > +static int nvme_global_check_duplicate_ids(struct nvme_subsystem *this,
>> > +		struct nvme_ns_ids *ids)
>> > +{
>> > +	struct nvme_subsystem *s;
>> > +	int ret = 0;
>> > +
>> > +	/*
>> > +	 * Note that this check is racy as we try to avoid holding the global
>> > +	 * lock over the whole ns_head creation.  But it is only intended as
>> > +	 * a sanity check anyway.
>> > +	 */
>> > +	mutex_lock(&nvme_subsystems_lock);
>> > +	list_for_each_entry(s, &nvme_subsystems, entry) {
>> > +		if (s == this)
>> > +			continue;
>> > +		mutex_lock(&s->lock);
>> > +		ret = nvme_subsys_check_duplicate_ids(s, ids);
>> > +		mutex_unlock(&s->lock);
>> > +		if (ret)
>> > +			break;
>> > +	}
>> > +	mutex_unlock(&nvme_subsystems_lock);
>> > +
>> > +	return ret;
>> > +}
>> > +
>> >  static int nvme_init_ns_head(struct nvme_ns *ns, unsigned nsid,
>> >  		struct nvme_ns_ids *ids, bool is_shared)
>> >  {
>> >  	struct nvme_ctrl *ctrl = ns->ctrl;
>> >  	struct nvme_ns_head *head = NULL;
>> > -	int ret = 0;
>> > +	int ret;
>> > +
>> > +	ret = nvme_global_check_duplicate_ids(ctrl->subsys, ids);
>> > +	if (ret) {
>> > +		dev_err(ctrl->device,
>> > +			"globally duplicate IDs for nsid %d\n", nsid);
>> 
>> So what sort of mechanisms would break in nvme if we don't (as was the
>> case before this patch)?
>
> The most directly visible problem is that duplicate IDs break the
> /dev/disk/by-id/ links - all devices with the same ID have to point to
> the same link, and which device it actually resolves to is somewhat
> random.  This basically means random corruption if people actually use
> the links or the uuid functionality in any other way.
>
> Note that we have already checked for this inside a controller for a
> long time; the commit just extended the check across controllers.
>
>
>> The reason I ask is that it would seem tons of
>> instantiated qemu guests used the default, and don't set the UUID,
>> and so this is auto-generated as per the documentation [0]. However,
>> I'm suspecting the auto-generation may just be... a single value:
>
> Odd.  None of my qemu VMs show a UUID unless it is specifically set.
>

Odd indeed. With a “legacy/single namespace” setup (the drive parameter given directly on the nvme device), the uuid, eui64 and nguid should all be zeroed.
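
Roughly, that is the invocation I mean; the image path and serial below are made up for illustration:

    # illustrative only: the "legacy" style, with the drive attached
    # directly to the nvme controller device (no nvme-ns device)
    -drive file=nvm.img,if=none,id=nvm
    -device nvme,serial=deadbeef,drive=nvm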

Using the new -device nvme-ns, QEMU will randomize the uuid. The eui64, however, is generated deterministically and differs only by namespace id, so it will not be unique across subsystems (this needs fixing in QEMU).

In any case, with -device nvme-ns, the uuid and eui64 can be set explicitly with device parameters, and that should be used if predictable behavior is required.
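
A minimal sketch of what that looks like; the values are made up and the exact property syntax may differ slightly between QEMU versions:

    # illustrative values only -- the uuid/eui64 below are not from this
    # thread; pick identifiers that are actually unique per namespace
    -drive file=nvm.img,if=none,id=nvm
    -device nvme,serial=deadbeef,id=nvme0
    -device nvme-ns,drive=nvm,bus=nvme0,nsid=1,uuid=7e1ff9ad-3f8b-4c59-b2d1-1a6e2c6d8a01,eui64=0x5254000000000001

With explicit, per-guest values like these the identifiers stay globally unique and the new kernel check should not trigger.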

>> 
>> root at kdevops ~ # cat
>> /sys/devices/pci0000:00/0000:00:08.0/nvme/nvme0/nvme0n1/uuid
>> 00000001-0000-0000-0000-000000000000
>
> And the 1 really seems like a bug in your particular setup.  What qemu
> version is this?  Does it have any local or distro patches applied?

I cannot reproduce the 1 either, but QEMU might have spliced the nsid into the uuid at some point in the past; I can't remember.


