[PATCH] nvme: report capacity 0 for non supported ZNS SSDs

Niklas Cassel <Niklas.Cassel at wdc.com>
Mon Nov 2 12:10:22 EST 2020


On Mon, Nov 02, 2020 at 02:04:11PM +0100, Javier González wrote:
> On 30.10.2020 14:31, Niklas Cassel wrote:
> > On Thu, Oct 29, 2020 at 07:57:53PM +0100, Javier González wrote:
> > > Allow ZNS SSDs to be presented to the host even when they implement
> > > features that are not supported by the kernel zoned block device.
> > > 
> > > Instead of rejecting the SSD at the NVMe driver level, deal with this in
> > > the block layer by setting capacity to 0, as we do with other things
> > > such as unsupported PI configurations. This allows the use of standard
> > > management tools such as nvme-cli to choose a different format or
> > > firmware slot that is compatible with the Linux zoned block device.
> > > 
> > > Signed-off-by: Javier González <javier.gonz at samsung.com>
> > > ---
> > >  drivers/nvme/host/core.c |  5 +++++
> > >  drivers/nvme/host/nvme.h |  1 +
> > >  drivers/nvme/host/zns.c  | 31 ++++++++++++++-----------------
> > >  3 files changed, 20 insertions(+), 17 deletions(-)
> > > 
> > > diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
> > > index c190c56bf702..9ca4f0a6ff2c 100644
> > > --- a/drivers/nvme/host/core.c
> > > +++ b/drivers/nvme/host/core.c

(snip)

> > > @@ -44,20 +44,23 @@ int nvme_update_zone_info(struct gendisk *disk, struct nvme_ns *ns,
> > >  	struct nvme_id_ns_zns *id;
> > >  	int status;
> > > 
> > > -	/* Driver requires zone append support */
> > > +	ns->zone_sup = true;
> > 
> > I don't think it is wise to assign it to true here.
> > E.g. if kzalloc() fails, if nvme_submit_sync_cmd() fails,
> > or if nvme_set_max_append() fails, you have already set this to true,
> > but the zoc and power-of-2 checks were never performed.
> 
> I do not think it will matter much, since it is just an internal variable.
> If any of the checks you mention fail, then the namespace will not even
> be initialized.
> 
> Is there anything I am missing?

We know that another function will perform some operation (setting capacity
to 0) depending on ns->zone_sup. Therefore setting ns->zone_sup = true, and
only later back to false in the same function, introduces a theoretical race
window.
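
To make the window concrete (a sketch only; the reader side below is
illustrative, not the actual core.c code path):

	/* writer: nvme_update_zone_info() as proposed in the patch */
	ns->zone_sup = true;	/* transiently true ... */
	/* ... zoc and power-of-2 checks run here ... */
	ns->zone_sup = false;	/* ... flipped back on failure */

	/* concurrent reader: capacity handling keyed off the flag */
	if (!ns->zone_sup)
		set_capacity(disk, 0);	/* a reader racing with the window
					 * above sees 'true' and skips this */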

IMHO, it is simply better coding practice to use a local variable, so that
the boolean is never true, even briefly, for a namespace where it will end
up false at the end of the function.
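
In other words, the usual pattern (a minimal sketch; 'supp' is just an
illustrative name):

	bool supp = true;

	/* ... perform all validation, clearing 'supp' on any failure ... */

	ns->zone_sup = supp;	/* single, final visible transition */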

Kind regards,
Niklas

> 
> > Perhaps something like this would be more robust:
> > 
> > @@ -53,18 +53,19 @@ int nvme_update_zone_info(struct nvme_ns *ns, unsigned lbaf)
> >        struct nvme_command c = { };
> >        struct nvme_id_ns_zns *id;
> >        int status;
> > +       bool new_ns_supp = true;
> > +
> > +       /* default to NS not supported */
> > +       ns->zoned_ns_supp = false;
> > 
> > -       /* Driver requires zone append support */
> >        if (!(le32_to_cpu(log->iocs[nvme_cmd_zone_append]) &
> >                        NVME_CMD_EFFECTS_CSUPP)) {
> >                dev_warn(ns->ctrl->device,
> >                        "append not supported for zoned namespace:%d\n",
> >                        ns->head->ns_id);
> > -               return -EINVAL;
> > -       }
> > -
> > -       /* Lazily query controller append limit for the first zoned namespace */
> > -       if (!ns->ctrl->max_zone_append) {
> > +               new_ns_supp = false;
> > +       } else if (!ns->ctrl->max_zone_append) {
> > +               /* Lazily query controller append limit for the first zoned namespace */
> >                status = nvme_set_max_append(ns->ctrl);
> >                if (status)
> >                        return status;
> > @@ -80,19 +81,16 @@ int nvme_update_zone_info(struct nvme_ns *ns, unsigned lbaf)
> >        c.identify.csi = NVME_CSI_ZNS;
> > 
> >        status = nvme_submit_sync_cmd(ns->ctrl->admin_q, &c, id, sizeof(*id));
> > -       if (status)
> > -               goto free_data;
> > +       if (status) {
> > +               kfree(id);
> > +               return status;
> > +       }
> > 
> > -       /*
> > -        * We currently do not handle devices requiring any of the zoned
> > -        * operation characteristics.
> > -        */
> >        if (id->zoc) {
> >                dev_warn(ns->ctrl->device,
> >                        "zone operations:%x not supported for namespace:%u\n",
> >                        le16_to_cpu(id->zoc), ns->head->ns_id);
> > -               status = -EINVAL;
> > -               goto free_data;
> > +               new_ns_supp = false;
> >        }
> > 
> >        ns->zsze = nvme_lba_to_sect(ns, le64_to_cpu(id->lbafe[lbaf].zsze));
> > @@ -100,17 +98,14 @@ int nvme_update_zone_info(struct nvme_ns *ns, unsigned lbaf)
> >                dev_warn(ns->ctrl->device,
> >                        "invalid zone size:%llu for namespace:%u\n",
> >                        ns->zsze, ns->head->ns_id);
> > -               status = -EINVAL;
> > -               goto free_data;
> > +               new_ns_supp = false;
> >        }
> > 
> > +       ns->zoned_ns_supp = new_ns_supp;
> >        q->limits.zoned = BLK_ZONED_HM;
> >        blk_queue_flag_set(QUEUE_FLAG_ZONE_RESETALL, q);
> >        blk_queue_max_open_zones(q, le32_to_cpu(id->mor) + 1);
> >        blk_queue_max_active_zones(q, le32_to_cpu(id->mar) + 1);
> > -free_data:
> > -       kfree(id);
> > -       return status;
> > }
> > 
> > static void *nvme_zns_alloc_report_buffer(struct nvme_ns *ns,
> > 
> 
> Sure, we can use a local assignment as you suggest. I'll send a V2 with
> this.
> 
> Javier
> 

