[PATCH v2] nvme: Fix ZNS drives without Zone Append support to export correct permissions
Pankaj Raghav
p.raghav at samsung.com
Wed Mar 16 02:34:23 PDT 2022
Commit 2f4c9ba23b88 ("nvme: export zoned namespaces without Zone
Append support read-only") exported zoned namespaces without Zone Append
support as read-only. It did so by setting NVME_NS_FORCE_RO in
ns->flags in nvme_update_zone_info, and nvme_update_disk_info would later
check this flag and mark the disk as read-only.
But commit 73d90386b559 ("nvme: cleanup zone information
initialization") later rearranged the code so that nvme_update_disk_info
is called before nvme_update_zone_info, and the disk is therefore no
longer marked read-only. The call order cannot simply be reverted because
nvme_update_zone_info sets certain queue parameters such as
zone_write_granularity that depend on the prior call to
nvme_update_disk_info.
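For reference, the broken ordering looks roughly like this (a simplified
sketch of the flow, assuming nvme_update_ns_info() as the common caller;
not the exact upstream code):

  nvme_update_ns_info()
    nvme_update_disk_info()
      set_disk_ro(disk, ...)                  /* flag not set yet */
    nvme_update_zone_info()
      set_bit(NVME_NS_FORCE_RO, &ns->flags)   /* too late for set_disk_ro() */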
Add a helper, nvme_set_disk_mode_ro, that is called from
nvme_update_zone_info to set the permission for ZNS drives correctly.
Fixes: 73d90386b559 ("nvme: cleanup zone information initialization")
Signed-off-by: Pankaj Raghav <p.raghav at samsung.com>
---
Changes since v1:
- Add a new helper to set the permission directly from nvme_update_zone_info
  instead of calling nvme_update_disk_info again (Damien)
Note:
Christoph pointed out that there is a race window in which the disk could
be marked as writable while zones are being revalidated.
I already responded to that comment in the email thread: this will not be
the case, because we mark the disk as read-only before we start
revalidating it.
https://lore.kernel.org/all/1bbc81a0-19f0-433b-28c2-b22d28176e37@grimberg.me/T/#m0e19022babda81339188ee334551a5fb867abf4c
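To spell out the ordering that avoids the race (again a simplified sketch;
nvme_revalidate_zones() as the revalidation entry point is my assumption,
not part of this patch):

  nvme_update_ns_info()
    nvme_update_disk_info()
    nvme_update_zone_info()
      nvme_set_disk_mode_ro(ns)   /* disk marked read-only here ... */
  nvme_revalidate_zones()         /* ... before zone revalidation starts */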
drivers/nvme/host/core.c | 3 +--
drivers/nvme/host/nvme.h | 5 +++++
drivers/nvme/host/zns.c | 1 +
3 files changed, 7 insertions(+), 2 deletions(-)
diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index 51c08f206cbf..cde33f2a3a5a 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -1855,8 +1855,7 @@ static void nvme_update_disk_info(struct gendisk *disk,
blk_queue_max_write_zeroes_sectors(disk->queue,
ns->ctrl->max_zeroes_sectors);
- set_disk_ro(disk, (id->nsattr & NVME_NS_ATTR_RO) ||
- test_bit(NVME_NS_FORCE_RO, &ns->flags));
+ set_disk_ro(disk, (id->nsattr & NVME_NS_ATTR_RO));
}
static inline bool nvme_first_scan(struct gendisk *disk)
diff --git a/drivers/nvme/host/nvme.h b/drivers/nvme/host/nvme.h
index e7ccdb119ede..b6800bdd6ea9 100644
--- a/drivers/nvme/host/nvme.h
+++ b/drivers/nvme/host/nvme.h
@@ -607,6 +607,11 @@ static inline bool nvme_is_path_error(u16 status)
return (status & 0x700) == 0x300;
}
+static inline void nvme_set_disk_mode_ro(struct nvme_ns *ns)
+{
+ set_disk_ro(ns->disk, test_bit(NVME_NS_FORCE_RO, &ns->flags));
+}
+
/*
* Fill in the status and result information from the CQE, and then figure out
* if blk-mq will need to use IPI magic to complete the request, and if yes do
diff --git a/drivers/nvme/host/zns.c b/drivers/nvme/host/zns.c
index 9f81beb4df4e..4ab685fa02b4 100644
--- a/drivers/nvme/host/zns.c
+++ b/drivers/nvme/host/zns.c
@@ -113,6 +113,7 @@ int nvme_update_zone_info(struct nvme_ns *ns, unsigned lbaf)
blk_queue_flag_set(QUEUE_FLAG_ZONE_RESETALL, q);
blk_queue_max_open_zones(q, le32_to_cpu(id->mor) + 1);
blk_queue_max_active_zones(q, le32_to_cpu(id->mar) + 1);
+ nvme_set_disk_mode_ro(ns);
free_data:
kfree(id);
return status;
--
2.25.1