[PATCH v3] nvme: Fix zns drives without append support to export correct permissions

Pankaj Raghav p.raghav at samsung.com
Tue Mar 22 02:20:48 PDT 2022


Commit 2f4c9ba23b88 ("nvme: export zoned namespaces without Zone
Append support read-only") marked zoned namespaces without Zone Append
support as read-only. It does so by setting NVME_NS_FORCE_RO in
ns->flags in nvme_update_zone_info, and nvme_update_disk_info later
checks this flag and sets the disk to ro.
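In simplified form (a rough sketch of that flow, not the verbatim
kernel code; the Zone Append check is abbreviated):

	/* nvme_update_zone_info(): force ro when Zone Append is missing */
	if (!zone_append_supported)	/* placeholder for the real check */
		set_bit(NVME_NS_FORCE_RO, &ns->flags);

	/* nvme_update_disk_info(): ran afterwards, so it honoured the flag */
	set_disk_ro(disk, (id->nsattr & NVME_NS_ATTR_RO) ||
		    test_bit(NVME_NS_FORCE_RO, &ns->flags));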

But commit 73d90386b559 ("nvme: cleanup zone information
initialization") later rearranged the calls so that
nvme_update_disk_info is called before nvme_update_zone_info, and the
disk is therefore no longer marked as ro. The call order cannot simply
be reverted, because nvme_update_zone_info sets certain queue
parameters, such as zone_write_granularity, that depend on the prior
call to nvme_update_disk_info.
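After that rearrangement the relevant part of nvme_update_ns_info looks
roughly like this (simplified sketch, not the verbatim kernel source):

	nvme_update_disk_info(ns->disk, ns, id);	/* checks NVME_NS_FORCE_RO here ... */

	if (ns->head->ids.csi == NVME_CSI_ZNS) {
		ret = nvme_update_zone_info(ns, lbaf);	/* ... but the flag is only set here */
		if (ret)
			goto out_unfreeze;
	}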

Remove the call to set_disk_ro from nvme_update_disk_info and instead
call set_disk_ro after both nvme_update_disk_info and
nvme_update_zone_info have run, so that the permission is set correctly
for ZNS drives. The same applies to the multipath disk path.

Fixes: 73d90386b559 ("nvme: cleanup zone information initialization")
Signed-off-by: Pankaj Raghav <p.raghav at samsung.com>
---
Changes since v1:
- Add a new helper to set permission directly from nvme_update_zone_info
  instead of calling nvme_update_disk_info again (Damien)

Changes since v2:
- Adding a new call to set_disk_ro in nvme_update_zone_info would
  create a race window and could set the disk to read-write when all
  Zone Append requirements are satisfied, even when id->nsattr has the
  ro bit set. Call set_disk_ro directly after the calls to
  nvme_update_disk_info and nvme_update_zone_info to avoid this
  incorrect behaviour. (Christoph)

- Open code the call to set_disk_ro as there are no other users
  (Chaitanya and Christoph)

 drivers/nvme/host/core.c | 8 +++++---
 1 file changed, 5 insertions(+), 3 deletions(-)

diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index 677fa4bf76d3..75c9c4fa9049 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -1830,9 +1830,6 @@ static void nvme_update_disk_info(struct gendisk *disk,
 	nvme_config_discard(disk, ns);
 	blk_queue_max_write_zeroes_sectors(disk->queue,
 					   ns->ctrl->max_zeroes_sectors);
-
-	set_disk_ro(disk, (id->nsattr & NVME_NS_ATTR_RO) ||
-		test_bit(NVME_NS_FORCE_RO, &ns->flags));
 }
 
 static inline bool nvme_first_scan(struct gendisk *disk)
@@ -1891,6 +1888,8 @@ static int nvme_update_ns_info(struct nvme_ns *ns, struct nvme_id_ns *id)
 			goto out_unfreeze;
 	}
 
+	set_disk_ro(ns->disk, (id->nsattr & NVME_NS_ATTR_RO) ||
+		test_bit(NVME_NS_FORCE_RO, &ns->flags));
 	set_bit(NVME_NS_READY, &ns->flags);
 	blk_mq_unfreeze_queue(ns->disk->queue);
 
@@ -1903,6 +1902,9 @@ static int nvme_update_ns_info(struct nvme_ns *ns, struct nvme_id_ns *id)
 	if (nvme_ns_head_multipath(ns->head)) {
 		blk_mq_freeze_queue(ns->head->disk->queue);
 		nvme_update_disk_info(ns->head->disk, ns, id);
+		set_disk_ro(ns->head->disk,
+			    (id->nsattr & NVME_NS_ATTR_RO) ||
+				    test_bit(NVME_NS_FORCE_RO, &ns->flags));
 		nvme_mpath_revalidate_paths(ns);
 		blk_stack_limits(&ns->head->disk->queue->limits,
 				 &ns->queue->limits, 0);
-- 
2.25.1



