[PATCHv6 RFC 1/3] nvme-multipath: Add visibility for round-robin io-policy
Nilay Shroff
nilay at linux.ibm.com
Tue Dec 24 03:31:35 PST 2024
On 12/24/24 16:31, Sagi Grimberg wrote:
>
>
>
> On 13/12/2024 6:18, Nilay Shroff wrote:
>> This patch adds nvme native multipath visibility for the round-robin
>> io-policy. It creates a "multipath" sysfs directory under the head gendisk
>> device node directory and then, under that "multipath" directory, adds a
>> link to each namespace path device that the head node refers to.
>>
>> For instance, if we have a shared namespace accessible from two different
>> controllers/paths, then we create a soft link to each path device from the
>> head disk node, as shown below:
>>
>> $ ls -l /sys/block/nvme1n1/multipath/
>> nvme1c1n1 -> ../../../../../pci052e:78/052e:78:00.0/nvme/nvme1/nvme1c1n1
>> nvme1c3n1 -> ../../../../../pci058e:78/058e:78:00.0/nvme/nvme3/nvme1c3n1
>>
>> In the above example, nvme1n1 is the head gendisk node created for a shared
>> namespace, and the namespace is accessible through the nvme1c1n1 and
>> nvme1c3n1 paths.
>>
>> For the round-robin I/O policy, we can easily infer from the above output
>> that I/O targeted at nvme1n1 will alternate between the paths nvme1c1n1
>> and nvme1c3n1.
>>
>> Signed-off-by: Nilay Shroff <nilay at linux.ibm.com>
>> ---
>> drivers/nvme/host/core.c | 3 ++
>> drivers/nvme/host/multipath.c | 81 +++++++++++++++++++++++++++++++++++
>> drivers/nvme/host/nvme.h | 18 ++++++--
>> drivers/nvme/host/sysfs.c | 14 ++++++
>> 4 files changed, 112 insertions(+), 4 deletions(-)
>>
>> diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
>> index d169a30eb935..df4cc8a27385 100644
>> --- a/drivers/nvme/host/core.c
>> +++ b/drivers/nvme/host/core.c
>> @@ -3982,6 +3982,9 @@ static void nvme_ns_remove(struct nvme_ns *ns)
>> if (!nvme_ns_head_multipath(ns->head))
>> nvme_cdev_del(&ns->cdev, &ns->cdev_device);
>> +
>> + nvme_mpath_remove_sysfs_link(ns);
>> +
>> del_gendisk(ns->disk);
>> mutex_lock(&ns->ctrl->namespaces_lock);
>> diff --git a/drivers/nvme/host/multipath.c b/drivers/nvme/host/multipath.c
>> index a85d190942bd..8e2865df2f33 100644
>> --- a/drivers/nvme/host/multipath.c
>> +++ b/drivers/nvme/host/multipath.c
>> @@ -686,6 +686,8 @@ static void nvme_mpath_set_live(struct nvme_ns *ns)
>> kblockd_schedule_work(&head->partition_scan_work);
>> }
>> + nvme_mpath_add_sysfs_link(ns->head);
>
> Why do you add the link in set_live? Why not always set this link? It is, after all, another
> path to the device?
>
That's because we can create the link from the head disk node to a path device
node only after the head node comes alive (i.e. after the head disk node entry
has been created under sysfs). For instance, in the example below,
$ ls -l /sys/block/nvme1n1/multipath/
nvme1c1n1 -> ../../../../../pci052e:78/052e:78:00.0/nvme/nvme1/nvme1c1n1
nvme1c3n1 -> ../../../../../pci058e:78/058e:78:00.0/nvme/nvme3/nvme1c3n1
The nvme1n1 (head disk node) is added under sysfs (/sys/block/nvme1n1) from
the following code path:
nvme_mpath_set_live()
  -> device_add_disk()
    -> add_disk_fwnode()
      -> device_add()
It is device_add() that creates the head disk node entry under sysfs.
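To put the ordering in code, a trimmed sketch of nvme_mpath_set_live() looks
roughly like the below (illustrative only; locking, error handling and the
rest of the function are elided):

static void nvme_mpath_set_live(struct nvme_ns *ns)
{
	struct nvme_ns_head *head = ns->head;

	if (!test_and_set_bit(NVME_NSHEAD_DISK_LIVE, &head->flags)) {
		/*
		 * device_add_disk() -> add_disk_fwnode() -> device_add()
		 * is what creates /sys/block/<head-disk>/ in sysfs.
		 */
		if (device_add_disk(&head->subsys->dev, head->disk,
				    nvme_ns_attr_groups))
			return;
		kblockd_schedule_work(&head->partition_scan_work);
	}

	/*
	 * Only now does /sys/block/<head-disk>/ exist, so the
	 * "multipath" link for this path can be created.
	 */
	nvme_mpath_add_sysfs_link(ns->head);
}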
So it's essential that we add the link only in nvme_mpath_set_live(), after
the head disk node is live. Hope this clarifies things. Please let me know if
you have any further questions.
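For reference, each entry under the "multipath" directory is an ordinary sysfs
symlink from the head disk node to the path device node. The creation boils
down to something along the lines of the minimal sketch below (illustrative
only, not the exact patch code; the helper name and signature are simplified,
and the real code needs more locking and error handling):

static void nvme_mpath_add_sysfs_link_sketch(struct nvme_ns *ns)
{
	struct device *head_dev = disk_to_dev(ns->head->disk);
	struct device *path_dev = disk_to_dev(ns->disk);

	/*
	 * Add a link named after the path device (e.g. nvme1c1n1) into the
	 * head disk's "multipath" attribute group, so that it shows up as
	 * /sys/block/nvme1n1/multipath/nvme1c1n1.
	 */
	if (sysfs_add_link_to_group(&head_dev->kobj, "multipath",
				    &path_dev->kobj, dev_name(path_dev)))
		dev_warn(head_dev, "failed to create multipath sysfs link\n");
}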
Thanks,
--Nilay