[PATCH 0/1] nvmet: View subsystem connections on nvmeof target

Redouane BOUFENGHOUR redouane.boufenghour at shadow.tech
Wed Sep 20 08:36:01 PDT 2023


You can find the patch here:
https://lore.kernel.org/linux-nvme/20230920141205.26637-2-redouane.boufenghour@shadow.tech/

We chose to put the information in configfs for nvmet, which follows
the spirit of the "target" module's handling of connection information.

In my humble opinion, this information would be better located in
sysfs, so that the kernel provides the info to userland. That being
said, splitting the source of truth for the nvmet module across two
separate pseudo-filesystems could generate confusion and make it
harder for users to find all the information the nvmet module
provides. This is one of the reasons why the "target" module handles
this whole ordeal with a two-step procedure: first, userland creates
the ACL directory (since the kernel cannot create a directory in
configfs), and then the kernel module fills in the info through a
dedicated configfs attribute page ("dynamic_sessions") in the created
directory. (The "target" implementation is in
drivers/target/iscsi/iscsi_target_configfs.c, in the
lio_target_tpg_dynamic_sessions_show() function.)
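
For illustration, here is a minimal sketch of what the equivalent
show callback could look like on the nvmet side. This is not the
actual patch: the attribute name matches our POC's
attr_connected_ctrls, the iteration assumes the existing
subsys->ctrls list, and the output format is made up for the example:

static ssize_t nvmet_subsys_attr_connected_ctrls_show(struct config_item *item,
		char *page)
{
	struct nvmet_subsys *subsys = to_subsys(item);
	struct nvmet_ctrl *ctrl;
	ssize_t len = 0;

	mutex_lock(&subsys->lock);
	list_for_each_entry(ctrl, &subsys->ctrls, subsys_entry) {
		/* One "cntlid hostnqn" line per connected controller.
		 * The peer address is omitted because retrieving it
		 * needs transport support, which is part of what is
		 * being discussed here. */
		len += scnprintf(page + len, PAGE_SIZE - len, "%u %s\n",
				ctrl->cntlid, ctrl->hostnqn);
		if (len >= PAGE_SIZE - 1)
			break;
	}
	mutex_unlock(&subsys->lock);

	return len;
}
CONFIGFS_ATTR_RO(nvmet_subsys_, attr_connected_ctrls);

The attribute would still need to be wired into nvmet_subsys_attrs[],
after which userland could simply read
/sys/kernel/config/nvmet/subsystems/<subsysnqn>/attr_connected_ctrls.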

To sum it up, I think it all boils down to this question: do we want
to maintain some similarity between the target and nvmet modules, or
should the nvmet module go its own way and implement this differently?

Regards,


On Wed, Sep 20, 2023 at 5:31 PM Redouane BOUFENGHOUR
<redouane.boufenghour at shadow.tech> wrote:
>
> On Wed, Sep 20, 2023 at 4:42 PM Sagi Grimberg <sagi at grimberg.me> wrote:
>>
>>
>> > Hi,
>> > We need to list the NQNs and the IPs connected to our NVMe-oF target
>> > server. In fact we can have several dozen connections on a machine, and
>> > we need to know which server is connected to which subsystem.
>> > Currently there is no way to find out which machine is connected to which
>> > subsystem except through the NQN, and the NQN can be modified by the
>> > initiator, so for us it is not necessarily a source we can trust.
>> > We've created a POC to retrieve the NQN and associated IP when a client
>> > connects to a subsystem.
>>
>> No patch attached afaict.
>>
>> > We've tested this POC on TCP and RDMA connections and it works.
>> > For the POC we write to the attr_connected_ctrls file in the subsystems
>> > directory for each subsystem.
>> > We're not sure this is the right implementation. Either we create a ctrls
>> > directory on the userland side, populated with information from the
>> > connected ctrl when the ctrl number folder is created; or we use the
>> > hosts directory to add an entry with the machine's IP address when it
>> > connects, but this folder is also used by ACLs; or we stick to a simple
>> > attribute listing all connected controllers' NQNs and addresses.
>> > That being said, given the limited size of a configfs page and the large
>> > size of some addresses (e.g. IPv6 addresses), that would drastically
>> > limit the number of controllers and addresses we could list in this
>> > attribute.
>> > What do you think?
>>
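
(For scale: a configfs attribute buffer is a single page, typically
4096 bytes, while one "hostnqn traddr" line can approach
NVMF_NQN_SIZE (223) + INET6_ADDRSTRLEN (46) bytes plus separators,
roughly 270 bytes, so a single attribute would top out at around 15
controllers.)
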
>> I don't think that this should be a configfs attribute at all. And yes,
>> a single attribute is going to be unnecessarily limited.
>>
>> This type of information probably belongs in debugfs. It is possible to
>> have nvmet create a root in debugfs and build a hierarchy that the user
>> can traverse to get this information from something like:
>> /<debugfs>/nvmet/<subsys>/<ctrl>/
>>
>> A controller can expose different things like host_traddr and hostnqn.
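
For what it's worth, here is a minimal sketch of what that hierarchy
could look like. Only the debugfs and seq_file APIs below are real;
the nvmet-side function names and the exact layout are assumptions:

#include <linux/debugfs.h>
#include <linux/seq_file.h>

static struct dentry *nvmet_debugfs_root;

static int nvmet_ctrl_hostnqn_show(struct seq_file *m, void *unused)
{
	struct nvmet_ctrl *ctrl = m->private;

	seq_printf(m, "%s\n", ctrl->hostnqn);
	return 0;
}
DEFINE_SHOW_ATTRIBUTE(nvmet_ctrl_hostnqn);

/* At module init: creates the /<debugfs>/nvmet/ root. */
static void nvmet_init_debugfs(void)
{
	nvmet_debugfs_root = debugfs_create_dir("nvmet", NULL);
}

/* When a controller connects: creates
 * /<debugfs>/nvmet/<subsysnqn>/ctrl<cntlid>/hostnqn, with subsys_dir
 * created the same way when the subsystem is added. A host_traddr
 * file would follow the same pattern once the transports can report
 * the peer address. */
static void nvmet_ctrl_debugfs_add(struct nvmet_ctrl *ctrl,
		struct dentry *subsys_dir)
{
	char name[16];
	struct dentry *dir;

	snprintf(name, sizeof(name), "ctrl%u", ctrl->cntlid);
	dir = debugfs_create_dir(name, subsys_dir);
	debugfs_create_file("hostnqn", 0400, dir, ctrl,
			&nvmet_ctrl_hostnqn_fops);
}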



-- 

Redouane Boufenghour

Storage Engineer

Join us at shadow.tech


