NVMf initiator persistent across boots

Johannes Thumshirn jthumshirn at suse.de
Thu Mar 8 02:13:13 PST 2018


On Thu, Mar 08, 2018 at 12:03:18PM +0200, Max Gurtovoy wrote:
> 
> 
> On 3/8/2018 10:39 AM, Johannes Thumshirn wrote:
> > On Wed, Mar 07, 2018 at 02:47:52PM +0200, Max Gurtovoy wrote:
> > > > [Unit]
> > > > Description=NVMf auto discovery service
> > > > After=systemd-modules-load.service network-online.target
> > > > 
> > > > [Service]
> > > > Type=oneshot
> > > > ExecStart=/usr/bin/nvme connect-all
> > > > StandardOutput=journal
> > > > 
> > > > [Timer]
> > > > OnUnitActiveSec=1min
> > > > 
> > > > [Install]
> > > > WantedBy=multi-user.target timers.target
> > > > -- 
> > > > 
> > > > That would simply run nvme connect-all once every minute, say.
> > > > The only problem is that it relies on the kernel to fail
> > > > duplicate subsystems. We could enforce that in nvme-cli for that
> > > > matter though (we can compare against sysfs address and subsysnqn).
> > > 
> > > Yes, we can also add a flag to the nvme discover command to add the
> > > parameters to the discovery file in case they don't exist yet.
> > > 
> > > 
> > > > 
> > > > Johannes is probably the one who knows better than me whether this is
> > > > the correct way to go...
> > > 
> > > Johannes, any comment ?
> > > From what I tried, we need to create .timer and .service files for
> > > systemd...
> > 
> > Sorry, I was on FTO for some days.
> > 
> > One thing that I feel is missing in this whole "let's just call nvme
> > connect-all and we're done" discussion is that we currently can't really
> > specify how many connections to the target we want to initiate.
> > 
> > I usually run several nvme connect calls with different --host-traddr
> > arguments to connect from multiple HCAs to the target.
> > 
> > I tried to hack in the ability to specify a list of host traddrs for nvme
> > connect and connect-all and then just loop the connect call in
> > nvme-cli, but this somehow feels wrong.
> 
> Why not add -w <host-traddr> to the /etc/nvme/discovery.conf file lines?
> I guess connect-all will use it too, right? (Need to check.)

I meant multiple -w arguments. My usual connect is:
nvme connect -t rdma -a 1.1.1.1 -n nvme-test -s 4420 -w 1.1.1.2
nvme connect -t rdma -a 1.1.1.1 -n nvme-test -s 4420 -w 1.1.1.3
to connect from both HCAs to the target.
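
If connect-all really honors a -w per line (as you say, needs checking),
a rough, untested sketch of what I'd need in /etc/nvme/discovery.conf
would be one entry per HCA, e.g.:

-t rdma -a 1.1.1.1 -s 4420 -w 1.1.1.2
-t rdma -a 1.1.1.1 -s 4420 -w 1.1.1.3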

> 
> > 
> > For the systemd service + timer units above, they look good at first
> > sight, but I'm not sure I like the unconditional "polling" of the
> > connect-all call.
> 
> Yes, this is not the perfect solution, but I guess a future
> discovery_manager should solve it; we need to find a solution until then.
> 
> I thought about adding new commands/flags to add/remove parameters to
> /etc/nvme/discovery.conf.
> How about adding:
> nvme persist-add -t rdma -a 11.11.11.11 -s 4420 -w 11.12.12.12 (will add
> "-t rdma -a 11.11.11.11 -s 4420 -w 11.12.12.12" to the discovery file)
> nvme persist-remove -t rdma -a 11.11.11.11 -s 4420 -w 11.12.12.12 (will
> remove "-t rdma -a 11.11.11.11 -s 4420 -w 11.12.12.12" from the discovery file).
> 
> Or is it better to add flags to nvme discover?

Good question. I'd be fine with both. The only thing that should also be
considered is that if you already need the NVMf connection in the initrd,
you'll have to rebuild the initrd after each change to
/etc/nvme/discovery.conf, and thus after every nvme
persist-add/persist-remove (or discover --persistent) call.
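
On a dracut-based initrd that would mean something like the following
(just a sketch, your initrd tooling may differ):

# after editing /etc/nvme/discovery.conf, regenerate the initrd
dracut --force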

Byte,
	Johannes

-- 
Johannes Thumshirn                                          Storage
jthumshirn at suse.de                                +49 911 74053 689
SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg
GF: Felix Imendörffer, Jane Smithard, Graham Norton
HRB 21284 (AG Nürnberg)
Key fingerprint = EC38 9CAB C2C4 F25D 8600 D0D0 0393 969D 2D76 0850


