nvmetcli restore fails in add_subsystem
Anton Gavriliuk
antosha20xx at gmail.com
Wed Apr 9 23:59:24 PDT 2025
I'm going to do tests with the best currently available FC and Ethernet adapters.
I have these Ethernet NICs:
Device type: ConnectX7
Name: MCX755106AC-HEAT_HPE_Ax
Description: HPE InfiniBand NDR200/Ethernet 200GbE 2-port
QSFP112 PCIe5 x16 MCX755106AC-HEAT Adapter
and I will compare that with 64 Gbps dual-port Emulex HBAs.
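For the comparison I intend to run the identical fio job against the
namespace exposed over each transport; a rough sketch (device name and
job parameters below are placeholders, not final settings):

    # 4k random read against the NVMe namespace under test (device name is a placeholder)
    fio --name=randread-4k --filename=/dev/nvme1n1 --direct=1 \
        --ioengine=io_uring --rw=randread --bs=4k --iodepth=32 \
        --numjobs=8 --runtime=60 --time_based --group_reporting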
Sure, for best performance network and storage traffic shouldn't be
mixed, and good design is required. But at the same time, NVMe/TCP
will work anywhere iSCSI works, and this is an advantage. There is
also the possibility to switch to RDMA (RoCEv2/iWARP/InfiniBand) if
required, using the same Ethernet NICs.
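For example, switching a host from TCP to RoCEv2 on the same NIC is
mostly a matter of changing the transport argument to nvme-cli,
assuming the target also exposes an RDMA portal (addresses and NQN
below are made up):

    # NVMe/TCP attach (address, port and NQN are made-up examples)
    nvme connect -t tcp  -a 192.168.10.20 -s 4420 -n nqn.2025-04.io.example:testsubsys
    # same subsystem over RoCEv2 on the same ConnectX-7 port,
    # provided the target side has an RDMA portal configured
    nvme connect -t rdma -a 192.168.10.20 -s 4420 -n nqn.2025-04.io.example:testsubsys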
For 20+ years I promoted FC as the best choice for storage traffic. But now I
think it's time to change: Ethernet (NVMe/TCP) is already mature
enough, at least for POCs, not just in raw performance but also in
management, congestion control, security, etc., especially with the
upcoming tx/rx zero-copy techniques (io_uring and devmem-tcp).
Anton
Thu, 10 Apr 2025 at 09:10, Hannes Reinecke <hare at suse.de>:
>
> On 4/9/25 13:58, Anton Gavriliuk wrote:
> > Hhmmm.....
> >
> > The link - https://documentation.suse.com/sles/15-SP6/html/SLES-all/cha-nvmeof.html
> >
> > is quite confusing to me,
> >
> > 17.4.3 Marvell
> >
> > FC-NVMe is supported on QLE269x and QLE27xx adapters. FC-NVMe support
> > is enabled by default in the Marvell® QLogic® QLA2xxx Fibre Channel
> > driver.
> >
> > So if "the qla2xxx driver or/and the HBA firmware do not support the
> > target mode anymore.", it is not clear whether the NVMe/FC target will
> > work with QLE269x and QLE27xx adapters or not.
> >
> It does not. The HBA model is irrelevant as the code to run nvme target
> mode on QLogic HBAs is not present.
>
> >> Though he was able to get the target mode running with an Emulex HBA (lpfc driver).
> >
> > Long live Emulex!
> >
> Kudos to James Smart; he was the main driver behind the NVMe-FC effort.
>
> > P.S., I don't like FC, but I want to do some NVMe/FC vs TCP
> > performance tests, so I must set up NVMe/FC for the tests.
> >
> I am not sure how far you will get with that. There is a differential
> in transport speeds (10/25/50/100 for Ethernet, 16/32/64 for FC),
> and the best you can hope for is to reach line speed.
> In my tests we reach line speed out of the box for 10GigE and
> 16G FC. I did some tests with 32G FC, but that was inconclusive
> as it _heavily_ depended on the hardware setup (How many CPUs?
> How many queues are exposed? Which backend did one use?).
> And I do expect the picture to be even worse for 100GigE.
>
> But it'll be on my agenda to do some benchmarking here, too.
>
> From a practical side, FC is far more robust than Ethernet.
> Thing is, traffic on FC is well managed, and one can expect
> reliable performance characteristics throughout the lifetime
> of the connection.
> Ethernet, OTOH, is prone to outside influence, as typically the
> switches are used for all traffic (not just I/O), and hence packet
> drops and performance drops are far more likely.
>
> Which in the end will be far more important than raw speed.
>
> Cheers,
>
> Hannes
> --
> Dr. Hannes Reinecke Kernel Storage Architect
> hare at suse.de +49 911 74053 688
> SUSE Software Solutions GmbH, Frankenstr. 146, 90461 Nürnberg
> HRB 36809 (AG Nürnberg), GF: I. Totev, A. McDonald, W. Knoblich