[PATCH v7 00/23] nvme-tcp receive offloads

Sagi Grimberg sagi at grimberg.me
Thu Oct 27 03:35:48 PDT 2022


> Hi,
> 
> The nvme-tcp receive offloads series v7 was sent to both net-next and
> nvme. It is the continuation of v5, which was sent in July 2021:
> https://lore.kernel.org/netdev/20210722110325.371-1-borisp@nvidia.com/
> V7 is now working on real HW.
> 
> The feature will also be presented at netdev this week:
> https://netdevconf.info/0x16/session.html?NVMeTCP-Offload-%E2%80%93-Implementation-and-Performance-Gains
> 
> Currently the series is aligned to net-next; please let us know if you would prefer otherwise.
> 
> Thanks,
> Shai, Aurelien

Hey Shai & Aurelien

Can you please add, next time, documentation of the limitations
that this offload has in terms of compatibility? For example (from
my own imagination):
1. bonding/teaming/other-stacking?
2. TLS (sw/hw)?
3. any sort of tunneling/overlay?
4. VF/PF?
5. any nvme features?
6. ...

And what are your plans to address each, if at all?
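
To make the ask concrete, here is the kind of compatibility guard I'd
expect to see documented and enforced at offload-setup time. This is
purely my own sketch, not code from the series; the function name is
invented, only netif_is_bond_master()/is_vlan_dev() are real helpers:

#include <linux/netdevice.h>
#include <linux/if_vlan.h>

/* Invented name, for illustration only. */
static bool nvme_tcp_offload_compatible(struct net_device *netdev)
{
	/*
	 * Stacked devices (bonding/teaming, VLAN, ...) hide the physical
	 * function that owns the HW offload context, and a failover can
	 * silently move the flow to a port with no context programmed.
	 */
	if (netif_is_bond_master(netdev) || is_vlan_dev(netdev))
		return false;

	/*
	 * Similar questions apply to TLS (the NIC sees ciphertext, or two
	 * offloads compete for the same flow), tunnels/overlays (the inner
	 * TCP stream is not what the HW parser sees), and the VF/PF split
	 * (which function owns the offload resources?).
	 */
	return true;
}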

Also, does this have a path to userspace? For example, almost all
nvme-tcp targets live in userspace.
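
For context on why I'm asking: a userspace target today frames PDUs
itself out of a plain recv() stream, roughly like the sketch below (my
own illustration; recv_full()/recv_pdu() are made-up names, though the
8-byte common header layout is from the NVMe/TCP spec). A receive
offload that places data directly into destination buffers needs some
uAPI story before this copy path can benefit:

#include <endian.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>
#include <sys/types.h>
#include <sys/socket.h>

/* NVMe/TCP common PDU header (8 bytes, per the NVMe/TCP spec). */
struct nvme_tcp_common_hdr {
	uint8_t  type;		/* PDU type */
	uint8_t  flags;
	uint8_t  hlen;		/* header length */
	uint8_t  pdo;		/* PDU data offset */
	uint32_t plen;		/* total PDU length (LE), incl. this header */
} __attribute__((packed));

/* Hypothetical helper: loop until exactly 'len' bytes are received. */
static int recv_full(int fd, void *buf, size_t len)
{
	size_t got = 0;

	while (got < len) {
		ssize_t n = recv(fd, (char *)buf + got, len - got, 0);

		if (n <= 0)
			return -1;
		got += n;
	}
	return 0;
}

/* Receive one PDU: header first, then the rest, all through copies. */
static int recv_pdu(int fd)
{
	struct nvme_tcp_common_hdr hdr;
	uint32_t plen;
	char *pdu;

	if (recv_full(fd, &hdr, sizeof(hdr)))
		return -1;
	plen = le32toh(hdr.plen);
	if (plen < sizeof(hdr))
		return -1;
	pdu = malloc(plen);
	if (!pdu)
		return -1;
	memcpy(pdu, &hdr, sizeof(hdr));
	if (recv_full(fd, pdu + sizeof(hdr), plen - sizeof(hdr))) {
		free(pdu);
		return -1;
	}
	/* ... dispatch on hdr.type ... */
	free(pdu);
	return 0;
}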

I don't think I see in the code any limits, such as the maximum number
of connections that can be offloaded on a single device/port. Can you
share some details on this?
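
To illustrate the kind of limit I mean (invented names and numbers, not
from the series): HW typically has a finite pool of offload contexts,
so I'd expect something like a per-device cap with a graceful software
fallback when it is exhausted:

#include <linux/atomic.h>

#define MAX_OFFLOADED_QUEUES 128	/* made-up HW context limit */

struct offload_dev_state {
	atomic_t nr_offloaded;
};

static bool offload_try_get(struct offload_dev_state *st)
{
	if (atomic_inc_return(&st->nr_offloaded) > MAX_OFFLOADED_QUEUES) {
		atomic_dec(&st->nr_offloaded);
		return false;	/* fall back to non-offloaded RX */
	}
	return true;
}

static void offload_put(struct offload_dev_state *st)
{
	atomic_dec(&st->nr_offloaded);
}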

Thanks.


