[RFC PATCH v5 01/27] nvme-tcp-offload: Add nvme-tcp-offload - NVMeTCP HW offload ULP

Petr Mladek pmladek at suse.com
Tue Jun 8 02:28:53 PDT 2021


On Wed 2021-05-19 14:13:14, Shai Malin wrote:
> This patch presents the structure of the NVMeTCP offload common layer
> driver. The module is added under "drivers/nvme/host/", and future
> offload drivers that register with it will be placed under
> "drivers/nvme/hw".
> The new driver is enabled by the Kconfig option "NVM Express over
> Fabrics TCP offload common layer".
> No change is needed in host mode in order to support the new transport
> type.
> 
> Each new vendor-specific offload driver will register with this ULP
> during its probe function by filling out nvme_tcp_ofld_dev->ops and
> nvme_tcp_ofld_dev->private_data and calling nvme_tcp_ofld_register_dev
> with the initialized struct.
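
[As an illustration only, a vendor driver's probe path would presumably
look roughly like the sketch below. Only nvme_tcp_ofld_register_dev() and
the ->ops / ->private_data fields are taken from the cover text above; the
ops struct type name is inferred from the field, the header name is taken
from the tcp-offload.h description, and every "my_vendor_*" name is a
made-up placeholder.]

#include "tcp-offload.h"	/* common structs and ops, per the cover text */

/* All vendor-side names below are hypothetical placeholders. */
static struct nvme_tcp_ofld_ops my_vendor_ofld_ops = {
	/* vendor-specific callbacks and attributes are filled in here */
};

static struct nvme_tcp_ofld_dev my_vendor_ofld_dev;

static int my_vendor_probe(void *vendor_ctx)
{
	my_vendor_ofld_dev.ops = &my_vendor_ofld_ops;
	my_vendor_ofld_dev.private_data = vendor_ctx;

	/* Hand the initialized device over to the common layer. */
	return nvme_tcp_ofld_register_dev(&my_vendor_ofld_dev);
}
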
> 
> The internal implementation:
> - tcp-offload.h:
>   Includes all common structs and ops to be used and shared by offload
>   drivers.
> 
> - tcp-offload.c:
>   Includes the init function which registers as an NVMf transport just
>   like any other transport (sketched below).
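
[For context, "registers as an NVMf transport" presumably means a module
init roughly like the sketch below. nvmf_register_transport(),
nvmf_unregister_transport() and struct nvmf_transport_ops are the existing
fabrics APIs; the transport name, option mask and create_ctrl body shown
here are illustrative guesses, not necessarily what the patch uses.]

#include <linux/module.h>
#include <linux/err.h>
#include "fabrics.h"	/* nvmf_register_transport() and friends */

/* Hypothetical create_ctrl callback; the real one lives in tcp-offload.c. */
static struct nvme_ctrl *nvme_tcp_ofld_create_ctrl(struct device *dev,
						   struct nvmf_ctrl_options *opts)
{
	return ERR_PTR(-EOPNOTSUPP);	/* placeholder */
}

static struct nvmf_transport_ops nvme_tcp_ofld_transport = {
	.name		= "tcp_offload",	/* illustrative transport name */
	.module		= THIS_MODULE,
	.required_opts	= NVMF_OPT_TRADDR,	/* illustrative option mask */
	.create_ctrl	= nvme_tcp_ofld_create_ctrl,
};

static int __init nvme_tcp_ofld_init_module(void)
{
	return nvmf_register_transport(&nvme_tcp_ofld_transport);
}

static void __exit nvme_tcp_ofld_cleanup_module(void)
{
	nvmf_unregister_transport(&nvme_tcp_ofld_transport);
}

module_init(nvme_tcp_ofld_init_module);
module_exit(nvme_tcp_ofld_cleanup_module);
MODULE_LICENSE("GPL v2");
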
> 
> Acked-by: Igor Russkikh <irusskikh at marvell.com>
> Signed-off-by: Dean Balandin <dbalandin at marvell.com>
> Signed-off-by: Prabhakar Kushwaha <pkushwaha at marvell.com>
> Signed-off-by: Omkar Kulkarni <okulkarni at marvell.com>
> Signed-off-by: Michal Kalderon <mkalderon at marvell.com>
> Signed-off-by: Ariel Elior <aelior at marvell.com>
> Signed-off-by: Shai Malin <smalin at marvell.com>
> Reviewed-by: Hannes Reinecke <hare at suse.de>

> --- a/drivers/nvme/host/Kconfig
> +++ b/drivers/nvme/host/Kconfig
> @@ -84,3 +84,19 @@ config NVME_TCP
>  	  from https://github.com/linux-nvme/nvme-cli.
>  
>  	  If unsure, say N.
> +
> +config NVME_TCP_OFFLOAD
> +	tristate "NVM Express over Fabrics TCP offload common layer"
> +	default m

Is this intentional, please?

> +	depends on INET
> +	depends on BLK_DEV_NVME
> +	select NVME_FABRICS
> +	help
> +	  This provides support for the NVMe over Fabrics protocol using
> +	  the TCP offload transport. This allows you to use remote block devices
> +	  exported using the NVMe protocol set.
> +
> +	  To configure a NVMe over Fabrics controller use the nvme-cli tool
> +	  from https://github.com/linux-nvme/nvme-cli.
> +
> +	  If unsure, say N.

I would expect the default to be "n", so that people who are not sure
or do not care about NVMe can just take the default. IMHO, that is the
usual behavior.

Best Regards,
Petr


