[PATCH v4 01/20] nvme-tcp-offload: Add nvme-tcp-offload - NVMeTCP HW offload ULP
Christoph Hellwig
hch at lst.de
Thu Jul 1 06:34:24 PDT 2021
> +/* Kernel includes */
> +/* Driver includes */
I think these comments are a little pointless.
> +int nvme_tcp_ofld_register_dev(struct nvme_tcp_ofld_dev *dev)
Can you spell out "offload" everywhere instead of abbreviating it to "ofld"?
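E.g. (naming suggestion only, the expanded identifiers below are what
I'd expect them to become):

	int nvme_tcp_offload_register_dev(struct nvme_tcp_offload_dev *dev)

and the same for the rest of the ofld_ names.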
> +int nvme_tcp_ofld_report_queue_err(struct nvme_tcp_ofld_queue *queue)
> +{
> + /* Placeholder - invoke error recovery flow */
> +
> + return 0;
> +}
Please don't add any placeholders like this. The whole file is
still pretty small with all the patches applied, so no need to split it.
> +static struct nvmf_transport_ops nvme_tcp_ofld_transport = {
> + .name = "tcp_offload",
> + .module = THIS_MODULE,
> + .required_opts = NVMF_OPT_TRADDR,
> + .allowed_opts = NVMF_OPT_TRSVCID | NVMF_OPT_NR_WRITE_QUEUES |
> + NVMF_OPT_HOST_TRADDR | NVMF_OPT_CTRL_LOSS_TMO |
> + NVMF_OPT_RECONNECT_DELAY | NVMF_OPT_HDR_DIGEST |
> + NVMF_OPT_DATA_DIGEST | NVMF_OPT_NR_POLL_QUEUES |
> + NVMF_OPT_TOS | NVMF_OPT_HOST_IFACE,
> +};
> +
> +static int __init nvme_tcp_ofld_init_module(void)
> +{
> + nvmf_register_transport(&nvme_tcp_ofld_transport);
> +
> + return 0;
> +}
> +
> +static void __exit nvme_tcp_ofld_cleanup_module(void)
> +{
> + nvmf_unregister_transport(&nvme_tcp_ofld_transport);
> +}
Looking at the final result this doesn't do much. Assuming we want
to support these kinds of whacky offloads (which I'd rather not do),
the proper way would be to allow registering multiple transport_ops
structures for a given name rather than adding an indirection that
duplicates a whole lot of code.
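Roughly something like this in fabrics.c, where the ->claim() callback
is made up for the sake of the example and the plain software tcp
transport (which has no ->claim) stays the fallback:

	static struct nvmf_transport_ops *
	nvmf_lookup_transport(struct nvmf_ctrl_options *opts)
	{
		struct nvmf_transport_ops *ops;

		lockdep_assert_held(&nvmf_transports_rwsem);

		list_for_each_entry(ops, &nvmf_transports, entry) {
			if (strcmp(ops->name, opts->transport))
				continue;
			/*
			 * First registration for this name that claims
			 * the target wins; an offload driver returns
			 * true for addresses its hardware owns.
			 */
			if (!ops->claim || ops->claim(opts))
				return ops;
		}
		return NULL;
	}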
> + /* Offload device specific driver context */
> + int num_hw_vectors;
> +};
As far as I can tell this is about queues, not "vectors" of some kind.
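i.e. if this really is the queue count, name it as such (suggestion
only):

	/* number of hardware queues supported by the offload device */
	int num_hw_queues;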
> +struct nvme_tcp_ofld_req {
> + struct nvme_request req;
> + struct nvme_command nvme_cmd;
> + struct list_head queue_entry;
> + struct nvme_tcp_ofld_queue *queue;
> +
> + /* Offload device specific driver context */
> + void *private_data;
> +
> + /* async flag is used to distinguish between async and IO flow
> + * in common send_req() of nvme_tcp_ofld_ops.
> + */
> + bool async;
> +
> + void (*done)(struct nvme_tcp_ofld_req *req,
> + union nvme_result *result,
> + __le16 status);
This always points to nvme_tcp_ofld_req_done, why the costly indirection?
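i.e. the completion path can simply do (sketch, assuming the call site
has the result and status at hand):

	nvme_tcp_ofld_req_done(req, &result, status);

and the ->done member goes away.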
> + /* Error callback function */
> + int (*report_err)(struct nvme_tcp_ofld_queue *queue);
> +};
This seems to always point to nvme_tcp_ofld_report_queue_err, why the
indirection?
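Same here - a direct call should do (sketch):

	nvme_tcp_ofld_report_queue_err(queue);

and the ->report_err member can be dropped as well.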