[PATCH v23 06/20] nvme-tcp: Add DDP data-path
Aurelien Aptel
aaptel at nvidia.com
Thu Mar 7 07:44:13 PST 2024
Sagi Grimberg <sagi at grimberg.me> writes:
>> +static void nvme_tcp_complete_request(struct request *rq,
>> +				       __le16 status,
>> +				       union nvme_result result,
>> +				       __u16 command_id)
>> +{
>> +	struct nvme_tcp_request *req = blk_mq_rq_to_pdu(rq);
>> +
>> +	if (nvme_tcp_is_ddp_offloaded(req)) {
>> +		req->nvme_status = status;
>
> this can just be called req->status I think.
Since req->status already exists, we checked whether it can safely be
reused instead of adding a separate nvme_status field, and it appears
to be fine. We will remove nvme_status.
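
(For reference, the relevant fields of struct nvme_tcp_request would
then look roughly as below. This is a trimmed sketch, not the full
struct: status is the field that already exists upstream and gets
reused, while result is presumably the field this series adds for the
DDP path.)

struct nvme_tcp_request {
	struct nvme_request	req;
	/* ... other fields elided ... */
	__le16			status;	/* existing field, reused here */
	union nvme_result	result;	/* added by this series for DDP */
};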
>> +		req->result = result;
> I think it will be cleaner to always capture req->result and req->status
> regardless of ddp offload.
Sure, we will set status and result in the function before the offload
check:
static void nvme_tcp_complete_request(struct request *rq,
				      __le16 status,
				      union nvme_result result,
				      __u16 command_id)
{
	struct nvme_tcp_request *req = blk_mq_rq_to_pdu(rq);

	req->status = status;
	req->result = result;

	if (nvme_tcp_is_ddp_offloaded(req)) {
		/* complete when teardown is confirmed to be done */
		nvme_tcp_teardown_ddp(req->queue, rq);
		return;
	}

	if (!nvme_try_complete_req(rq, status, result))
		nvme_complete_rq(rq);
}
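
For completeness, capturing status/result before the offload check
means the teardown-done path no longer needs the CQE, so the deferred
completion could look roughly like the sketch below. The callback name
and the way the request is passed as context are illustrative, not
necessarily the exact code in the patch:

/* Sketch: invoked by the DDP layer once teardown of an offloaded
 * request has finished; name and context argument are hypothetical.
 */
static void nvme_tcp_ddp_teardown_done(void *ddp_ctx)
{
	struct request *rq = ddp_ctx;
	struct nvme_tcp_request *req = blk_mq_rq_to_pdu(rq);

	/* Complete with the status/result captured in
	 * nvme_tcp_complete_request() before teardown started.
	 */
	if (!nvme_try_complete_req(rq, req->status, req->result))
		nvme_complete_rq(rq);
}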