[PATCH v21 05/20] nvme-tcp: Add DDP offload control path
Max Gurtovoy
mgurtovoy at nvidia.com
Wed Dec 20 03:30:25 PST 2023
On 18/12/2023 22:00, Aurelien Aptel wrote:
> Max Gurtovoy <mgurtovoy at nvidia.com> writes:
>>> @@ -739,6 +937,9 @@ static int nvme_tcp_recv_pdu(struct nvme_tcp_queue *queue, struct sk_buff *skb,
>>> size_t rcv_len = min_t(size_t, *len, queue->pdu_remaining);
>>> int ret;
>>>
>>> + if (test_bit(NVME_TCP_Q_OFF_DDP, &queue->flags))
>>> + nvme_tcp_resync_response(queue, skb, *offset);
>>
>> lets try to optimize the fast path with:
>>
>> if (IS_ENABLED(CONFIG_ULP_DDP) && test_bit(NVME_TCP_Q_OFF_DDP,
>> &queue->flags))
>> nvme_tcp_resync_response(queue, skb, *offset);
>>
>
> For this one, when ULP_DDP is disabled, I do see 1 extra mov instruction
> but no branching... I think it's negligible personally.
>
> $ gdb drivers/nvme/host/nvme-tcp.ko
> (gdb) disass /s nvme_tcp_recv_skb
> ...
> 1088 static int nvme_tcp_recv_pdu(struct nvme_tcp_queue *queue, struct sk_buff *skb,
> 1089 unsigned int *offset, size_t *len)
> 1090 {
> 1091 struct nvme_tcp_hdr *hdr;
> 1092 char *pdu = queue->pdu;
> 0x00000000000046a6 <+118>: mov %rsi,-0x70(%rbp)
>
> 880 return (queue->pdu_remaining) ? NVME_TCP_RECV_PDU :
> 0x00000000000046aa <+122>: test %ebx,%ebx
> 0x00000000000046ac <+124>: je 0x4975 <nvme_tcp_recv_skb+837>
>
> 1093 size_t rcv_len = min_t(size_t, *len, queue->pdu_remaining);
> 0x00000000000046b2 <+130>: cmp %r14,%rbx
>
> 1100 &pdu[queue->pdu_offset], rcv_len);
> 0x00000000000046b5 <+133>: movslq 0x19c(%r12),%rdx
>
> 1099 ret = skb_copy_bits(skb, *offset,
> 0x00000000000046bd <+141>: mov -0x58(%rbp),%rdi
>
> 1093 size_t rcv_len = min_t(size_t, *len, queue->pdu_remaining);
> 0x00000000000046c1 <+145>: cmova %r14,%rbx
>
> ./arch/x86/include/asm/bitops.h:
> 205 return ((1UL << (nr & (BITS_PER_LONG-1)))
> 0x00000000000046c5 <+149>: mov 0x1d8(%r12),%rax
>
> Extra mov of queue->flags offset here ^^^^^^^^
>
> (gdb) p &((struct nvme_tcp_queue *)0)->flags
> $1 = (unsigned long *) 0x1d8
Ok, we can keep it as is.

Sagi,
any comments on the NVMf patches, or on the others, before we send the next version?
We would like this to make the 6.8 merge window.