[PATCH v2 net-next 19/21] net/mlx5e: NVMEoTCP, data-path for DDP offload

Boris Pismenny borispismenny at gmail.com
Sun Jan 31 04:27:31 EST 2021


On 19/01/2021 6:36, David Ahern wrote:
> On 1/17/21 1:42 AM, Boris Pismenny wrote:
>> This is needed for a few reasons that are explained in detail
>> in the tcp-ddp offload documentation. See patch 21 overview
>> and rx-data-path sections. Our reasons are as follows:
> 
> I read the documentation patch, and it does not explain it and really
> should not since this is very mlx specific based on the changes.
> Different h/w will have different limitations. Given that, it would be
> best to enhance the patch description to explain why these gymnastics
> are needed for the skb.
> 

Here is the text in the documentation that describes this trade-off:
"We remark that a single TCP packet may have numerous PDUs embedded
inside. NICs can choose to offload one or more of these PDUs according
to various trade-offs. Possibly, offloading such small PDUs is of little
value, and it is better to leave it to software."

Indeed, different HW may have additional trade-offs of its own, but I
suspect this one will be important for all of them.
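
To illustrate the trade-off described above, a driver could gate the
offload on the PDU's data length. This is only a sketch; the threshold
and the helper name are made up, and the real policy is HW specific:

#include <stdbool.h>
#include <stdint.h>

/* Assumed break-even point, purely illustrative; not a real driver value. */
#define DDP_MIN_DATA_LEN 512

/* Offload only PDUs whose data payload justifies the DDP setup cost. */
static bool pdu_worth_offloading(uint32_t data_len)
{
	return data_len >= DDP_MIN_DATA_LEN;
}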

>> 1) Each SKB may contain multiple PDUs. DDP offload doesn't operate on
>> PDU headers, so these are written in the receive ring. Therefore, we
>> need to rebuild the SKB to account for it. Additionally, due to HW
>> limitations, we will only offload the first PDU in the SKB.
> 
> Are you referring to LRO skbs here? I can't imagine going through this
> for 1500 byte packets that have multiple PDUs.
> 
> 

No, that is true for any skb, and for non-LRO skbs in particular. Most
skbs do not contain multiple PDUs, but those that do are handled
gracefully by this function.
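
To make the multi-PDU case concrete, below is a rough user-space sketch
(not the mlx5e code; the function and the policy are made up for
illustration) of how a single TCP payload can be walked PDU by PDU using
the 8-byte NVMe/TCP common header, treating only the first PDU as a DDP
candidate and leaving the rest to the software path. In the real driver
the PDU headers always end up in the receive ring, while only the
offloaded PDU's data is placed directly into the destination buffers:

#include <endian.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* NVMe/TCP PDU common header, 8 bytes (see include/linux/nvme-tcp.h). */
struct nvme_tcp_ch {
	uint8_t  type;
	uint8_t  flags;
	uint8_t  hlen;	/* PDU header length */
	uint8_t  pdo;	/* PDU data offset */
	uint32_t plen;	/* total PDU length, little endian */
};

/* Walk the PDUs contained in one TCP payload. */
static void walk_pdus(const uint8_t *payload, size_t len)
{
	size_t off = 0;
	int idx = 0;

	while (len - off >= sizeof(struct nvme_tcp_ch)) {
		struct nvme_tcp_ch ch;
		uint32_t plen;

		memcpy(&ch, payload + off, sizeof(ch));
		plen = le32toh(ch.plen);
		if (plen < sizeof(ch) || plen > len - off)
			break;	/* PDU continues in the next segment */

		/*
		 * Headers are always handled by software; only the data of
		 * the first PDU in this segment would be placed by the HW.
		 */
		printf("PDU %d: type 0x%02x, plen %u -> %s\n",
		       idx, ch.type, plen,
		       idx == 0 ? "DDP candidate" : "software path");

		off += plen;
		idx++;
	}
}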
