[PATCH v2] nvme-tcp: use in-capsule data for I/O connect

Caleb Sander csander at purestorage.com
Mon Jul 11 12:18:54 PDT 2022


> I think that the question was: does a controller that suffers from this
> inefficiency exist? Up until now, the controllers seen in the wild all
> report an IOCCSZ large enough for the I/O connect data to go in-capsule.

No, I don't know of a controller that reports an I/O command capsule
size (IOCCSZ) smaller than 64 + 1024 bytes. I discovered this behavior
while writing a mock nvme-tcp target that reports IOCCSZ = 64.
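
For context, a rough sketch of the size math involved (this is not the
driver's code; the constant and function names are made up for
illustration, and IOCCSZ is converted using the spec's 16-byte units):

#include <stdbool.h>
#include <stdio.h>

#define SQE_BYTES          64    /* fixed-size submission queue entry */
#define CONNECT_DATA_BYTES 1024  /* fabrics connect data payload */

/*
 * IOCCSZ is reported in 16-byte units and covers the 64-byte SQE plus
 * any in-capsule data, so the connect data only fits when the capsule
 * leaves at least 1024 bytes after the SQE.
 */
static bool io_connect_fits_in_capsule(unsigned int ioccsz)
{
	unsigned int capsule_bytes = ioccsz * 16;

	return capsule_bytes >= SQE_BYTES + CONNECT_DATA_BYTES;
}

int main(void)
{
	/* The mock target above reports IOCCSZ = 64, i.e. a 1024-byte capsule. */
	printf("IOCCSZ=64: %s\n",
	       io_connect_fits_in_capsule(64) ? "in-capsule" : "R2T + H2CData");
	/* IOCCSZ = 68 (1088 bytes) is the smallest value that accepts it. */
	printf("IOCCSZ=68: %s\n",
	       io_connect_fits_in_capsule(68) ? "in-capsule" : "R2T + H2CData");
	return 0;
}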

On Sun, Jul 10, 2022 at 4:05 AM Sagi Grimberg <sagi at grimberg.me> wrote:
>
>
> >> While I'm fine with this change, can you please state here why this
> >> is important?  Is there a use case where it really matters?  A controller
> >> that is unhappy if this doesn't happen?
> >
> > It's just an optimization. Without this change, we send the connect command
> > capsule and data in separate PDUs (CapsuleCmd and H2CData), and must wait for
> > the controller to respond with an R2T PDU before sending the H2CData.
> > With the change, we send a single CapsuleCmd PDU that includes the data.
> > This reduces the number of bytes (and likely packets) sent across the network,
> > and simplifies the send state machine handling in the driver.
> >
> > Using in-capsule data does not "really matter" for admin commands either,
> > but we appear to have decided that the optimization is worth it.
> > So I am just suggesting we extend the logic to the I/O connect command.
>
> I think that the question was: does a controller that suffers from this
> inefficiency exist? Up until now, the controllers seen in the wild all
> report an IOCCSZ large enough for the I/O connect data to go in-capsule.
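
To make the quoted explanation concrete, here is an illustrative sketch
(not the driver's send state machine) of the host-visible PDU flow for
the 1024-byte connect data on each path:

#include <stdio.h>

/* Print the PDUs exchanged for the connect data; illustrative only. */
static void show_connect_flow(int in_capsule)
{
	if (in_capsule) {
		/* One host PDU: the command capsule with the data appended. */
		printf("host -> ctrl: CapsuleCmd (connect SQE + 1024B data)\n");
	} else {
		/* Two host PDUs, plus a round trip waiting for the R2T. */
		printf("host -> ctrl: CapsuleCmd (connect SQE)\n");
		printf("ctrl -> host: R2T\n");
		printf("host -> ctrl: H2CData (1024B data)\n");
	}
}

int main(void)
{
	printf("in-capsule path:\n");
	show_connect_flow(1);
	printf("\ncurrent R2T path:\n");
	show_connect_flow(0);
	return 0;
}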


