[PATCH v2] nvme-tcp: use in-capsule data for I/O connect
Caleb Sander
csander at purestorage.com
Fri Jul 8 08:56:16 PDT 2022
> While I'm fine with this change, can you please state here why this
> is important? Is there a use case where it really matters? A controller
> that is unhappy if this doesn't happen?
It's just an optimization. Without this change, we send the connect command
capsule and data in separate PDUs (CapsuleCmd and H2CData), and must wait for
the controller to respond with an R2T PDU before sending the H2CData.
With the change, we send a single CapsuleCmd PDU that includes the data.
This reduces the number of bytes (and likely packets) sent across the network,
and simplifies the send state machine handling in the driver.
Using in-capsule data does not "really matter" for admin commands either,
but we appear to have decided that the optimization is worth it.
So I am just suggesting we extend the logic to the I/O connect command.
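Reusing the definitions from the sketch above, the extension I have in mind would
look roughly like this. Again this is an illustration of the idea, not the patch
itself; NVME_TCP_FABRICS_CCSZ is a made-up name for the 8,192-byte in-capsule
maximum the spec quote below refers to.

#define NVME_TCP_FABRICS_CCSZ	8192	/* in-capsule data allowed for
					 * Fabrics/Admin command capsules */

/*
 * Per-command inline limit: a fabrics command (e.g. the I/O queue Connect)
 * may always carry in-capsule data up to the Fabrics/Admin limit, even on
 * an I/O queue whose advertised capsule size leaves no room for inline
 * data.  Everything else keeps using the queue's capsule size as before.
 */
static size_t inline_data_size_for_cmd(const struct tcp_queue_sketch *q,
					bool is_fabrics_cmd)
{
	if (is_fabrics_cmd)
		return NVME_TCP_FABRICS_CCSZ;
	return q->cmnd_capsule_len - NVME_SQE_SIZE;
}

The connect data (struct nvmf_connect_data) is only 1024 bytes, so it fits
comfortably within that limit.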
On Thu, Jul 7, 2022 at 10:00 PM Christoph Hellwig <hch at lst.de> wrote:
>
> On Thu, Jul 07, 2022 at 03:12:45PM -0600, Caleb Sander wrote:
> > From the NVMe/TCP spec:
> > > The maximum amount of in-capsule data for Fabrics and Admin Commands
> > > is 8,192 bytes ... NVMe/TCP controllers must support in-capsule data
> > > for Fabrics and Admin Command Capsules
> >
> > Currently, command data is only sent in-capsule on the admin queue
> > or I/O queues that indicate support for it.
> > Send fabrics command data in-capsule for I/O queues too to avoid
> > needing a separate H2CData PDU for the connect command.
>
> While I'm fine with this change, can you please state here why this
> is important? Is there a use case where it really matters? A controller
> that is unhappy if this doesn't happen?