[PATCH] nvme-tcp: wait socket wmem to drain in queue stop

Michael Liang mliang at purestorage.com
Wed Apr 16 22:12:41 PDT 2025


On Tue, Apr 08, 2025 at 09:00:00PM +0000, Chaitanya Kulkarni wrote:
> On 4/4/25 22:48, Michael Liang wrote:
> > +static void nvme_tcp_stop_queue_wait(struct nvme_tcp_queue *queue)
> > +{
> > +	int timeout = 100;
> > +
> 
> is there a guarantee that the above will work for all setups?
> Using a configurable timeout value helps create a more generic
> fix; do we need to consider that here?
The value here primarily reflects the latency between __tcp_transmit_skb()
and the freeing of the skb on the TX completion path. For most setups,
100 ms should be sufficient. Higher latencies are theoretically possible,
but such cases are unlikely to be typical or practical for NVMe-TCP (please
correct me if I'm wrong).

That said, I'm open to making this timeout configurable if needed, perhaps
via a module parameter? A rough sketch of what I have in mind follows.
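
For illustration only, a minimal sketch of that approach. The parameter
name "wmem_drain_timeout_ms" is hypothetical and not part of the posted
patch:

	/* Hypothetical knob; name and permissions are illustrative only. */
	static unsigned int wmem_drain_timeout_ms = 100;
	module_param(wmem_drain_timeout_ms, uint, 0644);
	MODULE_PARM_DESC(wmem_drain_timeout_ms,
			 "time (ms) to wait for socket wmem to drain on queue stop");

	static void nvme_tcp_stop_queue_wait(struct nvme_tcp_queue *queue)
	{
		int timeout = wmem_drain_timeout_ms;

		/* Poll until the TX completion path has freed all pending skbs. */
		while (timeout > 0) {
			if (!sk_wmem_alloc_get(queue->sock->sk))
				return;
			msleep(2);
			timeout -= 2;
		}
		dev_warn(queue->ctrl->ctrl.device,
			 "qid %d: wait draining sock wmem allocation timeout\n",
			 nvme_tcp_queue_id(queue));
	}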

> > +	while (timeout > 0) {
> > +		if (!sk_wmem_alloc_get(queue->sock->sk))
> > +			return;
> > +		msleep(2);
> > +		timeout -= 2;
> > +	}
> > +	dev_warn(queue->ctrl->ctrl.device,
> > +		 "qid %d: wait draining sock wmem allocation timeout\n",
> > +		 nvme_tcp_queue_id(queue));
> > +}
> > +
> 
> -ck
> 
> 

Thanks,
Michael


