[PATCH 1/4] nvme-tcp: per-controller I/O workqueues
Hannes Reinecke
hare at suse.de
Thu Jul 4 00:36:50 PDT 2024
On 7/3/24 21:17, Tejun Heo wrote:
> Hello,
>
> On Wed, Jul 03, 2024 at 10:14:14PM +0300, Sagi Grimberg wrote:
> ...
>> None of these reasons are the claimed reason to use separate workqueues in
>> this patch. The claim is that it is more efficient, i.e. has less overhead.
>>
>> The commit msg is the following:
>> "Implement per-controller I/O workqueues to reduce workqueue contention
>> during I/O."
>
> Hmm... it's not impossible for the concurrency accounting in pool_workqueues
> to show up if the issue rate is *really* high but I'd be surprised if that
> actually matters given that the backend pool is shared. Maybe I'm missing
> something but I don't see a reason why multiple workqueues would be more
> efficient than a shared one.
>
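
(For context: mainline nvme-tcp funnels the io_work of every queue of
every controller through the single module-wide workqueue; simplified
from drivers/nvme/host/tcp.c, where the bound/unbound choice below is
the existing wq_unbound module parameter:)

#include <linux/module.h>
#include <linux/workqueue.h>

static bool wq_unbound;
module_param(wq_unbound, bool, 0644);

/* One module-wide workqueue, shared by the io_work of every queue of
 * every controller; "bound" vs "unbound" is the WQ_UNBOUND flag. */
static struct workqueue_struct *nvme_tcp_wq;

static int __init nvme_tcp_init_module(void)
{
        unsigned int wq_flags = WQ_MEM_RECLAIM | WQ_HIGHPRI;

        if (wq_unbound)
                wq_flags |= WQ_UNBOUND;

        nvme_tcp_wq = alloc_workqueue("nvme_tcp_wq", wq_flags, 0);
        if (!nvme_tcp_wq)
                return -ENOMEM;
        return 0;
}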
Well, I seem to be running into the 'really high' issue rate case:
                  unbound workqueue        bound workqueue
                  single wq   multi wq     single wq   multi wq
4k seq read:      247MiB/s    249MiB/s     263MiB/s    365MiB/s
4k rand read:     294MiB/s    305MiB/s     279MiB/s    307MiB/s
4k seq write:     504MiB/s    499MiB/s     521MiB/s    550MiB/s
4k rand write:    531MiB/s    536MiB/s     476MiB/s    453MiB/s
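
The multi wq columns have each controller allocating its own workqueue
at creation time instead of sharing nvme_tcp_wq, roughly like this (a
sketch of the approach, not the literal patch; the io_wq field name is
illustrative):

static int nvme_tcp_alloc_io_wq(struct nvme_tcp_ctrl *ctrl)
{
        unsigned int wq_flags = WQ_MEM_RECLAIM | WQ_HIGHPRI;

        if (wq_unbound)
                wq_flags |= WQ_UNBOUND;

        /* one workqueue per controller, named after its instance */
        ctrl->io_wq = alloc_workqueue("nvme_tcp_wq_%d", wq_flags, 0,
                                      ctrl->ctrl.instance);
        if (!ctrl->io_wq)
                return -ENOMEM;
        return 0;
}

with the I/O path queueing onto the controller's workqueue instead of
the global one:

        queue_work_on(queue->io_cpu, queue->ctrl->io_wq, &queue->io_work);

Each controller then gets its own pool_workqueue and max_active
accounting, which would fit your point that the shared accounting only
shows up at really high issue rates.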
Cheers,
Hannes
--
Dr. Hannes Reinecke                   Kernel Storage Architect
hare at suse.de                        +49 911 74053 688
SUSE Software Solutions GmbH, Frankenstr. 146, 90461 Nürnberg
HRB 36809 (AG Nürnberg), GF: I. Totev, A. McDonald, W. Knoblich