[PATCH 3/3] nvme-tcp: per-controller I/O workqueues

Sagi Grimberg sagi at grimberg.me
Mon Jul 8 05:12:15 PDT 2024



On 08/07/2024 10:10, Hannes Reinecke wrote:
> From: Hannes Reinecke <hare at suse.de>
>
> Implement per-controller I/O workqueues to reduce workqueue contention
> during I/O and improve I/O performance.
>
> Performance comparison:
>                 baseline  rx/tx     blk-mq    multiple workqueues
> 4k seq write:   449MiB/s  480MiB/s  524MiB/s  540MiB/s
> 4k rand write:  410MiB/s  481MiB/s  524MiB/s  539MiB/s
> 4k seq read:    478MiB/s  481MiB/s  566MiB/s  582MiB/s
> 4k rand read:   547MiB/s  480MiB/s  511MiB/s  633MiB/s
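
(For context, the idea as I read it is roughly the sketch below: each
controller allocates its own workqueue when it is created, instead of all
queues sharing the module-wide one. The function and field names here are
only illustrative, not taken from the actual patch.)

/* Sketch only -- a per-controller I/O workqueue, allocated together with
 * the controller and destroyed with it. "io_wq" is an assumed field name.
 */
static int nvme_tcp_alloc_ctrl_wq(struct nvme_tcp_ctrl *ctrl)
{
        ctrl->io_wq = alloc_workqueue("nvme_tcp_io_wq_%d",
                                      WQ_MEM_RECLAIM | WQ_HIGHPRI, 0,
                                      ctrl->ctrl.instance);
        return ctrl->io_wq ? 0 : -ENOMEM;
}

static void nvme_tcp_free_ctrl_wq(struct nvme_tcp_ctrl *ctrl)
{
        if (ctrl->io_wq)
                destroy_workqueue(ctrl->io_wq);
}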

I am still puzzled by these results.

Is this for 2 controllers, or more?
It is interesting that the rand read sees a higher boost than the seq read.
Is this in the nature of the SSD? What happens with null_blk?

CCing Tejun. Is it possible that using two different workqueues
for a symmetrical workload is better than a single global workqueue?
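
(To make that concrete: today, as far as I can tell, every queue's io_work
is queued onto the module-global workqueue, and with the patch it would go
onto a per-controller one instead. "io_wq" below is the same assumed field
name as in the sketch above, not necessarily what the patch uses.)

/* Current behaviour (roughly): all controllers share nvme_tcp_wq. */
queue_work_on(queue->io_cpu, nvme_tcp_wq, &queue->io_work);

/* Patched behaviour (sketch): each controller queues onto its own wq. */
queue_work_on(queue->io_cpu, queue->ctrl->io_wq, &queue->io_work);

The work items and the CPUs they are bound to are the same either way,
which is what makes the difference in the numbers surprising to me.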


