[PATCH 1/4] nvme-tcp: per-controller I/O workqueues
Hannes Reinecke
hare at suse.de
Fri Jul 5 01:11:21 PDT 2024
On 7/5/24 09:10, Christoph Hellwig wrote:
> Btw, I don't think brd is what we should optimize for. brd does
> synchronous I/O from ->submit_bio which makes it very non-typical.
> Trying to get this as good as possible for QD=1 might be fine,
> but once we have deeper queue depth and/or bigger I/O size it will
> use a lot more time in the submission context (aka the workqueues here)
> than a real device.
>
Hmm. brd was the simplest choice to get a high-bandwidth target.
I'll check if I get similar performance with null_blk.
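For reference, a null_blk setup along these lines could stand in for brd as the namespace backing device; it goes through the normal blk-mq submission path instead of synchronous ->submit_bio. All parameter values below are illustrative, not taken from this thread:

```shell
# Load null_blk in blk-mq mode (queue_mode=2) as a RAM-less stand-in
# for brd; sizes and queue counts here are illustrative only.
modprobe null_blk queue_mode=2 nr_devices=1 gb=4 bs=4096 \
        submit_queues=4 irqmode=1 completion_nsec=0

# The resulting /dev/nullb0 can then be wired into the nvmet configfs
# hierarchy as the namespace device, e.g. (path abbreviated):
#   echo -n /dev/nullb0 > .../namespaces/1/device_path
```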
Cheers,
Hannes
--
Dr. Hannes Reinecke            Kernel Storage Architect
hare at suse.de                +49 911 74053 688
SUSE Software Solutions GmbH, Frankenstr. 146, 90461 Nürnberg
HRB 36809 (AG Nürnberg), GF: I. Totev, A. McDonald, W. Knoblich