[PATCH 1/4] nvme-tcp: per-controller I/O workqueues
Jens Axboe
axboe at kernel.dk
Fri Jul 5 01:16:55 PDT 2024
On 7/5/24 2:11 AM, Hannes Reinecke wrote:
> On 7/5/24 09:10, Christoph Hellwig wrote:
>> Btw, I don't think brd is what we should optimize for. brd does
>> synchronous I/O from ->submit_bio, which makes it highly atypical.
>> Trying to get this as good as possible for QD=1 might be fine,
>> but once we have deeper queue depth and/or bigger I/O size it will
>> use a lot more time in the submission context (aka the workqueues here)
>> than a real device.
Agree, using brd as the backend is useless if you want to optimize
for the real world, and may be actively misleading.
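
To make that concrete, here's a rough sketch (hypothetical code, not
brd's actual implementation) of what a brd-style ->submit_bio looks
like -- all of the work happens, and the bio completes, in the
submitter's context:

#include <linux/bio.h>
#include <linux/blkdev.h>

/* Sketch of a brd-style ->submit_bio: the data copy and the
 * completion both happen synchronously in the caller's context,
 * so the benchmark charges all of that CPU time to the nvme-tcp
 * I/O workqueues this series is tuning.
 */
static void ramdisk_style_submit_bio(struct bio *bio)
{
	struct bio_vec bvec;
	struct bvec_iter iter;

	bio_for_each_segment(bvec, bio, iter) {
		/* memcpy() to/from the in-memory backing pages
		 * would go here -- this is where the time goes */
	}
	bio_endio(bio);	/* complete inline, still on the submitting CPU */
}

So at deeper queue depths or larger I/O sizes the workqueue ends up
measuring memcpy() throughput rather than the transport.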
> Hmm. brd was the simplest choice to get a high-bandwidth target.
> I'll check if I get similar performance with null_blk.
Just use a normal flash drive? Even basic drives these days do
millions of IOPS and 7-8 GB/sec.
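
For contrast, a hedged sketch (not any specific driver) of how a real
blk-mq device handles submission -- ->queue_rq just posts the command
to hardware, and completion happens later from interrupt context, off
the submitter's CPU:

#include <linux/blk-mq.h>

/* Hypothetical ->queue_rq for a real device: submission is O(1),
 * the device does the heavy lifting, and completion runs later
 * from the IRQ handler via blk_mq_complete_request().
 */
static blk_status_t real_dev_queue_rq(struct blk_mq_hw_ctx *hctx,
				      const struct blk_mq_queue_data *bd)
{
	struct request *rq = bd->rq;

	blk_mq_start_request(rq);
	/* build the command and ring the doorbell here */
	return BLK_STS_OK;
}

That keeps the submission context cheap, which is exactly the
behaviour brd fails to model.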
--
Jens Axboe