[LSF/MM/BPF TOPIC] Improving Zoned Storage Support
Jens Axboe
axboe at kernel.dk
Wed Jan 17 12:06:19 PST 2024
On 1/17/24 11:43 AM, Jens Axboe wrote:
> Certainly slower. Now let's try and have the scheduler place the same 4
> threads where it sees fit:
>
> IOPS=1.56M, BW=759MiB/s, IOS/call=32/31
>
> Yikes! That's still substantially more than 200K IOPS even with heavy
> contention, let's take a look at the profile:
>
> - 70.63% io_uring [kernel.kallsyms] [k] queued_spin_lock_slowpath
>    - submitter_uring_fn
>       - entry_SYSCALL_64
>       - do_syscall_64
>          - __se_sys_io_uring_enter
>             - 70.62% io_submit_sqes
>                  blk_finish_plug
>                  __blk_flush_plug
>                - blk_mq_flush_plug_list
>                   - 69.65% blk_mq_run_hw_queue
>                        blk_mq_sched_dispatch_requests
>                      - __blk_mq_sched_dispatch_requests
>                         + 60.61% dd_dispatch_request
>                         + 8.98% blk_mq_dispatch_rq_list
>                         + 0.98% dd_insert_requests
>
> which is exactly as expected, we're spending 70% of the CPU cycles
> banging on dd->lock.
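For context on why every submitter ends up in the slowpath: both
dispatch and insertion in mq-deadline funnel through a single
per-device lock. Roughly this shape (simplified sketch, not the
actual source; __dd_dispatch_request() stands in for the real
dispatch logic):

  struct deadline_data {
          spinlock_t lock;        /* the dd->lock in the profile above */
          /* fifo lists, sort lists, etc elided */
  };

  static struct request *dd_dispatch_request(struct blk_mq_hw_ctx *hctx)
  {
          struct deadline_data *dd = hctx->queue->elevator->elevator_data;
          struct request *rq;

          spin_lock(&dd->lock);
          rq = __dd_dispatch_request(dd);
          spin_unlock(&dd->lock);
          return rq;
  }

Every hardware queue run from every submitting CPU takes that one
lock, which is why the contention scales so badly with thread count.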
Case in point, I spent 10 min hacking up some smarts on the insertion
and dispatch side, and then we get:
IOPS=2.54M, BW=1240MiB/s, IOS/call=32/32
or about a 63% improvement when running the _exact same thing_. Looking
at profiles:
- 13.71% io_uring [kernel.kallsyms] [k] queued_spin_lock_slowpath
cutting the >70% of CPU time lost to lock contention down to ~14%. No
change in data structures, just an ugly hack (sketched below) that:
- Serializes dispatch, no point having someone hammer on dd->lock for
  dispatch when a dispatch run is already in progress
- Serializes insertions, punting to one of N buckets if insertion is
  already busy. The current inserter will notice someone else did that,
  and will prune the buckets and re-run the insertion.
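In rough pseudo-C, the shape of it is something like the below. This
is not the actual hack - DD_INSERT_BUCKETS, dd_prune_buckets(),
__dd_dispatch_request() and __dd_insert_requests() are made-up names,
and the races (e.g. requests landing in a bucket right as the lock
holder finishes pruning) are glossed over:

  /* Dispatch: if someone already holds dd->lock for dispatch, let them
   * do the work instead of piling up on the lock.
   */
  static struct request *dd_dispatch_request(struct blk_mq_hw_ctx *hctx)
  {
          struct deadline_data *dd = hctx->queue->elevator->elevator_data;
          struct request *rq;

          if (!spin_trylock(&dd->lock))
                  return NULL;
          rq = __dd_dispatch_request(dd);         /* existing dispatch logic */
          spin_unlock(&dd->lock);
          return rq;
  }

  /* Insertion: if dd->lock is busy, punt the requests to one of N side
   * buckets instead of queueing up on the lock. The current lock holder
   * notices the non-empty buckets, prunes them and re-runs insertion
   * before dropping the lock.
   */
  static void dd_insert_requests(struct blk_mq_hw_ctx *hctx,
                                 struct list_head *list, blk_insert_t flags)
  {
          struct deadline_data *dd = hctx->queue->elevator->elevator_data;

          if (!spin_trylock(&dd->lock)) {
                  int b = raw_smp_processor_id() % DD_INSERT_BUCKETS;

                  spin_lock(&dd->buckets[b].lock);
                  list_splice_tail_init(list, &dd->buckets[b].list);
                  spin_unlock(&dd->buckets[b].lock);
                  return;
          }

          dd_prune_buckets(dd);                   /* splice parked buckets back in */
          __dd_insert_requests(dd, list, flags);  /* existing insertion logic */
          spin_unlock(&dd->lock);
  }

The point being that the hot path only ever sees a trylock on dd->lock
plus an uncontended (or lightly contended) bucket lock, instead of N
CPUs spinning on the same cacheline.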
And while I seriously doubt that my quick hack is 100% foolproof, it
works as a proof of concept. If we can get that kind of reduction with
minimal effort, well...
--
Jens Axboe