[Report] requests are submitted to hardware in reverse order from nvme/virtio-blk queue_rqs()

Ming Lei ming.lei at redhat.com
Wed Jan 24 20:23:44 PST 2024


On Thu, Jan 25, 2024 at 07:32:37AM +0900, Damien Le Moal wrote:
> On 1/25/24 00:41, Keith Busch wrote:
> > On Wed, Jan 24, 2024 at 07:59:54PM +0800, Ming Lei wrote:
> >> Requests are added to the plug list in reverse order, and both
> >> virtio-blk and nvme retrieve requests from the plug list in order,
> >> so requests end up being submitted to hardware in reverse order via
> >> nvme_queue_rqs() or virtio_queue_rqs(), see:
> >>
> >> 	io_uring       submit_bio  vdb      6302096     4096
> >> 	io_uring       submit_bio  vdb     12235072     4096
> >> 	io_uring       submit_bio  vdb      7682280     4096
> >> 	io_uring       submit_bio  vdb     11912464     4096
> >> 	io_uring virtio_queue_rqs  vdb     11912464     4096
> >> 	io_uring virtio_queue_rqs  vdb      7682280     4096
> >> 	io_uring virtio_queue_rqs  vdb     12235072     4096
> >> 	io_uring virtio_queue_rqs  vdb      6302096     4096
> >>
> >>
> >> May this reorder be one problem for virtio-blk and nvme-pci?
> > 
> > For nvme, it depends. Usually it's probably not a problem, though some
> > pci ssd's have optimizations for sequential IO that might not work if
> > these get reordered.
> 
> ZNS and zoned virtio-blk drives... Cannot use io_uring at the moment. But I do
> not think we reliably can anyway, unless the issuer is CPU/ring aware and
> always issues writes to a zone using the same ring.

It isn't related to io_uring.

What matters is the combination of plugging, the none I/O scheduler, and
queue_rqs(). With the none scheduler, any IOs in a single batch are added
to the plug list, then dispatched to hardware in reversed order via
queue_rqs().

Thanks,
Ming
