NVMe over RDMA latency

Sagi Grimberg sagi at grimberg.me
Wed Jul 13 23:52:18 PDT 2016


> With a real NVMe device on the target, the host sees latency of about 33us.
>
> root at host:~# fio t.job
> job1: (g=0): rw=randread, bs=4K-4K/4K-4K/4K-4K, ioengine=libaio, iodepth=1
> fio-2.9-3-g2078c
> Starting 1 process
> Jobs: 1 (f=1): [r(1)] [100.0% done] [113.1MB/0KB/0KB /s] [28.1K/0/0 iops] [eta 00m:00s]
> job1: (groupid=0, jobs=1): err= 0: pid=3139: Wed Jul 13 11:22:15 2016
>    read : io=2259.5MB, bw=115680KB/s, iops=28920, runt= 20001msec
>      slat (usec): min=1, max=195, avg= 2.62, stdev= 1.24
>      clat (usec): min=0, max=7962, avg=30.97, stdev=14.50
>       lat (usec): min=27, max=7968, avg=33.70, stdev=14.69
>
> And the same NVMe device tested locally on the target shows about 23us,
> so NVMe-oF added only ~10us.
>
> That's nice!
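
The t.job file itself is not quoted above; from the parameters fio reports (randread, 4k blocks, libaio, iodepth=1, runt ~= 20000 msec), it was presumably something close to the following reconstruction. The filename and direct=1 are assumptions, not taken from the post:

```ini
; hypothetical reconstruction of t.job -- the actual file is not shown
[job1]
rw=randread              ; random 4k reads, matching "rw=randread, bs=4K"
bs=4k
ioengine=libaio
iodepth=1                ; QD1, so lat directly reflects per-IO round-trip time
direct=1                 ; assumed: O_DIRECT to bypass the page cache
runtime=20
time_based
filename=/dev/nvme0n1    ; assumed: the NVMe-oF namespace block device on the host
```

At iodepth=1 the reported "lat" is essentially slat + clat per IO (2.62us + 30.97us ~= 33.70us above), which is why comparing the ~33us remote and ~23us local averages isolates the fabric overhead.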

I didn't understand: what was changed?



More information about the Linux-nvme mailing list