[bug report] iommu_dma_unmap_sg() is very slow when running IO from remote numa node
Ming Lei
ming.lei at redhat.com
Tue Jul 27 18:32:17 PDT 2021
On Mon, Jul 26, 2021 at 3:51 PM John Garry <john.garry at huawei.com> wrote:
>
> On 23/07/2021 11:21, Ming Lei wrote:
> >> Thanks, I was also going to suggest the latter, since it's what
> >> arm_smmu_cmdq_issue_cmdlist() does with IRQs masked that should be most
> >> indicative of where the slowness most likely stems from.
> > The improvement from 'iommu.strict=0' is very small:
> >
>
> Have you tried turning off the IOMMU to ensure that this is really just
> an IOMMU problem?
>
> You can try setting CONFIG_ARM_SMMU_V3=n in the defconfig or passing the
> cmdline param iommu.passthrough=1 to bypass the SMMU (equivalent to
> disabling it for kernel drivers).
Bypassing the SMMU via iommu.passthrough=1 basically makes no difference
to this issue.
From the fio log, submission latency is good, but completion latency is
pretty bad; maybe writes to PCI memory aren't being committed to the HW
in time?
BTW, adding an mb() at the exit of nvme_queue_rq() doesn't make a difference.
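For reference, the barrier experiment was roughly the following: a minimal
sketch against the tail of nvme_queue_rq() in drivers/nvme/host/pci.c
(surrounding code elided; the added mb() is the only change, and it is an
experiment, not a proposed fix):

static blk_status_t nvme_queue_rq(struct blk_mq_hw_ctx *hctx,
                                  const struct blk_mq_queue_data *bd)
{
        ...                                      /* setup elided */
        blk_mq_start_request(req);
        nvme_submit_cmd(nvmeq, &cmnd, bd->last); /* writel() to the SQ doorbell */
        mb();   /* experimental full barrier after the posted doorbell
                 * write -- made no measurable difference */
        return BLK_STS_OK;
}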
The fio log after passing iommu.passthrough=1 follows:
[root@ampere-mtjade-04 ~]# taskset -c 0 ~/git/tools/test/nvme/io_uring 10 1 /dev/nvme1n1 4k
+ fio --bs=4k --ioengine=io_uring --fixedbufs --registerfiles --hipri --iodepth=64 --iodepth_batch_submit=16 --iodepth_batch_complete_min=16 --filename=/dev/nvme1n1 --direct=1 --runtime=10 --numjobs=1 --rw=randread --name=test --group_reporting
test: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=io_uring, iodepth=64
fio-3.27
Starting 1 process
Jobs: 1 (f=1): [r(1)][100.0%][r=1538MiB/s][r=394k IOPS][eta 00m:00s]
test: (groupid=0, jobs=1): err= 0: pid=3053: Tue Jul 27 20:57:04 2021
read: IOPS=393k, BW=1536MiB/s (1611MB/s)(15.0GiB/10001msec)
slat (usec): min=12, max=343, avg=18.54, stdev= 3.47
clat (usec): min=46, max=487, avg=140.15, stdev=22.72
lat (usec): min=63, max=508, avg=158.72, stdev=22.29
clat percentiles (usec):
| 1.00th=[ 87], 5.00th=[ 104], 10.00th=[ 113], 20.00th=[ 123],
| 30.00th=[ 130], 40.00th=[ 135], 50.00th=[ 141], 60.00th=[ 145],
| 70.00th=[ 151], 80.00th=[ 159], 90.00th=[ 167], 95.00th=[ 176],
| 99.00th=[ 196], 99.50th=[ 206], 99.90th=[ 233], 99.95th=[ 326],
| 99.99th=[ 392]
bw ( MiB/s): min= 1533, max= 1539, per=100.00%, avg=1537.99, stdev= 1.36, samples=19
iops : min=392672, max=394176, avg=393724.63, stdev=348.25, samples=19
lat (usec) : 50=0.01%, 100=3.64%, 250=96.30%, 500=0.06%
cpu : usr=17.58%, sys=82.03%, ctx=1113, majf=0, minf=5
IO depths : 1=0.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.1%, 32=0.1%, >=64=100.0%
submit : 0=0.0%, 4=0.0%, 8=0.0%, 16=100.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=100.0%, 32=0.0%, 64=0.1%, >=64=0.0%
issued rwts: total=3933712,0,0,0 short=0,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=64
Run status group 0 (all jobs):
READ: bw=1536MiB/s (1611MB/s), 1536MiB/s-1536MiB/s (1611MB/s-1611MB/s), io=15.0GiB (16.1GB), run=10001-10001msec
Disk stats (read/write):
nvme1n1: ios=3890950/0, merge=0/0, ticks=529137/0, in_queue=529137, util=99.15%
[root@ampere-mtjade-04 ~]#
[root@ampere-mtjade-04 ~]# taskset -c 80 ~/git/tools/test/nvme/io_uring 10 1 /dev/nvme1n1 4k
+ fio --bs=4k --ioengine=io_uring --fixedbufs --registerfiles --hipri --iodepth=64 --iodepth_batch_submit=16 --iodepth_batch_complete_min=16 --filename=/dev/nvme1n1 --direct=1 --runtime=10 --numjobs=1 --rw=randread --name=test --group_reporting
test: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=io_uring, iodepth=64
fio-3.27
Starting 1 process
Jobs: 1 (f=1): [r(1)][100.0%][r=150MiB/s][r=38.4k IOPS][eta 00m:00s]
test: (groupid=0, jobs=1): err= 0: pid=3062: Tue Jul 27 20:57:23 2021
read: IOPS=38.4k, BW=150MiB/s (157MB/s)(1501MiB/10002msec)
slat (usec): min=14, max=376, avg=20.21, stdev= 4.66
clat (usec): min=439, max=2457, avg=1640.85, stdev=17.01
lat (usec): min=559, max=2494, avg=1661.09, stdev=15.67
clat percentiles (usec):
| 1.00th=[ 1614], 5.00th=[ 1631], 10.00th=[ 1647], 20.00th=[ 1647],
| 30.00th=[ 1647], 40.00th=[ 1647], 50.00th=[ 1647], 60.00th=[ 1647],
| 70.00th=[ 1647], 80.00th=[ 1647], 90.00th=[ 1647], 95.00th=[ 1647],
| 99.00th=[ 1647], 99.50th=[ 1663], 99.90th=[ 1729], 99.95th=[ 1827],
| 99.99th=[ 2057]
bw ( KiB/s): min=153600, max=153984, per=100.00%, avg=153876.21, stdev=88.10, samples=19
iops : min=38400, max=38496, avg=38469.05, stdev=22.02, samples=19
lat (usec) : 500=0.01%, 1000=0.01%
lat (msec) : 2=99.96%, 4=0.03%
cpu : usr=2.00%, sys=97.65%, ctx=1056, majf=0, minf=5
IO depths : 1=0.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.1%, 32=0.1%, >=64=100.0%
submit : 0=0.0%, 4=0.0%, 8=0.0%, 16=100.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=100.0%, 32=0.0%, 64=0.1%, >=64=0.0%
issued rwts: total=384288,0,0,0 short=0,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=64
Run status group 0 (all jobs):
READ: bw=150MiB/s (157MB/s), 150MiB/s-150MiB/s (157MB/s-157MB/s), io=1501MiB (1574MB), run=10002-10002msec
Disk stats (read/write):
nvme1n1: ios=380266/0, merge=0/0, ticks=554940/0, in_queue=554940, util=99.22%