[bug report] iommu_dma_unmap_sg() is very slow when running IO from a remote NUMA node

John Garry <john.garry@huawei.com>
Fri Jul 9 04:04:24 PDT 2021


On 09/07/2021 11:26, Robin Murphy wrote:
> On 2021-07-09 09:38, Ming Lei wrote:
>> Hello,
>>
>> I observed that NVMe performance is very bad when running fio on one
>> CPU (aarch64) in the remote NUMA node compared with the NVMe PCI NUMA node.
>>
>> Please see the test results [1]: 327K vs. 34.9K IOPS.
>>
>> The latency trace shows that one big difference is in iommu_dma_unmap_sg():
>> 1111 nsecs vs. 25437 nsecs.
> 
> Are you able to dig down further into that? iommu_dma_unmap_sg() itself 
> doesn't do anything particularly special, so whatever makes a difference 
> is probably happening at a lower level, and I suspect there's probably 
> an SMMU involved. If for instance it turns out to go all the way down to 
> __arm_smmu_cmdq_poll_until_consumed() because polling MMIO from the 
> wrong node is slow, there's unlikely to be much you can do about that 
> other than the global "go faster" knobs (iommu.strict and 
> iommu.passthrough) with their associated compromises.

There was also the disable_msipolling option:

https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c#n42

But I am not sure whether that platform even supports MSI polling (or has
an SMMUv3).
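
If it does turn out to be SMMUv3 with the driver built in, the option
would have to go on the kernel command line. A rough sketch, with the
sysfs path and parameter prefix assumed from the driver Makefile, so
worth double-checking on the actual system:

# the parameter is 0444, i.e. read-only at runtime; check the current value
cat /sys/module/arm_smmu_v3/parameters/disable_msipolling
# to change it, reboot with:
#   arm_smmu_v3.disable_msipolling=1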

You could also try the iommu.forcedac=1 cmdline option, but I doubt it will
help since the issue was reported to be NUMA-related.
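
If you want to try it, something like this should work on a grubby-based
distro (an assumption on my part - otherwise edit the bootloader config by
hand):

grubby --update-kernel=ALL --args="iommu.forcedac=1"
reboot
cat /proc/cmdline   # confirm the option made it onto the command line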

> 
> Robin.
> 
>> [1] fio test & results
>>
>> 1) fio test results:
>>
>> - run fio on local CPU
>> taskset -c 0 ~/git/tools/test/nvme/io_uring 10 1 /dev/nvme1n1 4k
>> + fio --bs=4k --ioengine=io_uring --fixedbufs --registerfiles --hipri 
>> --iodepth=64 --iodepth_batch_submit=16 --iodepth_batch_complete_min=16 
>> --filename=/dev/nvme1n1 --direct=1 --runtime=10 --numjobs=1 
>> --rw=randread --name=test --group_reporting
>>
>> IOPS: 327K
>> avg latency of iommu_dma_unmap_sg(): 1111 nsecs
>>
>>
>> - run fio on remote CPU
>> taskset -c 80 ~/git/tools/test/nvme/io_uring 10 1 /dev/nvme1n1 4k
>> + fio --bs=4k --ioengine=io_uring --fixedbufs --registerfiles --hipri 
>> --iodepth=64 --iodepth_batch_submit=16 --iodepth_batch_complete_min=16 
>> --filename=/dev/nvme1n1 --direct=1 --runtime=10 --numjobs=1 
>> --rw=randread --name=test --group_reporting
>>
>> IOPS: 34.9K
>> avg latency of iommu_dma_unmap_sg(): 25437 nsecs
>>
>> 2) system info
>> [root@ampere-mtjade-04 ~]# lscpu | grep NUMA
>> NUMA node(s):                    2
>> NUMA node0 CPU(s):               0-79
>> NUMA node1 CPU(s):               80-159
>>
>> lspci | grep NVMe
>> 0003:01:00.0 Non-Volatile memory controller: Samsung Electronics Co 
>> Ltd NVMe SSD Controller SM981/PM981/PM983
>>
>> [root@ampere-mtjade-04 ~]# cat /sys/block/nvme1n1/device/device/numa_node 

Since it's an Ampere system, I guess it's SMMUv3.
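
A quick way to confirm which IOMMU driver is actually in use would be
something like:

ls /sys/class/iommu/
dmesg | grep -i -e smmu -e iommu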

BTW, if you remember, I did raise a performance issue with SMMUv3 and NVMe
before:
https://lore.kernel.org/linux-iommu/b2a6e26d-6d0d-7f0d-f222-589812f701d2@huawei.com/

I did have this series to improve performance for systems with lots of CPUs,
like the one above, but it was not accepted:
https://lore.kernel.org/linux-iommu/1598018062-175608-1-git-send-email-john.garry@huawei.com/

Thanks,
John



