[PATCH 00/16] IOMMU memory observability

Pasha Tatashin pasha.tatashin at soleen.com
Tue Nov 28 14:31:54 PST 2023


On Tue, Nov 28, 2023 at 4:34 PM Yosry Ahmed <yosryahmed at google.com> wrote:
>
> On Tue, Nov 28, 2023 at 12:49 PM Pasha Tatashin
> <pasha.tatashin at soleen.com> wrote:
> >
> > From: Pasha Tatashin <tatashin at google.com>
> >
> > The IOMMU subsystem may hold gigabytes of state, the majority of it
> > in IOMMU page tables. Yet there is currently no way to observe how
> > much memory the IOMMU subsystem actually uses.
> >
> > This patch series solves the problem by adding both observability of
> > all pages allocated by the IOMMU and accountability, so that admins
> > can limit the amount used via cgroups.
> >
> > System-wide observability is via /proc/meminfo:
> > SecPageTables:    438176 kB
> >
> > Contains IOMMU and KVM memory.
> >
> > Per-node observability:
> > /sys/devices/system/node/nodeN/meminfo
> > Node N SecPageTables:    422204 kB
> >
> > Contains IOMMU and KVM memory in the given NUMA node.
> >
> > Per-node IOMMU-only observability:
> > /sys/devices/system/node/nodeN/vmstat
> > nr_iommu_pages 105555
> >
> > Contains the number of pages the IOMMU allocated in the given node.
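
A minimal userspace sketch of reading these counters, for illustration
only; it assumes the plain-text line formats shown above and uses node0
as an example node:

/*
 * Read a named counter from a /proc or /sys text file, e.g.
 * "SecPageTables:" from /proc/meminfo (kB) or "nr_iommu_pages"
 * from /sys/devices/system/node/node0/vmstat (pages).
 */
#include <stdio.h>
#include <string.h>

static long read_counter(const char *path, const char *name)
{
	char line[256];
	long val = -1;
	FILE *f = fopen(path, "r");

	if (!f)
		return -1;
	while (fgets(line, sizeof(line), f)) {
		if (!strncmp(line, name, strlen(name))) {
			sscanf(line + strlen(name), " %ld", &val);
			break;
		}
	}
	fclose(f);
	return val;
}

int main(void)
{
	printf("SecPageTables (kB):     %ld\n",
	       read_counter("/proc/meminfo", "SecPageTables:"));
	printf("nr_iommu_pages (node0): %ld\n",
	       read_counter("/sys/devices/system/node/node0/vmstat",
			    "nr_iommu_pages"));
	return 0;
}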
>
> Does it make sense to have a KVM-only entry there as well?
>
> In that case, if SecPageTables in /proc/meminfo is found to be
> suspiciously high, it should be easy to tell which component is
> contributing most usage through vmstat. I understand that users can do
> the subtraction, but we wouldn't want userspace depending on that, in
> case a third class of "secondary" page tables emerges that we want to
> add to SecPageTables. The in-kernel implementation can do the
> subtraction for now if it makes sense though.

Hi Yosry,

Yes, another counter for KVM could be added. On the other hand, the
KVM-only portion can be computed by subtracting one counter from the
other, since there are only two types of secondary page tables, KVM and
IOMMU:

/sys/devices/system/node/node0/meminfo
Node 0 SecPageTables:    422204 kB

/sys/devices/system/node/node0/vmstat
nr_iommu_pages 105555

KVM only (kB) = SecPageTables - nr_iommu_pages * PAGE_SIZE / 1024
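
As a sketch of that arithmetic (assuming the page size returned by
sysconf(_SC_PAGESIZE) matches the kernel PAGE_SIZE):

#include <unistd.h>

/*
 * SecPageTables is reported in kB and nr_iommu_pages in pages, so the
 * KVM-only share in kB is the difference below.
 */
static long kvm_only_kb(long sec_pagetables_kb, long nr_iommu_pages)
{
	long page_kb = sysconf(_SC_PAGESIZE) / 1024;

	return sec_pagetables_kb - nr_iommu_pages * page_kb;
}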

Pasha

>
> >
> > Accountability: using sec_pagetables cgroup-v2 memory.stat entry.
> >
> > With the change, iova_stress[1] produces the following output:
> >
> > # ./iova_stress
> > iova space:     0T      free memory:   497G
> > iova space:     1T      free memory:   495G
> > iova space:     2T      free memory:   493G
> > iova space:     3T      free memory:   491G
> >
> > The test stops once the limit is reached.
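
For the accountability side, a sketch of reading a cgroup's
secondary-page-table charge, assuming cgroup v2 is mounted at
/sys/fs/cgroup and that memory.stat reports sec_pagetables in bytes
(the group name below is only an example):

#include <stdio.h>

/*
 * Return the sec_pagetables entry (bytes) from a cgroup-v2 memory.stat
 * file, e.g. /sys/fs/cgroup/mygroup/memory.stat, or -1 on error.
 */
static long long sec_pagetables_bytes(const char *memory_stat_path)
{
	char line[256];
	long long bytes = -1;
	FILE *f = fopen(memory_stat_path, "r");

	if (!f)
		return -1;
	while (fgets(line, sizeof(line), f)) {
		if (sscanf(line, "sec_pagetables %lld", &bytes) == 1)
			break;
	}
	fclose(f);
	return bytes;
}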
> >
> > This series incorporates suggestions that came from the discussion
> > at LPC [2].
> >
> > [1] https://github.com/soleen/iova_stress
> > [2] https://lpc.events/event/17/contributions/1466
> >
> > Pasha Tatashin (16):
> >   iommu/vt-d: add wrapper functions for page allocations
> >   iommu/amd: use page allocation function provided by iommu-pages.h
> >   iommu/io-pgtable-arm: use page allocation function provided by
> >     iommu-pages.h
> >   iommu/io-pgtable-dart: use page allocation function provided by
> >     iommu-pages.h
> >   iommu/io-pgtable-arm-v7s: use page allocation function provided by
> >     iommu-pages.h
> >   iommu/dma: use page allocation function provided by iommu-pages.h
> >   iommu/exynos: use page allocation function provided by iommu-pages.h
> >   iommu/fsl: use page allocation function provided by iommu-pages.h
> >   iommu/iommufd: use page allocation function provided by iommu-pages.h
> >   iommu/rockchip: use page allocation function provided by iommu-pages.h
> >   iommu/sun50i: use page allocation function provided by iommu-pages.h
> >   iommu/tegra-smmu: use page allocation function provided by
> >     iommu-pages.h
> >   iommu: observability of the IOMMU allocations
> >   iommu: account IOMMU allocated memory
> >   vhost-vdpa: account iommu allocations
> >   vfio: account iommu allocations
> >
> >  Documentation/admin-guide/cgroup-v2.rst |   2 +-
> >  Documentation/filesystems/proc.rst      |   4 +-
> >  drivers/iommu/amd/amd_iommu.h           |   8 -
> >  drivers/iommu/amd/init.c                |  91 +++++-----
> >  drivers/iommu/amd/io_pgtable.c          |  13 +-
> >  drivers/iommu/amd/io_pgtable_v2.c       |  20 +-
> >  drivers/iommu/amd/iommu.c               |  13 +-
> >  drivers/iommu/dma-iommu.c               |   8 +-
> >  drivers/iommu/exynos-iommu.c            |  14 +-
> >  drivers/iommu/fsl_pamu.c                |   5 +-
> >  drivers/iommu/intel/dmar.c              |  10 +-
> >  drivers/iommu/intel/iommu.c             |  47 ++---
> >  drivers/iommu/intel/iommu.h             |   2 -
> >  drivers/iommu/intel/irq_remapping.c     |  10 +-
> >  drivers/iommu/intel/pasid.c             |  12 +-
> >  drivers/iommu/intel/svm.c               |   7 +-
> >  drivers/iommu/io-pgtable-arm-v7s.c      |   9 +-
> >  drivers/iommu/io-pgtable-arm.c          |   7 +-
> >  drivers/iommu/io-pgtable-dart.c         |  37 ++--
> >  drivers/iommu/iommu-pages.h             | 231 ++++++++++++++++++++++++
> >  drivers/iommu/iommufd/iova_bitmap.c     |   6 +-
> >  drivers/iommu/rockchip-iommu.c          |  14 +-
> >  drivers/iommu/sun50i-iommu.c            |   7 +-
> >  drivers/iommu/tegra-smmu.c              |  18 +-
> >  drivers/vfio/vfio_iommu_type1.c         |   8 +-
> >  drivers/vhost/vdpa.c                    |   3 +-
> >  include/linux/mmzone.h                  |   5 +-
> >  mm/vmstat.c                             |   3 +
> >  28 files changed, 415 insertions(+), 199 deletions(-)
> >  create mode 100644 drivers/iommu/iommu-pages.h
> >
> > --
> > 2.43.0.rc2.451.g8631bc7472-goog
> >
> >


