[PATCH v3 9/9] KVM: selftests: Provide README.rst for KVM selftests runner
Brendan Jackman
jackmanb at google.com
Fri Oct 10 02:58:46 PDT 2025
On Tue Sep 30, 2025 at 4:36 PM UTC, Vipin Sharma wrote:
> Add README.rst for KVM selftest runner and explain how to use the
> runner.
>
> Signed-off-by: Vipin Sharma <vipinsh at google.com>
> ---
> tools/testing/selftests/kvm/.gitignore | 1 +
> tools/testing/selftests/kvm/runner/README.rst | 54 +++++++++++++++++++
> 2 files changed, 55 insertions(+)
> create mode 100644 tools/testing/selftests/kvm/runner/README.rst
>
> diff --git a/tools/testing/selftests/kvm/.gitignore b/tools/testing/selftests/kvm/.gitignore
> index 548d435bde2f..83aa2fe01bac 100644
> --- a/tools/testing/selftests/kvm/.gitignore
> +++ b/tools/testing/selftests/kvm/.gitignore
> @@ -4,6 +4,7 @@
> !*.c
> !*.h
> !*.py
> +!*.rst
> !*.S
> !*.sh
> !*.test
> diff --git a/tools/testing/selftests/kvm/runner/README.rst b/tools/testing/selftests/kvm/runner/README.rst
> new file mode 100644
> index 000000000000..83b071c0a0e6
> --- /dev/null
> +++ b/tools/testing/selftests/kvm/runner/README.rst
> @@ -0,0 +1,54 @@
> +KVM Selftest Runner
> +===================
> +
> +The KVM selftest runner is a highly configurable test executor. It can run
> +tests with different configurations (not just the default), run them in
> +parallel, save output to disk hierarchically, control what gets printed on
> +the console, and report execution status.
> +
> +To generate default tests use::
> +
> + # make tests_install
> +
> +This will create a ``testcases_default_gen`` directory containing testcases
> +in ``default.test`` files. Each KVM selftest will have a directory in which a
> +``default.test`` file is created with the executable path relative to the KVM
> +selftest root directory, i.e. ``/tools/testing/selftests/kvm``. For example,
> +``dirty_log_perf_test`` will have::
> +
> + # cat testcases_default_gen/dirty_log_perf_test/default.test
> + dirty_log_perf_test
> +
> +The runner will execute ``dirty_log_perf_test``. Testcase files can also
> +provide extra arguments to the test::
> +
> + # cat tests/dirty_log_perf_test/2slot_5vcpu_10iter.test
> + dirty_log_perf_test -x 2 -v 5 -i 10
> +
> +In this case the runner will execute ``dirty_log_perf_test`` with those options.
> +
> +Examples
> +========
> +
> +To see all of the options::
> +
> + # python3 runner -h
> +
> +To run all of the default tests::
> +
> + # python3 runner -d testcases_default_gen
> +
> +To run tests in parallel::
> +
> + # python3 runner -d testcases_default_gen -j 40
> +
> +To print only the status of passed tests and the stderr of failed tests::
> +
> + # python3 runner -d testcases_default_gen --print-passed status \
> + --print-failed stderr
> +
> +To run test binaries that live in some other directory (out-of-tree builds)::
> +
> + # python3 runner -d testcases_default_gen -p /path/to/binaries
I understand that for reasons of velocity it might make sense to do this
as a KVM-specific thing, but IIUC very little of this has anything to do
with KVM in particular, right? Is there an expectation to evolve in a
more KVM-specific direction?
(One thing that might be KVM-specific is the concurrency. I assume there
are a bunch of KVM tests that are pretty isolated from one another and
reasonable to run in parallel. Testing _the_ mm like that just isn't
gonna work most of the time. I still think this is really specific to
individual sets of tests though, in a more mature system there would be
a metadata mechanism for marking tests as parallelisable wrt each other.
I guess this patchset is part of an effort to have a more mature system
that enables that kind of thing.).
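To be concrete about what I mean by a metadata mechanism — this is purely
hypothetical, nothing like it exists in this series, and the ``# parallel:``
key is invented for illustration — a .test file could carry a comment that
the runner parses alongside the command line:

```python
# Hypothetical sketch: extract an optional "# parallel:" metadata
# comment from a .test file. The key name and format are invented
# here; the posted series only puts a command line in these files.
def parse_testcase(text):
    """Return (command, parallel_ok) for the given .test file contents."""
    parallel_ok = True   # default: safe to run alongside other tests
    command = None
    for line in text.splitlines():
        line = line.strip()
        if line.startswith("# parallel:"):
            parallel_ok = line.split(":", 1)[1].strip() == "yes"
        elif line and not line.startswith("#"):
            command = line   # e.g. "dirty_log_perf_test -x 2 -v 5 -i 10"
    return command, parallel_ok

cmd, ok = parse_testcase("# parallel: no\ndirty_log_perf_test -i 10\n")
```

The scheduler could then serialise any testcase whose file opts out, while
still fanning the rest out across -j workers.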
To avoid confusing people and potentially leave the door open to a
cleaner integration, please can you add some bits here about how this
relates to the rest of the kselftest infrastructure? Some questions I
think are worth answering:
- As someone who runs KVM selftests, but doesn't work specifically on
KVM, to what extent do I need to know about this tool? Can I still run
the selftests "the old fashioned way" and if so what do I lose as
compared to using the KVM runner?
- Does this system change the "data model" of the selftests at all, and
if so how? I.e. I think (but honestly I'm not sure) that kselftests
are a 2-tier hierarchy of $suite:$test without any further
parameterisation or nesting (where there is more detail, it's hidden
as implementation details of individual $tests). Do the KVM selftests
have this structure? If it differs, how does that affect the view from
run_kselftest.sh?
- I think (again, not very sure) that in kselftest that each $test is a
command executing a process. And this process communicates its status
by printing KTAP and returning an exit code. Is that stuff the same
for this runner?
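(By KTAP I mean the usual contract from the kernel's KTAP spec, roughly:

```
KTAP version 1
1..2
ok 1 first_test
not ok 2 second_test
```

i.e. a version line, a plan, and one ok/not ok result line per test, with the
process exit code signalling overall success.)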
More information about the kvm-riscv mailing list