[PATCH v3 9/9] KVM: selftests: Provide README.rst for KVM selftests runner

Sean Christopherson seanjc at google.com
Thu Oct 2 18:02:08 PDT 2025


On Thu, Oct 02, 2025, Marc Zyngier wrote:
> On Wed, 01 Oct 2025 18:32:25 +0100,
> > One can run these non-default tests as (assuming current directory is
> > kvm selftests):
> > 
> >   python3 runner -d ./tests
> > 
> > Over time we will add more of these non-default interesting
> > test cases. One can then run:
> > 
> >   python3 runner -d ./tests ./testcases_default_gen
> 
> That's not what I am complaining about. What you call "configuration"
> seems to just be "random set of parameters for a random test".

Hopefully s/random/interesting, but yes, the design of the runner is specifically
to support running tests with different parameters, and not much more (from a
configuration perspective).

> In practice, your runner does not seem configurable at all. You just
> treat all possible configurations of a single test as different tests.
> 
> My (admittedly very personal) view of what a configuration should be
> is "run this single test with these parameters varying in these
> ranges, for this long".

Ya, but personal preference is precisely why we kept the runner fairly minimal.
The goal is to provide:

 1. A way to upstream non-standard test invocations so that they can be shared
    with others, and to improve the coverage provided when developers just run
    whatever tests are upstream (which probably covers most contributions?).

 2. "Basic" functionality so that each developer doesn't have to reinvent
    the wheel.

    E.g. I have a (horrific) bash script to run selftests in parallel, and while
    it works well enough for my purposes, it's far from perfect, e.g. there are
    no timeouts, it's super hard to see what tests are still running, the logging
    is hacky, etc.
 
    The idea with this runner is to deal with those low-level details that are
    painful to implement from scratch, and that generally don't require foisting
    a highly opinionated view on anyone.  E.g. if someone really doesn't want to
    see certain output, or wants to fully serialize tests, it's easy to do so.
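    FWIW, the core of that low-level plumbing (parallel execution with per-test
    timeouts) fits in a few lines of Python.  Purely a toy sketch, not the
    runner's actual implementation; the timeout value and commands are made up:

    ```python
    # Toy sketch of running test commands in parallel with per-test
    # timeouts -- the kind of plumbing that's painful to get right in bash.
    import subprocess
    from concurrent.futures import ThreadPoolExecutor

    TIMEOUT_SECS = 120  # arbitrary per-test timeout, for illustration only

    def run_one(cmd):
        """Run one test command, return (cmd-as-tuple, status string)."""
        try:
            proc = subprocess.run(cmd, capture_output=True, timeout=TIMEOUT_SECS)
            return tuple(cmd), "PASS" if proc.returncode == 0 else "FAIL"
        except subprocess.TimeoutExpired:
            return tuple(cmd), "TIMEOUT"

    def run_all(cmds, jobs=4):
        """Run commands in parallel, capping concurrency at 'jobs'."""
        with ThreadPoolExecutor(max_workers=jobs) as pool:
            return dict(pool.map(run_one, cmds))
    ```

    The real runner obviously needs much more (log capture, live status,
    cleanup on interrupt), which is exactly the point of centralizing it.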
   
 3. Tooling that power users (and hopefully CI?) can build on, e.g. via wrapper
    scripts, or something even fancier, again without having to be too opinionated.

    E.g. thanks to the myriad module params in x86, I run all selftests with 5-6
    different versions of KVM (by unloading and reloading KVM modules).  We
    deliberately chose not to allow specifying module params or sysfs knobs as
    part of the runner, because:

        (a) Handling system-wide changes in a runner gets nasty because of the
            need to express and track dependencies/conflicts.
        (b) It's easy (or should be easy) to query dependencies in selftests.
        (c) Selftests need to query them anyways, e.g. to avoid failure when
            run with a "bad configuration".
        (d) Permuting on system-wide things outside of the runner isn't terribly
            difficult (and often requires elevated privileges).
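As a (hypothetical) illustration of (d), a wrapper can permute module params
outside the runner.  The module and param names below are made up, and the
commands are only generated, not executed:

```python
# Sketch of an out-of-runner wrapper that expands a module-param grid
# into reload + runner invocations.  Names are illustrative only.
import itertools

def reload_cmds(module, params):
    """Command lists for one reload cycle plus a runner invocation."""
    return [
        ["modprobe", "-r", module],
        ["modprobe", module] + [f"{k}={v}" for k, v in params.items()],
        ["python3", "runner", "-d", "./tests"],
    ]

def param_permutations(module, grid):
    """Expand a param grid ({name: [values]}) into per-combo commands."""
    keys = list(grid)
    for values in itertools.product(*(grid[k] for k in keys)):
        yield reload_cmds(module, dict(zip(keys, values)))
```

Each command list would then be fed to subprocess.run() (with elevated
privileges for modprobe), i.e. the permutation happens entirely outside
the runner.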

So yeah, there are definitely limitations, but for the most part they are self-
imposed.  Partly to avoid boiling the ocean in the initial version (e.g. many
tests won't benefit from running with a range of values/parameters), but also so
that we don't end up in a situation where the runner only suits the needs of a
few people, e.g. because it's too opinionated and/or tailored to certain use cases.

I'm definitely not against providing more functionality/flexibility in the future,
but for a first go I'd like to stick to a relatively minimal implementation.


