[PATCH 0/8] clk: Add kunit tests for fixed rate and parent data

Frank Rowand frowand.list at gmail.com
Mon Mar 13 08:30:40 PDT 2023


On 3/10/23 01:48, David Gow wrote:
> On Sat, 4 Mar 2023 at 23:50, Frank Rowand <frowand.list at gmail.com> wrote:
>>
>> On 3/1/23 19:38, Stephen Boyd wrote:
>>> This patch series adds unit tests for the clk fixed rate basic type and
>>> the clk registration functions that use struct clk_parent_data. To get
>>> there, we add support for loading a DTB into the UML kernel that's
>>> running the unit tests along with probing platform drivers to bind to
>>> device nodes specified in DT.
>>>
>>> With this series, we're able to exercise some of the code in the common
>>> clk framework that uses devicetree lookups to find parents and the fixed
>>> rate clk code that scans devicetree directly and creates clks. Please
>>> review.
>>
>> I would _really_ like to _not_ have devicetree tests in two locations:
>> DT unittests and kunit tests.
>>
> 

This:

> I agree we don't want to split things up needlessly, but I think there
> is a meaningful distinction between:
> - Testing the DT infrastructure itself (with DT unittests)
> - Testing a driver which may have some interaction with DT (via KUnit)

> 
> So, rather than going for a "devicetree" KUnit suite (unless we wanted
> to port OF_UNITTEST to KUnit, which as you point out, would involve a
> fair bit of reworking), I think the goal is for there to be lots of
> driver test suites, each of which may verify that their specific
> properties can be loaded from the devicetree correctly.
> 
> This is also why I prefer the overlay method, if we can get it to
> work: it makes it clearer that the organisational hierarchy for these
> tests is [driver]->[devicetree], not [devicetree]->[driver].
> 
>> For my testing, I already build and boot four times on real hardware:
>>
>>   1) no DT unittests
>>   2) CONFIG_OF_UNITTEST
>>   3) CONFIG_OF_UNITTEST
>>      CONFIG_OF_DYNAMIC
>>   4) CONFIG_OF_UNITTEST
>>      CONFIG_OF_DYNAMIC
>>      CONFIG_OF_OVERLAY
>>
>> I really should also be testing the four configurations on UML, but at
>> the moment I am not.
>>
>> I also check for new compile warnings at various warn levels for all
>> four configurations.
>>
>> If I recall correctly, the kunit framework encourages more (many more?)
>> kunit config options to select which test(s) are built for a test run.
>> Someone please correct this paragraph if I am misstating it.
> 
> We do tend to suggest that there is a separate kconfig option for each
> area being tested (usually one per test suite, but if there are
> several closely related suites, sticking them under a single config
> option isn't a problem.)
> 
> That being said:
> - It's possible (and encouraged) to just test once with all of those
> tests enabled, rather than needing to test every possible combination
> of configs enabled/disabled.
> - (Indeed, this is what we do with .kunitconfig files a lot: they're
> collections of related configs, so you can quickly run, e.g., all DRM
> tests)
> - Because a KUnit test being run is an independent action from it
> being built-in, it's possible to build the tests once and then just
> run different subsets anyway, or possibly run them after boot if
> they're compiled as modules.
> - This, of course, depends on two test configs not conflicting with
> each other: obviously if there were some tests which relied on
> OF_OVERLAY=n, and others which require OF_OVERLAY=y, you'd need two
> builds.
> 
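For concreteness, the .kunitconfig mechanism mentioned above is just a
config fragment listing the options a group of suites needs; a sketch for
the clk suites might look like the following (the exact CONFIG_* names are
my assumption based on drivers/clk/Kconfig, so treat them as illustrative):

```
# Sketch of a .kunitconfig fragment collecting the clk KUnit suites.
# The CONFIG_CLK_*_KUNIT_TEST names are assumptions, not authoritative.
CONFIG_KUNIT=y
CONFIG_COMMON_CLK=y
CONFIG_CLK_KUNIT_TEST=y
CONFIG_CLK_GATE_KUNIT_TEST=y
```

All of the listed suites can then be built and run in a single pass with
something like ./tools/testing/kunit/kunit.py run --kunitconfig=<path>,
rather than one build-and-boot cycle per config option.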

And this:

> The bigger point is that, if the KUnit tests are focused on individual
> drivers, rather than the devicetree infrastructure itself, then these
> probably aren't as critical to run on every devicetree change (the DT
> unittests should hopefully catch anything which affects devicetree as
> a whole), but only on tests which affect a specific driver (as they're
> really intended to make sure the drivers are accessing / interacting
> with the DT properly, not that the DT infrastructure functions).

Those two paragraphs are correct, and my original assumption was wrong.

These tests appear to be mostly clock related, and only minimally and
indirectly test devicetree functionality.  In more general terms, they
are driver tests, not devicetree tests.

Thus I withdraw my concern about making the devicetree test environment
more complicated.

> 
> And obviously if this KUnit/devicetree support ends up depending on
> overlays, that means there's no need to test them with overlays
> disabled. :-)
> 
>>
>> Adding devicetree tests to kunit adds additional build and boot cycles
>> and additional test output streams to verify.
>>
>> Are there any issues with DT unittests that preclude adding clk tests
>> into the DT unittests?
>>
> 
> I think at least part of it is that there are already some clk KUnit
> tests, so it's easier to have all of the clk tests behave similarly
> (for the same reasons, alas, as using DT unittests makes it easier to
> keep all of the DT tests in the same place).
> 

> Of course, as DT unittests move to KTAP, and possibly in the future
> are able to make use of more KUnit infrastructure, this should get
> simpler for everyone.

I hope to move DT unittests to produce KTAP V2 compatible output as a
first step.

I highly doubt that DT unittests fit the kunit model, but that would
be a question that could be considered after DT unittests move to the
KTAP V2 data format.
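For anyone following along who hasn't looked at the format: KTAP version 1
(as documented in Documentation/dev-tools/ktap.rst) nests each suite's
results under an indented version line, roughly like the sketch below.  The
V2 details are still being worked out, so this is only illustrative of the
general shape:

```
KTAP version 1
1..1
    KTAP version 1
    1..2
    ok 1 test_case_one
    ok 2 test_case_two
ok 1 example_suite
```

The point of the migration is that output in this shape can be consumed by
the same KTAP parsers the kunit tooling already uses, independent of
whether DT unittests ever adopt the rest of the kunit framework.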

> 
> 
> Does that seem sensible?

Yes, thanks for the extra explanations.

> 
> -- David



