[RFC] A change to the way packages are built

David Lang david at lang.hm
Wed May 4 18:20:31 PDT 2016


On Wed, 4 May 2016, Daniel Dickinson wrote:

>>> Thoughts?
>>
>> The problem I expect you to run into is dependencies between packages.
>
> Yes, although I was primarily thinking of it from the angle of a package
> failing because a package it depends on fails, or has a default
> configuration that doesn't work with the package (i.e. the reverse of
> what you mention).
>
>> You really want to have multiple stages
>>
>> 1. compile the package in isolation (avoid any "won't build" or "won't
>> pass self-test" situations)
>
> This is definitely one of the major reasons I want to do this - instead
> of having huge logs from the buildbots to wade through to find problems
> (plus having to build *everything* to get the buildbot logs), a subset
> of easily found errors which could show up quickly (i.e. kind of like
> launchpad's PPA system, where you submit a package to build and pretty
> quickly see if there is an issue). Although IMO if a build fails
> egregiously (i.e. the committer/patcher would have caught it if they had
> actually built on at least one arch) then the committer would have to
> explain why, and if it happens 'too often' face potential suspension of
> commit privileges. It shouldn't be a substitute for testing; it's more
> to catch archs or configuration options the patcher didn't realize
> needed to be tried.
>
>>
>> 2. compile all the packages that depend on this package, see if their
>> self-tests still work.
>
> The self-test part only works if packages have self-tests and you're
> building on the same arch the package is for, but at least verifying
> that the dependent packages compile would be a good step.
>
>>
>> 3. compile and test everything together to catch conflicts between
>> package A and package B both making changes that end up conflicting when
>> building package C.
>
> Ok, I can see that this is a case for at least having full builds of
> everything, even if they happen less frequently than they would without
> this.
>
>>
>> automated tests for #1 would be a good start.
>
> Agreed (although not so much the self-tests since mostly we're
> cross-compiling and actually running the code is not normally an option).

Debian has the pbuilder mechanism, where you define a base image to use 
(which can point at a repo of packages); your builds then take this image, 
install the things needed to build, build, and save the results, and when 
they exit, all changes to the image are thrown away.

This sort of thing would let you build any package against all the 'default' 
versions (which could be either the last release or last night's builds, 
depending on what repo you point at).

The debian pbuilder tool lets you compile for many different versions of 
Debian/Ubuntu on a single system, so something along those lines should be able 
to do the cross-compile environments as needed.

It's also possible to fire it up with an extra local repo, so when you are 
building several packages that are intertwined, they will use the new versions 
of the other packages you are building.

I use this when testing rsyslog and related library updates.
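For anyone who hasn't used it, the basic pbuilder workflow described above 
looks roughly like this (the distribution, paths, and package names here are 
just placeholders):

```shell
# Create the base image once; point it at whatever repo you want to
# build against
sudo pbuilder create --distribution sid \
    --basetgz /var/cache/pbuilder/sid.tgz

# Build a source package in a throwaway copy of that image: build-deps
# are installed fresh, the results are saved out, and on exit all
# changes to the image are discarded
sudo pbuilder build --basetgz /var/cache/pbuilder/sid.tgz \
    --buildresult ./results mypackage_1.0-1.dsc

# Teach the image about an extra local repo, so intertwined packages
# pick up each other's freshly built versions instead of released ones
sudo pbuilder update --override-config \
    --basetgz /var/cache/pbuilder/sid.tgz \
    --othermirror "deb [trusted=yes] file:///home/me/local-repo ./"
```

Separate --basetgz images are how you get multiple Debian/Ubuntu versions 
(and, in principle, cross-compile environments) on one system.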


If something like this can be put together, I would require that everything 
either be explicitly approved, or pass a "unit-test" stand-alone build before it 
gets thrown in the hopper for an attempt at a combined build.



For the combined build, in an ideal world I'd try to build with everything 
submitted, and then, if the combined build doesn't work, bisect down to find 
the item that breaks it.
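The bisection step is simple enough to sketch. Here's a rough illustration 
(not anything implemented); `builds_ok` is a stand-in for whatever actually 
runs a combined build, stubbed out so that a hypothetical bad change "pkg-c" 
always breaks it, and it assumes a single change is responsible:

```python
def builds_ok(changes):
    # stub for a real combined build; "pkg-c" is the pretend culprit
    return "pkg-c" not in changes

def find_breaking_change(changes):
    """Bisect a failing set of submitted changes down to one culprit.

    Assumes exactly one change breaks the build on its own."""
    assert not builds_ok(changes)
    while len(changes) > 1:
        half = len(changes) // 2
        first, second = changes[:half], changes[half:]
        # rebuild with only the first half; if it still fails the
        # culprit is in there, otherwise it must be in the second half
        changes = first if not builds_ok(first) else second
    return changes[0]

print(find_breaking_change(["pkg-a", "pkg-b", "pkg-c", "pkg-d"]))  # pkg-c
```

With N submitted changes this needs on the order of log2(N) combined builds 
rather than N stand-alone ones.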


If it's possible to auto-flash/pxeboot some devices to run the new build and do 
real tests on them, so much the better.

David Lang



More information about the Lede-dev mailing list