Adding a new x86 image or related packages to the default x86 image

Daniel Golle daniel at makrotopia.org
Mon Nov 13 19:44:57 PST 2023


On Mon, Nov 13, 2023 at 06:26:04PM -0800, Elliott Mitchell wrote:
> On Mon, Nov 13, 2023 at 12:48:14PM +0000, Daniel Golle wrote:
> > On Mon, Nov 13, 2023 at 01:30:10PM +0100, Paul Spooren wrote:
> > > 
> > > How about we follow the approach of Alpine Linux[1] and offer a standard, an extended and a virtual firmware for the x86/64 target?
> > > 
> > > Which packages specifically is another discussion, but the approach could be that standard contains all kmods to get networking working on all devices, extended includes extra LED drivers etc., and virtual only contains network drivers for, well, virtual things.
> > 
> > +1
> > I like that much more than adding board-specific images on a platform
> > with standardized boot process (such as x86 or armsr).
> 
> Are you stating you're planning to modify OpenWRT's boot process to
> match the standard way of dealing with that standardized boot process?
> Mainly, using a minimal kernel and then using an initial ramdisk to
> load device drivers as appropriate to the hardware.

Using squashfs (which is what we are doing) actually has quite a
similar effect to using an initramfs: the filesystem cache of files
which aren't accessed gets freed.

What is missing is hotplug-based loading of kmods for the devices
actually present -- right now every module installed gets loaded and
remains loaded indefinitely, even if the hardware isn't there.
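
To illustrate, a minimal sketch (in Python for brevity; this is an
assumption of how it could work, not existing OpenWrt code) of
hotplug-style loading driven by the modalias entries the kernel
exposes in sysfs:

  #!/usr/bin/env python3
  # Hypothetical sketch: load only kmods matching hardware which is
  # actually present, by resolving each device's modalias through
  # modprobe instead of loading every installed module.
  import pathlib
  import subprocess

  for alias_file in pathlib.Path("/sys/devices").rglob("modalias"):
      try:
          alias = alias_file.read_text().strip()
      except OSError:
          continue
      if alias:
          # modprobe resolves the alias via modules.alias; -q keeps
          # it quiet when no installed module matches.
          subprocess.run(["modprobe", "-q", alias], check=False)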

> 
> Failing that, I suppose it would be acceptable to have an initial
> ramdisk which simply tried to load all modules.  Then it would be
> possible to remove unneeded modules later.

You can already do that, and the effect on memory consumption is the
same as with an initrd (which is literally just uncommitted filesystem
cache). One difference is that the initramfs needs to be decompressed
in one piece, which takes a lot of time, while squashfs can be read
and decompressed on-the-fly. Another is that an initramfs needs to be
explicitly freed (by deleting its contents when switching to the real
root), while with squashfs files can always be re-read from flash and
stay in the filesystem cache in RAM for as long as they are being used
and the space isn't needed for anything else.
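
As a rough illustration of the 'load everything, remove unneeded
modules later' idea, a hypothetical sketch (again an assumption, not
existing code; a zero reference count is only a heuristic, since some
modules are in use without holding references):

  #!/usr/bin/env python3
  # Hypothetical sketch: after boot, try to unload modules which no
  # device ended up using. Failures (module busy, no exit handler,
  # still needed) are simply ignored.
  import pathlib
  import subprocess

  for mod in pathlib.Path("/sys/module").iterdir():
      try:
          if (mod / "refcnt").read_text().strip() != "0":
              continue
      except OSError:
          continue  # built-in modules expose no refcnt
      subprocess.run(["rmmod", mod.name], check=False)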

> 
> 
> The real issue is VMs are unlikely to see devices typically present on
> bare metal computers.  Thermal capabilities?  Nope.  Processor frequency
> selection?  Nope.  Microcode reloading?  Nope.

Here I agree. We should have a 'slim/vm' image without any 'real' hardware
drivers.

> 
> Each hypervisor will have a small set of drivers guaranteed to be
> present.  These though will be completely different between hypervisors.

Do you really think having just the (partially shared) drivers for
three hypervisors (KVM/virtio, Hyper-V, VMware) present in one image
is too much? Those drivers are very tiny...

> 
> I don't know whether it is possible to omit all types of framebuffer from
> a Hyper-V VM.  If it isn't possible, then having a framebuffer driver
> will be required for Hyper-V, but this consumes somewhere around
> 0.5-1.5MB on any VM which can omit the framebuffer.

Framebuffer support can be built entirely as modules, and we could do
that instead of having it built-in on x86.
Feel free to suggest patches.
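
Something along these lines in the x86 kernel config (a hypothetical
fragment; these symbols are tristate in mainline, so =m is possible):

  CONFIG_FB=m
  CONFIG_DRM=m
  CONFIG_FB_HYPERV=m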

> 
> Meanwhile Xen's block device driver isn't even based on SCSI.  As a
> result it can completely remove the SCSI subsystem, which saves
> another 0.5-1.5MB on a Xen VM.

Ok, that would really require different kernel builds (as in:
subtargets) and not just different images: one for booting from all
kinds of physical storage, and one for booting only from virtualized
storage such as virtio block devices or NVMe passed through via an
IOMMU.
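
Roughly, the kernel config of such a 'virtual' subtarget could look
like this (hypothetical; the exact symbol set would need to be worked
out):

  # no SCSI/ATA stack at all, only virtualized block devices
  # CONFIG_SCSI is not set
  # CONFIG_ATA is not set
  CONFIG_VIRTIO_BLK=y
  CONFIG_BLK_DEV_NVME=y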

> 
> 10MB might not be much for a 2GB VM.  For a 128MB VM, those excess
> drivers are a distinct burden.
> 
> 
> I've got a WIP patch series for making distinct kernel builds rather
> less of a burden.  The series will need some significant testing.

I understand the additional build resources, maintenance and
debugging efforts can be justified when having a very high number of
identical devices, as is typically the case only in very large
deployments (think: major ISPs and hotspot providers with in-house
R&D).

However, OpenWrt (the distribution) supports thousands of different
devices, and that is possible only because all devices within a
subtarget share the exact same kernel build. Not just because of
build resources, but also because almost all testing and debugging is
covered at the subtarget level and hence we are talking about a
somewhat manageable workload -- one can reproduce and debug nearly
all issues affecting any of the (potentially hundreds of) devices in
a subtarget using a single or at most four reference devices.

OpenWrt (the build-system) could offer such a feature for people
wanting to create super-optimized builds themselves.


