Adding a new x86 image or related packages to the default x86 image

Elliott Mitchell ehem+openwrt at m5p.com
Mon Nov 13 21:44:02 PST 2023


On Tue, Nov 14, 2023 at 03:44:57AM +0000, Daniel Golle wrote:
> On Mon, Nov 13, 2023 at 06:26:04PM -0800, Elliott Mitchell wrote:
> > On Mon, Nov 13, 2023 at 12:48:14PM +0000, Daniel Golle wrote:
> > > On Mon, Nov 13, 2023 at 01:30:10PM +0100, Paul Spooren wrote:
> > > > 
> > > > How about we follow the approach of Alpine Linux[1] and offer a standard, an extended and a virtual firmware for the x86/64 target?
> > > > 
> > > > What packages specifically is another discussion but the approach could be that standard contains all kmods to get network working on all device, extended includes extra LED drivers etc and virtual only contains network drivers for, well, virtual things.
> > > 
> > > +1
> > > I like that much more than adding board-specific images on a platform
> > > with standardized boot process (such as x86 or armsr).
> > 
> > Are you stating you're planning to modify OpenWRT's boot process to
> > match the standard way of dealing with that standardized boot process?
> > Namely, using a minimal kernel and then an initial ramdisk to
> > load device drivers as appropriate to the hardware.
> 
> Using squashfs (which is what we are doing) actually has quite
> a similar effect to using an initramfs.  Filesystem cache of files
> which aren't accessed gets freed.
> 
> What is missing is hotplug-based loading of kmods based on present
> devices -- right now every module present gets loaded and remains
> loaded indefinitely even if the hardware isn't present.

First, an initial ramdisk allows the kernel to not include any block
drivers, but instead load them during boot.  E.g. a VM build could include
drivers for interacting with every hypervisor, but only load the ones
for the hypervisor in use.

Second, while suboptimal, having those drivers as modules allows them to
be unloaded.  If the drivers for every hypervisor were unconditionally
loaded, the inappropriate ones could then be unloaded by /etc/rc.local.
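As a sketch of that second point -- the module names and the DMI-based
detection below are illustrative assumptions, not anything OpenWRT ships
today:

```shell
#!/bin/sh
# Hypothetical /etc/rc.local fragment: unload guest drivers for the
# hypervisors we are NOT running under.  Module names and the
# sysfs-DMI detection are assumptions for illustration.

# Print the modules worth keeping for a given DMI vendor string.
modules_for() {
    case "$1" in
        *QEMU*|*KVM*) echo "virtio_net virtio_blk" ;;
        *Microsoft*)  echo "hv_netvsc hv_storvsc" ;;
        *VMware*)     echo "vmxnet3 vmw_pvscsi" ;;
        *)            echo "" ;;
    esac
}

keep=$(modules_for "$(cat /sys/class/dmi/id/sys_vendor 2>/dev/null)")

for mod in virtio_net virtio_blk hv_netvsc hv_storvsc vmxnet3 vmw_pvscsi; do
    case " $keep " in
        *" $mod "*) ;;                 # needed on this hypervisor, keep it
        *) rmmod "$mod" 2>/dev/null ;; # not needed here, try to unload
    esac
done
```

The same table-driven approach would work from a hotplug handler instead
of rc.local; this just shows the smallest version of the idea.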


> > Each hypervisor will have a small set of drivers guaranteed to be
> > present.  These though will be completely different between hypervisors.
> 
> Do you really think having just the (partially shared) drivers for 3
> hypervisors (KVM/virtio, Hyper-V, VMWare) present in one image is too
> much? Those drivers are very tiny...

Permanently built into the kernel?  Not acceptable.

Having a single shared initial ramdisk which always loads all modules
for every hypervisor?  Acceptable.

Tiny is relative.  For a bare-metal computer with 128GB of memory, 10MB
isn't going to make too much difference.  For a 128MB VM, 10MB does make
a significant difference.
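For concreteness, the arithmetic behind that comparison, using the 10MB
figure from above:

```shell
# Rough arithmetic behind "tiny is relative": 10MB of always-resident
# drivers as a fraction of total RAM, for the two machines above.
awk 'BEGIN {
    printf "128 GB host: %.3f%% of RAM\n", 10 / (128 * 1024) * 100
    printf "128 MB VM:   %.1f%% of RAM\n", 10 / 128 * 100
}'
```

Under 0.01% of RAM on the large host, but nearly 8% of the small VM.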


> > 10MB might not be much for a 2GB VM.  For a 128MB VM, those excess
> > drivers are a distinct burden.
> > 
> > 
> > I've got a WIP patch series for making distinct kernel builds rather
> > less of a burden.  The series will need some significant testing.

> However, OpenWrt (the distribution) supports thousands of different
> devices, and that becomes possible only because all devices within a
> subtarget share the exact same kernel build. Not just because of
> build resources, but also because almost all testing and debugging is
> covered at the subtarget level and hence we are talking about a
> somewhat manageable workload -- one can reproduce and debug
> most issues on any of the devices (can be hundreds) in a subtarget
> using a single or at most 4 different reference devices.

Hmm.

As stated above, having everything as a module is acceptable to me.  The
issue here is there isn't much use of modules in OpenWRT.


On Mon, Nov 13, 2023 at 09:33:38PM -0700, Philip Prindeville wrote:
> 
> > On Nov 13, 2023, at 7:26 PM, Elliott Mitchell <ehem+openwrt at m5p.com> wrote:
> > 
> > Each hypervisor will have a small set of drivers guaranteed to be
> > present.  These though will be completely different between hypervisors.
> 
> With KVM and kmod-vfio-pci you can do reverse-pass thru where the host isn't controlling the hardware but the guest is.  I know some people who do this to test WiFi drivers from KVM guests.
> 

I haven't tested this with every hypervisor, but I'm pretty sure every
hypervisor of note can do this.  The configuration steps are different,
but the result is exactly the same.
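For reference, the KVM/vfio-pci variant Philip mentions boils down to a
few host-side sysfs writes along these lines -- the PCI address
0000:03:00.0 is a made-up example, though the paths are the standard
Linux vfio interface:

```shell
# Host side: detach an example PCI WiFi card from its native driver
# and hand it to vfio-pci so a guest can own it.  0000:03:00.0 is a
# placeholder address; find the real one with `lspci -D`.
DEV=0000:03:00.0

modprobe vfio-pci
echo vfio-pci > /sys/bus/pci/devices/$DEV/driver_override
echo "$DEV"   > /sys/bus/pci/devices/$DEV/driver/unbind
echo "$DEV"   > /sys/bus/pci/drivers_probe

# Guest side (QEMU/KVM example):
#   qemu-system-x86_64 ... -device vfio-pci,host=03:00.0
```

Other hypervisors expose the same capability through their own tooling
(Xen's pciback, Hyper-V's DDA, VMware DirectPath), which is why the
configuration differs but the result doesn't.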

You're thinking too small.  This can be used for testing, but it is
also entirely feasible for running a production AP in a VM.

As stated previously, the makers of embedded access points have been
suggesting pushing server tasks onto your AP.  Yet the opposite is also
possible: a server can have all the hardware for a full AP.


-- 
(\___(\___(\______          --=> 8-) EHM <=--          ______/)___/)___/)
 \BS (    |         ehem+sigmsg at m5p.com  PGP 87145445         |    )   /
  \_CS\   |  _____  -O #include <stddisclaimer.h> O-   _____  |   /  _/
8A19\___\_|_/58D2 7E3D DDF4 7BA6 <-PGP-> 41D1 B375 37D0 8714\_|_/___/5445

More information about the openwrt-devel mailing list