[RFC v8 00/20] Unifying LKL into UML

Hajime Tazaki thehajime at gmail.com
Tue Mar 16 01:17:15 GMT 2021


Hello,

First of all, thanks for all the comments on the patchset, which has
been a bit stale.  I'll reply to them below.

On Mon, 15 Mar 2021 06:03:19 +0900,
Johannes Berg wrote:
> 
> Hi,
> 
> So I'm still a bit lost here with this, and what exactly you're doing in
> places.
> 
> For example, you simulate a single CPU ("depends on !SMP", and anyway
> UML only supports that right now), yet on the other hand do a *LOT* of
> extra work with lkl_sem, lkl_thread, lkl_mutex, and all that. It's not
> clear to me why? Are you trying to model kernel threads as actual
> userspace pthreads, but then run only one at a time by way of exclusive
> locking?
> 
> I think we probably need a bit more architecture introduction here in
> the cover letter or the documentation patch. The doc patch basically
> just explains what it does, but not how it does anything, or why it was
> done in this way.

We didn't write down the details, which are already described in the
LKL paper (*1).  But I think we can extract/summarize some of the
important information from the paper into the document so that the
design is easier to understand.

*1 LKL's paper (pointer is also in the cover letter)
https://www.researchgate.net/profile/Nicolae_Tapus2/publication/224164682_LKL_The_Linux_kernel_library/links/02bfe50fd921ab4f7c000000.pdf

> For example, I'm asking myself:
>  * Why NOMMU? UML doesn't really do _much_ with memory protection unless
>    you add userspace, which you don't have.


My interpretation of MMU vs NOMMU is like this:

With an (emulated) MMU architecture you get a smoother integration
with other subsystems of the kernel tree, because some
subsystems/features are written under "#ifdef CONFIG_MMU".  NOMMU, on
the other hand, brings a simplified design with better portability.

LKL rather chose the latter, to benefit from better portability.
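
To illustrate what I mean, here is a made-up example (a hypothetical
helper, not code from this series or from any particular subsystem) of
the pattern that appears across the tree:

#include <linux/slab.h>
#include <linux/vmalloc.h>

/* hypothetical helper, only to show the kind of CONFIG_MMU split
 * found across the tree; not taken from any real subsystem */
#ifdef CONFIG_MMU
static void *alloc_user_buffer(unsigned long len)
{
        /* MMU path: memory meant to be mapped via page tables */
        return vmalloc_user(len);
}
#else
static void *alloc_user_buffer(unsigned long len)
{
        /* NOMMU path: plain contiguous allocation, no paging machinery */
        return kzalloc(len, GFP_KERNEL);
}
#endif

A NOMMU port only needs the simpler branch, which is part of what makes
it easier to carry to hosts where we cannot (or don't want to) emulate
an MMU.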

>  * Why pthreads and all? You already require jump_buf, so UML's
>    switch_threads() ought to be just fine for scheduling? It almost
>    seems like you're doing this just so you can serialize against "other
>    threads" (application threads), but wouldn't that trivially be
>    handled by the application? You could let it hook into switch_to() or
>    something, but why should a single "LKL" CPU ever require multiple
>    threads? Seems to me that the userspace could be required to
>    "lkl_run()" or so (vs. lkl_start()). Heck, you could even exit
>    lkl_run() every time you switch tasks in the kernel, and leave
>    scheduling the kernel vs. the application entirely up to the
>    application? (A trivial application would be simply doing something
>    like "while (1) { lkl_run(); pause(); }" mimicking the idle loop of
>    UML.

There is a description of this design choice in the LKL paper (*1):

  "implementations based on setjmp - longjmp require usage of a single
  stack space partitioned between all threads. As the Linux kernel
  uses deep stacks (especially in the VFS layer), in an environment
  with small stack sizes (e.g. inside another operating system's
  kernel) this will place a very low limit on the number of possible
  threads."

(from page 2, Section II, 2) Thread Support)

This is the reason for using pthreads as the context primitive.

And instead of manually calling lkl_run() to schedule threads and
relying on the host scheduler, LKL associates each kernel thread with
a host-provided semaphore, so that the Linux scheduler (not the host
scheduler) decides which pthread runs: a thread only proceeds once its
semaphore is raised and blocks on it otherwise.

This is also described (and hasn't changed since then) in the paper *1
(page 2, Section II, 3) Thread Switching).
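
To make the mechanism concrete, below is a minimal userspace sketch (my
illustration, not LKL code) of that scheme:

/* Minimal userspace sketch (not LKL code): every "kernel thread" is a
 * pthread parked on its own semaphore, and a context switch is
 * up(next) + down(prev), so even though these are real pthreads, at
 * most one of them runs at any time. */
#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

struct task {
        pthread_t tid;
        sem_t sched_sem;        /* host semaphore owned by this thread */
};

static struct task task_a, task_b;

static void switch_to(struct task *prev, struct task *next)
{
        sem_post(&next->sched_sem);     /* let the next thread run */
        sem_wait(&prev->sched_sem);     /* block until scheduled again */
}

static void *thread_b_fn(void *arg)
{
        (void)arg;
        sem_wait(&task_b.sched_sem);    /* wait to be scheduled first */
        printf("B runs\n");
        switch_to(&task_b, &task_a);    /* hand the single "CPU" back to A */
        printf("B exits\n");
        return NULL;
}

int main(void)
{
        sem_init(&task_a.sched_sem, 0, 0);
        sem_init(&task_b.sched_sem, 0, 0);
        task_a.tid = pthread_self();
        pthread_create(&task_b.tid, NULL, thread_b_fn, NULL);

        printf("A runs\n");
        switch_to(&task_a, &task_b);    /* schedule B, block A */
        printf("A runs again\n");

        sem_post(&task_b.sched_sem);    /* let B finish */
        pthread_join(task_b.tid, NULL);
        return 0;
}

The real code goes through the lkl_sem/lkl_thread host operations you
mentioned instead of calling the pthread/semaphore APIs directly, which
is what keeps the host interface portable.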

> And - kind of the theme behind all these questions - why is this not
> making UML actually be a binary that uses LKL? If the design were like
> what I'm alluding to above, that should actually be possible? Why should
> it not be possible? Why would it not be desirable? (I'm actually
> thinking that might be really useful to some of the things I'm doing.)
> Yes, if the application actually supports userspace running then it has
> som limitations on what it can do (in particular wrt. signals etc.), but
> that could be documented and would be OK?

Let me try to describe my thinking on why we didn't just generate
liblinux.so from the current UML.

Making UML build a library, which has been a long-wanted feature, can
be done roughly as follows.


I think there are several functions which the library should offer;
the two main ones are (a usage sketch follows the list):

- applications can link against the library and call functions in it
- the library can be used as a replacement for libc.a for syscall operations
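
To make the first use case concrete, an application linking such a
library would look roughly like this (the lkl_* names follow the
existing LKL API; the exact names and signatures in this series may
differ):

#include <stdio.h>
#include <lkl.h>
#include <lkl_host.h>

int main(void)
{
        char buf[128];
        long fd, ret;

        /* boot the library kernel inside this process */
        lkl_start_kernel(&lkl_host_ops, "mem=16M");

        /* every syscall is just a function call into the library */
        fd = lkl_sys_open("/proc/version", LKL_O_RDONLY, 0);
        if (fd >= 0) {
                ret = lkl_sys_read(fd, buf, sizeof(buf) - 1);
                if (ret > 0) {
                        buf[ret] = '\0';
                        printf("%s", buf);  /* version of the library kernel */
                }
                lkl_sys_close(fd);
        }

        lkl_sys_halt();
        return 0;
}

The second use case is essentially the same calls hidden behind a
libc-like layer rather than invoked by hand.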

To provide that with UML, what we need to do is:

1) change the Makefile to output liblinux.a
we faced a linker-script issue here, related to generating a
relocatable object in the middle of the build.

2) make the linker script clean with a 2-stage build
this fixes the linker issue of (1)

3) expose syscalls as function calls (a rough sketch follows below)
this runs into name conflicts (both link-time and compile-time)

4) rename headers and localize objects
this fixes the conflicts of (3)

This is the common set of modifications needed to turn UML into a
library.
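
For (3) and (4), the shape of the interface is roughly as below; the
names are borrowed from LKL (the lkl_/__lkl__ prefixes are exactly what
avoids the compile-time and link-time conflicts with the host's own
headers and symbols) and may not match this series exactly:

/* exported from liblinux.a: run one syscall in the library kernel */
long lkl_syscall(long no, long *params);

/* thin, generated wrapper; __lkl__NR_close is the renamed __NR_close,
 * so it cannot clash with the host libc/kernel headers */
static inline long lkl_sys_close(unsigned int fd)
{
        long params[6] = { fd };

        return lkl_syscall(__lkl__NR_close, params);
}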

The other parts are a design choice, I believe.
Because a library is, by its nature, more _reusable_ than an executable,
the choice of LKL is to be portable, which the current UML doesn't
pursue extensively (it focuses on Intel platforms).  Thus,

5) memory: NOMMU
6) scheduling (of irqs/threads): pthread-based rather than setjmp/longjmp


Implementing the alternate options for 5) and 6) (MMU, jmp_buf) would
diminish the strength of LKL, which we would like to avoid.  But as you
mentioned, nothing prevents us from implementing those alternatives, so
we can share the common parts (1-4) if we start implementing them.

I hope this makes it a bit clearer, but let me know if you find
anything still unclear.

-- Hajime



