[PATCH v5] um: Enable preemption in UML
Johannes Berg
johannes at sipsolutions.net
Fri Sep 22 02:51:05 PDT 2023
On Fri, 2023-09-22 at 10:19 +0100, Anton Ivanov wrote:
> >
> > So maybe that works - perhaps with a big comment?
>
> Ack. Will add this to the patch and run it through its paces.
>
We can also just get rid of force_flush_all() entirely:
diff --git a/arch/um/include/asm/mmu_context.h b/arch/um/include/asm/mmu_context.h
index 68e2eb9cfb47..23dcc914d44e 100644
--- a/arch/um/include/asm/mmu_context.h
+++ b/arch/um/include/asm/mmu_context.h
@@ -13,8 +13,6 @@
 #include <asm/mm_hooks.h>
 #include <asm/mmu.h>
 
-extern void force_flush_all(void);
-
 #define activate_mm activate_mm
 static inline void activate_mm(struct mm_struct *old, struct mm_struct *new)
 {
diff --git a/arch/um/kernel/process.c b/arch/um/kernel/process.c
index 6daffb9d8a8d..a024acd6d85c 100644
--- a/arch/um/kernel/process.c
+++ b/arch/um/kernel/process.c
@@ -25,7 +25,6 @@
 #include <linux/threads.h>
 #include <linux/resume_user_mode.h>
 #include <asm/current.h>
-#include <asm/mmu_context.h>
 #include <linux/uaccess.h>
 #include <as-layout.h>
 #include <kern_util.h>
@@ -139,8 +138,6 @@ void new_thread_handler(void)
 /* Called magically, see new_thread_handler above */
 void fork_handler(void)
 {
-	force_flush_all();
-
 	schedule_tail(current->thread.prev_sched);
 
 	/*
diff --git a/arch/um/kernel/skas/mmu.c b/arch/um/kernel/skas/mmu.c
index 656fe16c9b63..f3766dbbc4ee 100644
--- a/arch/um/kernel/skas/mmu.c
+++ b/arch/um/kernel/skas/mmu.c
@@ -30,10 +30,7 @@ int init_new_context(struct task_struct *task, struct mm_struct *mm)
 		from_mm = &current->mm->context;
 
 	block_signals_trace();
-	if (from_mm)
-		to_mm->id.u.pid = copy_context_skas0(stack,
-						     from_mm->id.u.pid);
-	else to_mm->id.u.pid = start_userspace(stack);
+	to_mm->id.u.pid = start_userspace(stack);
 	unblock_signals_trace();
 
 	if (to_mm->id.u.pid < 0) {
diff --git a/arch/um/kernel/tlb.c b/arch/um/kernel/tlb.c
index 34ec8e677fb9..ce7fb5a34f0f 100644
--- a/arch/um/kernel/tlb.c
+++ b/arch/um/kernel/tlb.c
@@ -599,15 +599,3 @@ void flush_tlb_mm(struct mm_struct *mm)
 	for_each_vma(vmi, vma)
 		fix_range(mm, vma->vm_start, vma->vm_end, 0);
 }
-
-void force_flush_all(void)
-{
-	struct mm_struct *mm = current->mm;
-	struct vm_area_struct *vma;
-	VMA_ITERATOR(vmi, mm, 0);
-
-	mmap_read_lock(mm);
-	for_each_vma(vmi, vma)
-		fix_range(mm, vma->vm_start, vma->vm_end, 1);
-	mmap_read_unlock(mm);
-}
I think that _might_ be slower for fork()-happy workloads, since we no
longer init the new context from the parent via copy_context_skas0(),
but I'm not even sure what's _in_ that skas0 copy at that point? Does
it matter?
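
For anyone following along, here's a loose userspace analogy of the
difference (purely illustrative, not UML code): copy_context_skas0()
behaves roughly like a plain fork() -- the new host process inherits
the parent's mappings, which is what force_flush_all() then had to
resync against the child's new mm -- while start_userspace() gives us
a process that starts out empty, with mappings established on demand.

/* Illustrative sketch only, not UML code: a fork()ed child starts
 * with a copy of the parent's mappings (cf. copy_context_skas0()),
 * so any that don't match the new mm would need flushing; a process
 * started from scratch (cf. start_userspace()) begins empty instead.
 */
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
	/* The parent establishes a private anonymous mapping. */
	char *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
		       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (p == MAP_FAILED)
		return 1;
	strcpy(p, "inherited from parent");

	pid_t pid = fork();
	if (pid == 0) {
		/* The fork() child sees the mapping without doing any
		 * work of its own -- the copy_context_skas0() case. */
		printf("child: %s\n", p);
		_exit(0);
	}
	waitpid(pid, NULL, 0);
	/* A process started fresh would instead have to map the page
	 * itself before touching it -- the start_userspace() case,
	 * where mappings get filled in lazily. */
	return 0;
}
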
johannes