[RFC PATCH 3/3] mm/vmalloc.c: change _vm_unmap_aliases() to do purge firstly
Baoquan He
bhe at redhat.com
Mon May 22 07:34:26 PDT 2023
On 05/22/23 at 02:02pm, Thomas Gleixner wrote:
> On Mon, May 22 2023 at 19:21, Baoquan He wrote:
> > On 05/22/23 at 01:10am, Thomas Gleixner wrote:
> >
> > diff --git a/mm/vmalloc.c b/mm/vmalloc.c
> > index 5ca55b357148..4b11a32df49d 100644
> > --- a/mm/vmalloc.c
> > +++ b/mm/vmalloc.c
> > @@ -1728,6 +1728,7 @@ static bool __purge_vmap_area_lazy(unsigned long start, unsigned long end)
> > unsigned int num_purged_areas = 0;
> > struct list_head local_purge_list;
> > struct vmap_area *va, *n_va;
> > + struct vmap_block vb;
> >
> > lockdep_assert_held(&vmap_purge_lock);
> >
> > @@ -1736,6 +1737,14 @@ static bool __purge_vmap_area_lazy(unsigned long start, unsigned long end)
> > list_replace_init(&purge_vmap_area_list, &local_purge_list);
> > spin_unlock(&purge_vmap_area_lock);
> >
> > + vb = container_of(va, struct vmap_block, va);
>
> This cannot work vmap_area is not embedded in vmap_block. vmap_block::va
> is a pointer. vmap_area does not link back to vmap_block, so there is no
> way to find it based on a vmap_area.
Oh, the code is buggy. va->flags can tell whether the area backs a
vmap_block, and from that we can deduce the vb pointer.
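For reference (quoting roughly from memory, so please double-check against
mm/vmalloc.c), the pieces this relies on are the vm_map_ram flags and the
fact that the block only points at its vmap_area:

/* abridged sketch, not the exact mm/vmalloc.c definitions */
#define VMAP_RAM		0x1	/* indicates a vm_map_ram area */
#define VMAP_BLOCK		0x2	/* area additionally backs a vmap_block */
#define VMAP_FLAGS_MASK		0x3

struct vmap_block {
	spinlock_t lock;
	struct vmap_area *va;			/* pointer only, vmap_area is not embedded */
	unsigned long free, dirty;
	unsigned long dirty_min, dirty_max;	/* dirty range, in pages */
	struct list_head free_list;
	struct rcu_head rcu_head;
	struct list_head purge;
	/* other fields omitted */
};

With that, the flags check below is what I had in mind: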
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 5ca55b357148..73d6ce441351 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -1728,6 +1728,7 @@ static bool __purge_vmap_area_lazy(unsigned long start, unsigned long end)
 	unsigned int num_purged_areas = 0;
 	struct list_head local_purge_list;
 	struct vmap_area *va, *n_va;
+	struct vmap_block *vb;
 
 	lockdep_assert_held(&vmap_purge_lock);
@@ -1736,6 +1737,15 @@ static bool __purge_vmap_area_lazy(unsigned long start, unsigned long end)
 	list_replace_init(&purge_vmap_area_list, &local_purge_list);
 	spin_unlock(&purge_vmap_area_lock);
 
+	if ((va->flags & VMAP_FLAGS_MASK) == (VMAP_RAM | VMAP_BLOCK)) {
+		vb = container_of(va, struct vmap_block, va);
+		/* This is pseudo code for illustration */
+		if (vb->dirty_max) {
+			s = vb->dirty_min << PAGE_SHIFT;
+			e = vb->dirty_max << PAGE_SHIFT;
+		}
+		kfree(vb);
+	}
+
 	if (unlikely(list_empty(&local_purge_list)))
 		goto out;
@@ -2083,7 +2093,6 @@ static void free_vmap_block(struct vmap_block *vb)
 	spin_unlock(&vmap_area_lock);
 
 	free_vmap_area_noflush(vb->va);
-	kfree_rcu(vb, rcu_head);
 }
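For context, free_vmap_block() currently looks roughly like this (again from
memory, so modulo details), which is why dropping the kfree_rcu() there keeps
the block alive until the purge actually happens:

static void free_vmap_block(struct vmap_block *vb)
{
	struct vmap_block *tmp;

	tmp = xa_erase(&vmap_blocks, addr_to_vb_idx(vb->va->va_start));
	BUG_ON(tmp != vb);

	spin_lock(&vmap_area_lock);
	unlink_va(vb->va, &vmap_area_root);
	spin_unlock(&vmap_area_lock);

	free_vmap_area_noflush(vb->va);
	kfree_rcu(vb, rcu_head);	/* removed by the hunk above */
}

The intent is that the vmap_block, and with it dirty_min/dirty_max, is still
around when __purge_vmap_area_lazy() runs, and gets freed there after the
flush.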
>
> Aside of that va is not initialized here :)
Oh, this is not real code, just to illustrate how we could calculate and
flush the last two pages of a vmap_block. If you have the per-va
flushing-via-array patch, I can work out a formal code change based on that.
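To make the illustration slightly more concrete: the range calculation is
meant to be the same arithmetic _vm_unmap_aliases() already does for a dirty
block, just folded into __purge_vmap_area_lazy()'s start/end before its
flush_tlb_kernel_range() call. Untested sketch; how vb is obtained for a
given va is the open question above:

	/* dirty_min/dirty_max are page offsets within the block */
	unsigned long s = va->va_start + (vb->dirty_min << PAGE_SHIFT);
	unsigned long e = va->va_start + (vb->dirty_max << PAGE_SHIFT);

	start = min(start, s);
	end   = max(end, e);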