[PATCH v6 6/9] mm: multigenerational lru: aging

Michal Hocko <mhocko at suse.com>
Mon Jan 10 08:25:15 PST 2022


On Mon 10-01-22 17:01:07, Vlastimil Babka wrote:
> On 1/10/22 16:01, Michal Hocko wrote:
> > On Thu 06-01-22 17:12:18, Michal Hocko wrote:
> >> On Tue 04-01-22 13:22:25, Yu Zhao wrote:
> >> > +static struct lru_gen_mm_walk *alloc_mm_walk(void)
> >> > +{
> >> > +	if (!current->reclaim_state || !current->reclaim_state->mm_walk)
> >> > +		return kvzalloc(sizeof(struct lru_gen_mm_walk), GFP_KERNEL);
> > 
> > One thing I have overlooked completely. You cannot really use a
> > GFP_KERNEL allocation here because the reclaim context can be
> > constrained (e.g. GFP_NOFS). This allocation will not do any reclaim
> > itself as it runs with PF_MEMALLOC set, but I suspect lockdep will
> > complain anyway.
> > 
> > Also kvmalloc is not really great here: a) the vmalloc path is never
> > taken for small objects and b) we do not really want to create a
> > dependency between vmalloc and reclaim (vmalloc -> reclaim ->
> > vmalloc).
> > 
> > Even if we rule out vmalloc and look at kmalloc alone, is this really
> > safe? I do not see any recursion prevention in the SL.B code. Maybe
> > this just happens to work, but the dependency should really be
> > documented so that future SL.B changes won't break the whole scheme.
> 
> Slab implementations drop all locks before calling into the page
> allocator (and thus possibly reclaim), so slab itself should be fine
> and I don't expect that to change. But it is true that we could
> eventually reach the page allocator recursively again, which is not
> great.

Thanks for double checking. If recursion is really intended and
something the SL.B allocators should support, then this is definitely
worth documenting so that a subtle future change won't break it.
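
For the record, the implicit guarantee we would be relying on is the
PF_MEMALLOC check in the page allocator: both direct reclaim and kswapd
run with PF_MEMALLOC set, and the allocator refuses to enter direct
reclaim for such tasks, so an allocation issued from the reclaim path
can fail but cannot recurse into reclaim. Even something as small as
the following (purely illustrative, this helper does not exist) next to
the allocation would help:

	/*
	 * Allocations from the reclaim path cannot recurse into reclaim:
	 * reclaimers run with PF_MEMALLOC set and the page allocator skips
	 * direct reclaim for PF_MEMALLOC tasks. They can still fail, though.
	 */
	static inline bool can_recurse_into_reclaim(void)
	{
		return !(current->flags & PF_MEMALLOC);
	}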

-- 
Michal Hocko
SUSE Labs


