[PATCH v6 6/9] mm: multigenerational lru: aging

Michal Hocko mhocko at suse.com
Tue Jan 11 01:00:32 PST 2022


On Mon 10-01-22 18:18:55, Yu Zhao wrote:
> On Mon, Jan 10, 2022 at 04:35:46PM +0100, Michal Hocko wrote:
> > On Fri 07-01-22 16:36:11, Yu Zhao wrote:
> > > On Fri, Jan 07, 2022 at 02:11:29PM +0100, Michal Hocko wrote:
> > > > On Tue 04-01-22 13:22:25, Yu Zhao wrote:
> > > > [...]
> > > > > +static void lru_gen_age_node(struct pglist_data *pgdat, struct scan_control *sc)
> > > > > +{
> > > > > +	struct mem_cgroup *memcg;
> > > > > +	bool success = false;
> > > > > +	unsigned long min_ttl = READ_ONCE(lru_gen_min_ttl);
> > > > > +
> > > > > +	VM_BUG_ON(!current_is_kswapd());
> > > > > +
> > > > > +	current->reclaim_state->mm_walk = &pgdat->mm_walk;
> > > > > +
> > > > > +	memcg = mem_cgroup_iter(NULL, NULL, NULL);
> > > > > +	do {
> > > > > +		struct lruvec *lruvec = mem_cgroup_lruvec(memcg, pgdat);
> > > > > +
> > > > > +		if (age_lruvec(lruvec, sc, min_ttl))
> > > > > +			success = true;
> > > > > +
> > > > > +		cond_resched();
> > > > > +	} while ((memcg = mem_cgroup_iter(NULL, memcg, NULL)));
> > > > > +
> > > > > +	if (!success && mutex_trylock(&oom_lock)) {
> > > > > +		struct oom_control oc = {
> > > > > +			.gfp_mask = sc->gfp_mask,
> > > > > +			.order = sc->order,
> > > > > +		};
> > > > > +
> > > > > +		if (!oom_reaping_in_progress())
> > > > > +			out_of_memory(&oc);
> > > > > +
> > > > > +		mutex_unlock(&oom_lock);
> > > > > +	}
> > > > 
> > > > Why do you need to trigger oom killer from this path? Why cannot you
> > > > rely on the page allocator to do that like we do now?
> > > 
> > > This is per desktop users' (repeated) requests. They can't tolerate
> > > thrashing the way servers do because of UI lags; and they usually don't
> > > have fancy tools like oomd.
> > > 
> > > Related discussions I saw:
> > > https://github.com/zen-kernel/zen-kernel/issues/218
> > > https://lore.kernel.org/lkml/20101028191523.GA14972@google.com/
> > > https://lore.kernel.org/lkml/20211213051521.21f02dd2@mail.inbox.lv/
> > > https://lore.kernel.org/lkml/54C2C89C.8080002@gmail.com/
> > > https://lore.kernel.org/lkml/d9802b6a-949b-b327-c4a6-3dbca485ec20@gmx.com/
> > 
> > I do not really see any argument why userspace-based thrashing
> > detection cannot be used for those. Could you clarify?
> 
> It definitely can be done. But who is going to do it for every distro
> and all individual users? AFAIK, not a single distro provides such a
> solution for desktop/laptop/phone users.

If existing interfaces provide sufficient information to make those
calls then I would definitely prefer a userspace solution.

> And there is also the theoretical question of how reliable a userspace
> solution can be. What if this userspace solution itself gets stuck in
> the direct reclaim path? I'm not sure whether anybody has done the
> research to prove or debunk that.

I have to confess I haven't checked oomd or other solutions, but with
sufficient care (all the code mlocked + no allocations done while
collecting data) I believe this should be achievable.
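
For illustration, a rough and untested sketch of the kind of precautions
I have in mind (a hypothetical monitor, assuming PSI is enabled so that
/proc/pressure/memory exists; not a reference implementation):

#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

static char buf[4096];	/* preallocated; no malloc() anywhere in the loop */

int main(void)
{
	/*
	 * Pin current and future pages so the monitor itself cannot
	 * page fault and end up waiting in direct reclaim while it is
	 * collecting data.
	 */
	if (mlockall(MCL_CURRENT | MCL_FUTURE)) {
		perror("mlockall");
		return 1;
	}

	/* Open everything up front; no new fds or allocations in the hot path. */
	int fd = open("/proc/pressure/memory", O_RDONLY);

	if (fd < 0)
		return 1;

	for (;;) {
		ssize_t n = pread(fd, buf, sizeof(buf) - 1, 0);

		if (n > 0) {
			buf[n] = '\0';
			/* parse the avg10/avg60 numbers and act on them */
		}
		sleep(1);
	}
}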

> In addition, exactly what PSI values should be used on different
> models of consumer electronics? Nobody knows. We have a team working
> on this and we haven't figured it out for all our Chromebook models.

I believe this is a matter of tuning for a specific deployment. We have
not only PSI but also refault counters that can be used.
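
E.g. something along these lines (again only a sketch; the vmstat field
names depend on the kernel version, newer kernels split the counter into
workingset_refault_anon and workingset_refault_file, and I am ignoring
the mlocking/no-allocation concerns above for brevity):

#include <stdio.h>
#include <string.h>

/*
 * "full" memory pressure (avg10) from PSI: the share of wall time over
 * the last 10s during which all non-idle tasks were stalled on memory.
 */
static double psi_memory_full_avg10(void)
{
	FILE *f = fopen("/proc/pressure/memory", "r");
	char line[256];
	double avg10 = 0.0;

	if (!f)
		return 0.0;

	while (fgets(line, sizeof(line), f))
		if (!strncmp(line, "full ", 5))
			sscanf(line, "full avg10=%lf", &avg10);

	fclose(f);
	return avg10;
}

/*
 * Sum of the workingset refault counters in /proc/vmstat; the delta
 * between two samples tells how much of the evicted working set had to
 * be read back in, i.e. how hard the system is thrashing.
 */
static unsigned long long workingset_refaults(void)
{
	FILE *f = fopen("/proc/vmstat", "r");
	char key[64];
	unsigned long long val, sum = 0;

	if (!f)
		return 0;

	while (fscanf(f, "%63s %llu", key, &val) == 2)
		if (!strncmp(key, "workingset_refault", 18))
			sum += val;

	fclose(f);
	return sum;
}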

> As Andrew said, "a blunt instrument like this would be useful".
> https://lore.kernel.org/lkml/20211202135824.33d2421bf5116801cfa2040d@linux-foundation.org/
> 
> I'd like to have less code in the kernel too, but I've learned never to
> walk over users. If I remove this and they come after me asking why,
> I'd have a hard time convincing them.
> 
> > Also, my question was pointing to why out_of_memory is called from the
> > reclaim path rather than the allocator (memcg charging path). It is up
> > to the caller of the reclaim to control different reclaim strategies
> > and to tell when all hope is lost and the oom killer should be invoked.
> > This allows for different policies at the allocator level, and this
> > change would break that AFAICS. E.g. what if the underlying allocation
> > context is __GFP_NORETRY?
> 
> This is called in kswapd only, and by default (min_ttl=0) it doesn't
> do anything. So __GFP_NORETRY doesn't apply.

My bad. I must have got lost when traversing the code, but I can see you
are enforcing that with a VM_BUG_ON. So the limited-scope reclaim is
indeed not a problem.

> The question would be
> more along the lines of long-term ABI support.
> 
> And I'll add the following comment, if you think we can keep this
> logic:
>    OOM kill if every generation from all memcgs is younger than min_ttl.
>    Another theoretical possibility is that all memcgs are either below
>    min or ineligible at priority 0, but this isn't the main goal.
> 
> (Please read my reply at the bottom to decide whether we should keep
>  it or not. Thanks.)
> 
> > > From patch 8:
> > >   Personal computers
> > >   ------------------
> > >   :Thrashing prevention: Write ``N`` to
> > >    ``/sys/kernel/mm/lru_gen/min_ttl_ms`` to prevent the working set of
> > >    ``N`` milliseconds from getting evicted. The OOM killer is invoked if
> > >    this working set can't be kept in memory. Based on the average human
> > >    detectable lag (~100ms), ``N=1000`` usually eliminates intolerable
> > >    lags due to thrashing. Larger values like ``N=3000`` make lags less
> > >    noticeable at the cost of more OOM kills.
> > 
> > This is a very good example of something that should be a self-contained
> > patch with its own justification.
> 
> Consider it done.
> 
> > TBH it is really not all that clear to
> > me that we want to provide any user-visible knob to control OOM behavior
> > based on a time-based QoS.
> 
> Agreed, and it didn't exist until v4, i.e., it was added only after I
> was asked to provide it several times.
> 
> For example:
> https://github.com/zen-kernel/zen-kernel/issues/223
> 
> And another example:
>    Your Multigenerational LRU patchset is pretty complex and
>    effective, but it does not fully eliminate thrashing conditions on
>    old PCs with slow HDDs.
> 
>    I'm kindly asking you to cooperate with hakavlad if it's possible
>    and maybe re-implement parts of the le9 patch in your patchset
>    wherever acceptable, as they are quite similar in their core concept.
> 
> This is an excerpt of an email from iam at valdikss.org.ru, who has
> posted demo videos in this discussion:
> https://lore.kernel.org/lkml/2dc51fc8-f14e-17ed-a8c6-0ec70423bf54@valdikss.org.ru/

That is all interesting feedback, but we should be really careful about
ABI constraints and the future maintainability of the knob. I still stand
behind my statement that the kernel should implement such features only
if it is clear that we cannot really implement similar logic in
userspace.

-- 
Michal Hocko
SUSE Labs


