[PATCH v6 4/9] mm: multigenerational lru: groundwork

Aneesh Kumar K.V aneesh.kumar at linux.ibm.com
Tue Jan 11 00:16:24 PST 2022


Yu Zhao <yuzhao at google.com> writes:

.....

> +
> +/*
> + * Evictable pages are divided into multiple generations. The youngest and the
> + * oldest generation numbers, max_seq and min_seq, are monotonically increasing.
> + * They form a sliding window of a variable size [MIN_NR_GENS, MAX_NR_GENS]. An
> + * offset within MAX_NR_GENS, gen, indexes the lru list of the corresponding
> + * generation. The gen counter in folio->flags stores gen+1 while a page is on
> + * lrugen->lists[]. Otherwise, it stores 0.
> + *
> + * A page is added to the youngest generation on faulting. The aging needs to
> + * check the accessed bit at least twice before handing this page over to the
> + * eviction. The first check takes care of the accessed bit set on the initial
> + * fault; the second check makes sure this page hasn't been used since then.
> + * This process, AKA second chance, requires a minimum of two generations,
> + * hence MIN_NR_GENS. And to be compatible with the active/inactive lru, these
> + * two generations are mapped to the active; the rest of generations, if they
> + * exist, are mapped to the inactive. PG_active is always cleared while a page
> + * is on lrugen->lists[] so that demotion, which happens consequently when the
> + * aging creates a new generation, needs not to worry about it.
> + */
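
Just to confirm my reading of the seq/gen mapping and the second-chance
rule described above, here is a small stand-alone sketch (re-derived from
the comment, not copied from the patch; the numbers are arbitrary):

  #include <assert.h>
  #include <stdbool.h>

  #define MIN_NR_GENS	2UL
  #define MAX_NR_GENS	4UL

  /* the youngest MIN_NR_GENS generations are mapped to "active" */
  static bool seq_is_active(unsigned long seq, unsigned long max_seq)
  {
  	return seq + MIN_NR_GENS > max_seq;
  }

  /* a sequence number indexes lrugen->lists[] modulo MAX_NR_GENS */
  static unsigned long lru_gen_from_seq(unsigned long seq)
  {
  	return seq % MAX_NR_GENS;
  }

  int main(void)
  {
  	unsigned long max_seq = 5, min_seq = 3;

  	assert(seq_is_active(5, max_seq));	/* youngest: active */
  	assert(seq_is_active(4, max_seq));	/* second chance: still active */
  	assert(!seq_is_active(3, max_seq));	/* oldest: inactive */
  	assert(lru_gen_from_seq(min_seq) == 3);
  	return 0;
  }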

Where do we clear PG_active in the code? Is this the reason we end up
with

  void deactivate_page(struct page *page)
  {
 -	if (PageLRU(page) && PageActive(page) && !PageUnevictable(page)) {
 +	if (PageLRU(page) && !PageUnevictable(page) && (PageActive(page) || lru_gen_enabled())) {
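
My expectation is that the clearing happens when the gen counter is stored
into folio->flags, i.e. when the folio is added to lrugen->lists[] in
lru_gen_add_folio(). A hypothetical user-space model of that kind of
combined update (bit positions and the helper name are made up here):

  #include <assert.h>

  #define PG_active	5		/* placeholder bit number */
  #define LRU_GEN_PGOFF	8		/* placeholder offset of the gen field */
  #define LRU_GEN_MASK	(0x7UL << LRU_GEN_PGOFF)

  /* store gen+1 and drop PG_active in a single flags update */
  static unsigned long lru_gen_add_flags(unsigned long flags, unsigned long gen)
  {
  	flags &= ~(LRU_GEN_MASK | (1UL << PG_active));
  	return flags | ((gen + 1) << LRU_GEN_PGOFF);
  }

  int main(void)
  {
  	/* faulted-in page: PG_active was set by folio_add_lru() */
  	unsigned long flags = 1UL << PG_active;

  	flags = lru_gen_add_flags(flags, 3);
  	assert(!(flags & (1UL << PG_active)));
  	assert(((flags & LRU_GEN_MASK) >> LRU_GEN_PGOFF) == 4);	/* gen+1 */
  	return 0;
  }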




> +#define MIN_NR_GENS		2U
> +#define MAX_NR_GENS		((unsigned int)CONFIG_NR_LRU_GENS)
> +
> +struct lru_gen_struct {
> +	/* the aging increments the youngest generation number */
> +	unsigned long max_seq;
> +	/* the eviction increments the oldest generation numbers */
> +	unsigned long min_seq[ANON_AND_FILE];
> +	/* the birth time of each generation in jiffies */
> +	unsigned long timestamps[MAX_NR_GENS];
> +	/* the multigenerational lru lists */
> +	struct list_head lists[MAX_NR_GENS][ANON_AND_FILE][MAX_NR_ZONES];
> +	/* the sizes of the above lists */
> +	unsigned long nr_pages[MAX_NR_GENS][ANON_AND_FILE][MAX_NR_ZONES];
> +	/* whether the multigenerational lru is enabled */
> +	bool enabled;
> +};
> +
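
To make sure I follow the three-dimensional lists[]/nr_pages[] layout, a
minimal model of the indexing as I understand it (list_head handling
omitted; the constants are placeholders):

  #include <assert.h>
  #include <string.h>

  #define MAX_NR_GENS	4
  #define ANON_AND_FILE	2
  #define MAX_NR_ZONES	5
  #define LRU_GEN_ANON	0
  #define LRU_GEN_FILE	1

  struct lru_gen_model {
  	unsigned long max_seq;
  	unsigned long nr_pages[MAX_NR_GENS][ANON_AND_FILE][MAX_NR_ZONES];
  };

  /* a faulted-in page goes to the youngest generation of its type/zone */
  static void add_page(struct lru_gen_model *lrugen, int type, int zone)
  {
  	int gen = lrugen->max_seq % MAX_NR_GENS;

  	lrugen->nr_pages[gen][type][zone]++;
  }

  int main(void)
  {
  	struct lru_gen_model lrugen;

  	memset(&lrugen, 0, sizeof(lrugen));
  	lrugen.max_seq = 6;		/* arbitrary */

  	add_page(&lrugen, LRU_GEN_FILE, 1);
  	assert(lrugen.nr_pages[6 % MAX_NR_GENS][LRU_GEN_FILE][1] == 1);
  	return 0;
  }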

....

>  static void __meminit zone_init_internals(struct zone *zone, enum zone_type idx, int nid,
> diff --git a/mm/swap.c b/mm/swap.c
> index e8c9dc6d0377..d7dde3b7d4b5 100644
> --- a/mm/swap.c
> +++ b/mm/swap.c
> @@ -462,6 +462,11 @@ void folio_add_lru(struct folio *folio)
>  	VM_BUG_ON_FOLIO(folio_test_active(folio) && folio_test_unevictable(folio), folio);
>  	VM_BUG_ON_FOLIO(folio_test_lru(folio), folio);
>  
> +	/* see the comment in lru_gen_add_folio() */
> +	if (lru_gen_enabled() && !folio_test_unevictable(folio) &&
> +	    task_in_lru_fault() && !(current->flags & PF_MEMALLOC))
> +		folio_set_active(folio);
> +


Can you explain this better? What is the significance of marking the
folio active here? Do we need to differentiate parallel page faults (across
different vmas) w.r.t. task_in_lru_fault()?
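
For reference, this is how I read the condition being added, with
in_lru_fault/memalloc below standing in for task_in_lru_fault() and
PF_MEMALLOC (a user-space model, not the patch's code):

  #include <assert.h>
  #include <stdbool.h>

  struct task_model {
  	bool in_lru_fault;	/* stands in for task_in_lru_fault() */
  	bool memalloc;		/* stands in for current->flags & PF_MEMALLOC */
  };

  /* should a freshly added folio be treated as active (youngest gen)? */
  static bool add_as_active(const struct task_model *tsk, bool unevictable)
  {
  	return !unevictable && tsk->in_lru_fault && !tsk->memalloc;
  }

  int main(void)
  {
  	struct task_model fault = { .in_lru_fault = true, .memalloc = false };
  	struct task_model reclaim = { .in_lru_fault = false, .memalloc = true };

  	assert(add_as_active(&fault, false));	/* page-fault path */
  	assert(!add_as_active(&reclaim, false));	/* e.g. allocation from reclaim */
  	return 0;
  }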


>  	folio_get(folio);
>  	local_lock(&lru_pvecs.lock);
>  	pvec = this_cpu_ptr(&lru_pvecs.lru_add);
> @@ -563,7 +568,7 @@ static void lru_deactivate_file_fn(struct page *page, struct lruvec *lruvec)
>  


