[PATCH v7 05/12] mm: multigenerational LRU: minimal implementation

Yu Zhao yuzhao at google.com
Tue Feb 8 00:33:49 PST 2022


On Tue, Feb 08, 2022 at 01:18:55AM -0700, Yu Zhao wrote:

<snipped>

> diff --git a/mm/Kconfig b/mm/Kconfig
> index 3326ee3903f3..e899623d5df0 100644
> --- a/mm/Kconfig
> +++ b/mm/Kconfig
> @@ -892,6 +892,50 @@ config ANON_VMA_NAME
>  	  area from being merged with adjacent virtual memory areas due to the
>  	  difference in their name.
>  
> +# multigenerational LRU {
> +config LRU_GEN
> +	bool "Multigenerational LRU"
> +	depends on MMU
> +	# the following options can use up the spare bits in page flags
> +	depends on !MAXSMP && (64BIT || !SPARSEMEM || SPARSEMEM_VMEMMAP)
> +	help
> +	  A high performance LRU implementation for memory overcommit. See
> +	  Documentation/admin-guide/mm/multigen_lru.rst and
> +	  Documentation/vm/multigen_lru.rst for details.
> +
> +config NR_LRU_GENS
> +	int "Max number of generations"
> +	depends on LRU_GEN
> +	range 4 31
> +	default 4
> +	help
> +	  Do not increase this value unless you plan to use working set
> +	  estimation and proactive reclaim to optimize job scheduling in data
> +	  centers.
> +
> +	  This option uses order_base_2(N+1) bits in page flags.
> +
> +config TIERS_PER_GEN
> +	int "Number of tiers per generation"
> +	depends on LRU_GEN
> +	range 2 4
> +	default 4
> +	help
> +	  Do not decrease this value unless you run out of spare bits in page
> +	  flags, i.e., you see the "Not enough bits in page flags" build error.
> +
> +	  This option uses N-2 bits in page flags.

Moved Kconfig to this patch as suggested by:
https://lore.kernel.org/linux-mm/Yd6uHYtjGfgqjDpw@dhcp22.suse.cz/
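
For reference, the bit arithmetic in the two help texts can be checked
with a standalone sketch (plain userspace C; order_base_2() below is a
reimplementation of the kernel macro from include/linux/log2.h, and the
values mirror the Kconfig defaults above):

	#include <stdio.h>

	/* ceil(log2(n)) for n >= 1, like the kernel's order_base_2() */
	static int order_base_2(int n)
	{
		int bits = 0;

		while ((1 << bits) < n)
			bits++;
		return bits;
	}

	int main(void)
	{
		int nr_gens = 4;	/* CONFIG_NR_LRU_GENS default */
		int nr_tiers = 4;	/* CONFIG_TIERS_PER_GEN default */

		/* generations use order_base_2(N+1) bits in page flags */
		int gen_bits = order_base_2(nr_gens + 1);
		/* tiers use N-2 bits in page flags, per the help text */
		int tier_bits = nr_tiers - 2;

		printf("%d generation bits + %d tier bits\n",
		       gen_bits, tier_bits);
		return 0;
	}

With the defaults this comes to 3 bits for generations plus 2 bits for
tiers, all taken from the limited spare page-flag space, hence the
"Not enough bits in page flags" build error mentioned above.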

<snipped>

> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index d75a5738d1dc..5f0d92838712 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -1285,9 +1285,11 @@ static int __remove_mapping(struct address_space *mapping, struct page *page,
>  
>  	if (PageSwapCache(page)) {
>  		swp_entry_t swap = { .val = page_private(page) };
> -		mem_cgroup_swapout(page, swap);
> +
> +		/* get a shadow entry before mem_cgroup_swapout() clears folio_memcg() */
>  		if (reclaimed && !mapping_exiting(mapping))
>  			shadow = workingset_eviction(page, target_memcg);
> +		mem_cgroup_swapout(page, swap);
>  		__delete_from_swap_cache(page, swap, shadow);
>  		xa_unlock_irq(&mapping->i_pages);
>  		put_swap_page(page, swap);
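
The reordering in this hunk matters because workingset_eviction()
encodes the page's memcg into the shadow entry, and mem_cgroup_swapout()
severs that link. A toy model of the dependency, with made-up stand-ins
rather than the real kernel interfaces:

	#include <assert.h>
	#include <stddef.h>
	#include <stdio.h>

	struct memcg { int id; };
	struct page { struct memcg *memcg; };

	/* stand-in for mem_cgroup_swapout(): clears the memcg link */
	static void swapout(struct page *page)
	{
		page->memcg = NULL;
	}

	/* stand-in for workingset_eviction(): shadow records the memcg */
	static void *make_shadow(struct page *page)
	{
		return page->memcg;
	}

	int main(void)
	{
		struct memcg cg = { .id = 1 };
		struct page page = { .memcg = &cg };

		/* patched order: capture the shadow, then swap out */
		void *shadow = make_shadow(&page);
		swapout(&page);
		assert(shadow == &cg);

		/* old order: the link is gone before the shadow is made */
		page.memcg = &cg;
		swapout(&page);
		assert(make_shadow(&page) == NULL);

		puts("shadow entries need the memcg link intact");
		return 0;
	}

This is why the shadow entry is now taken before mem_cgroup_swapout().
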
> @@ -2721,6 +2723,9 @@ static void prepare_scan_count(pg_data_t *pgdat, struct scan_control *sc)
>  	unsigned long file;
>  	struct lruvec *target_lruvec;
>  
> +	if (lru_gen_enabled())
> +		return;
> +
>  	target_lruvec = mem_cgroup_lruvec(sc->target_mem_cgroup, pgdat);
>  
>  	/*
> @@ -3042,15 +3047,47 @@ static bool can_age_anon_pages(struct pglist_data *pgdat,
>  
>  #ifdef CONFIG_LRU_GEN
>  
> +enum {
> +	TYPE_ANON,
> +	TYPE_FILE,
> +};

Added two named constants (TYPE_ANON and TYPE_FILE) as requested here:
https://lore.kernel.org/linux-mm/87czkyzhfe.fsf@linux.ibm.com/
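
As a quick illustration of the intent, the constants replace bare 0/1
indices into anon/file pairs. A standalone example (the array below is
made up for demonstration, not a structure from the patch):

	#include <stdio.h>

	enum {
		TYPE_ANON,
		TYPE_FILE,
	};

	int main(void)
	{
		long nr_scanned[2] = { 0, 0 };

		nr_scanned[TYPE_ANON] += 32;	/* was nr_scanned[0] */
		nr_scanned[TYPE_FILE] += 64;	/* was nr_scanned[1] */

		printf("anon %ld, file %ld\n",
		       nr_scanned[TYPE_ANON], nr_scanned[TYPE_FILE]);
		return 0;
	}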

<snipped>

> +static void age_lruvec(struct lruvec *lruvec, struct scan_control *sc)
> +{
> +	bool need_aging;
> +	long nr_to_scan;
> +	struct mem_cgroup *memcg = lruvec_memcg(lruvec);
> +	int swappiness = get_swappiness(memcg);
> +	DEFINE_MAX_SEQ(lruvec);
> +	DEFINE_MIN_SEQ(lruvec);
> +
> +	mem_cgroup_calculate_protection(NULL, memcg);
> +
> +	if (mem_cgroup_below_min(memcg))
> +		return;

Added mem_cgroup_calculate_protection() for readability as requested here:
https://lore.kernel.org/linux-mm/Ydf9RXPch5ddg%2FWC@dhcp22.suse.cz/
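
The effect of the two added lines is that the aging now honors memcg
protection: the effective protection is refreshed first, and lruvecs
whose memcg is below its memory.min are not aged at all. A toy model of
the gate, with simplified stand-ins for the mm/memcontrol.c helpers:

	#include <stdbool.h>
	#include <stdio.h>

	struct memcg {
		unsigned long usage;
		unsigned long emin;	/* effective memory.min */
	};

	/* stand-in for mem_cgroup_below_min() */
	static bool below_min(const struct memcg *memcg)
	{
		return memcg->usage <= memcg->emin;
	}

	/* stand-in for the gate at the top of age_lruvec() */
	static void age_lruvec_sketch(struct memcg *memcg)
	{
		if (below_min(memcg)) {
			printf("usage %lu <= min %lu: skip aging\n",
			       memcg->usage, memcg->emin);
			return;
		}
		printf("usage %lu > min %lu: age this lruvec\n",
		       memcg->usage, memcg->emin);
	}

	int main(void)
	{
		struct memcg protected = { .usage = 100, .emin = 200 };
		struct memcg reclaimable = { .usage = 300, .emin = 200 };

		age_lruvec_sketch(&protected);
		age_lruvec_sketch(&reclaimable);
		return 0;
	}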

<snipped>


