[PATCH v2 1/3] mm-unstable: Multi-gen LRU: Fix per-zone reclaim

AngeloGioacchino Del Regno <angelogioacchino.delregno@collabora.com>
Wed Aug 2 03:32:35 PDT 2023


On 02/08/23 04:56, Kalesh Singh wrote:
> MGLRU has an LRU list for each zone, for each type (anon/file), in
> each generation:
> 
> 	long nr_pages[MAX_NR_GENS][ANON_AND_FILE][MAX_NR_ZONES];
> 
> The min_seq (oldest generation) can progress independently for each
> type but the max_seq (youngest generation) is shared for both anon and
> file. This is to maintain a common frame of reference.
> 
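> For context, this bookkeeping lives in struct lru_gen_folio in
> include/linux/mmzone.h; the following is an abridged paraphrase of it
> (the field comments are mine, not the kernel's):
> 
> 	struct lru_gen_folio {
> 		/* youngest generation number, shared by anon and file */
> 		unsigned long max_seq;
> 		/* oldest generation numbers, one per type */
> 		unsigned long min_seq[ANON_AND_FILE];
> 		/* one LRU list per generation, per type, per zone */
> 		struct list_head folios[MAX_NR_GENS][ANON_AND_FILE][MAX_NR_ZONES];
> 		/* the per-list page counts quoted above */
> 		long nr_pages[MAX_NR_GENS][ANON_AND_FILE][MAX_NR_ZONES];
> 		/* ... */
> 	};
> 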
> In order for eviction to advance the min_seq of a type, all the per-zone
> lists in the oldest generation of that type must be empty.
> 
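> As a rough sketch of that rule (a hypothetical helper modelled on
> try_to_inc_min_seq() in mm/vmscan.c; simplified, not the exact kernel
> code):
> 
> 	/*
> 	 * Hypothetical helper, for illustration only: the oldest
> 	 * generation of a type can be retired only when every per-zone
> 	 * list in that generation is empty.
> 	 */
> 	static bool can_inc_min_seq(struct lru_gen_folio *lrugen, int type)
> 	{
> 		/* lru_gen_from_seq() maps a sequence number to a gen index */
> 		int gen = lru_gen_from_seq(lrugen->min_seq[type]);
> 		int zone;
> 
> 		for (zone = 0; zone < MAX_NR_ZONES; zone++) {
> 			if (!list_empty(&lrugen->folios[gen][type][zone]))
> 				return false;
> 		}
> 
> 		return true;
> 	}
> 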
> The eviction logic only considers pages from eligible zones for
> eviction or promotion.
> 
>      scan_folios() {
> 	...
> 	for (zone = sc->reclaim_idx; zone >= 0; zone--)  {
> 	    ...
> 	    sort_folio(); 	// Promote
> 	    ...
> 	    isolate_folio(); 	// Evict
> 	}
> 	...
>      }
> 
> Consider a system that has the movable zone configured and the default
> 4 generations. The current state of the system is as shown below
> (only one type is illustrated for simplicity):
> 
> Type: ANON
> 
> 	Zone    DMA32     Normal    Movable    Device
> 
> 	Gen 0       0          0        4GB         0
> 
> 	Gen 1       0        1GB        1MB         0
> 
> 	Gen 2     1MB        4GB        1MB         0
> 
> 	Gen 3     1MB        1MB        1MB         0
> 
> Now, when a GFP_KERNEL allocation request arrives (eligible zone
> index <= Normal), evict_folios() will return without doing any work,
> since there are no pages to scan in the eligible zones of the oldest
> generation. Reclaim won't make progress until it is triggered from a
> ZONE_MOVABLE allocation request, which may not happen soon if there is
> a lot of free memory in the movable zone. This can lead to OOM kills,
> even though there is 1GB of pages in the Normal zone of Gen 1 that we
> have not yet tried to reclaim.
> 
> This issue is not seen in the conventional active/inactive LRU since
> there are no per-zone lists.
> 
> If there are no (or not enough) folios to scan in the eligible zones,
> move folios from the ineligible zones (zone_index > reclaim_idx) to the
> next generation. This allows min_seq to progress and reclaim to proceed
> from the next generation (Gen 1).
> 
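> To reach those folios, the scan loop in the patch below is reordered to
> visit all zones, eligible ones first. A tiny userspace demonstration of
> that index arithmetic, with zone numbering assumed to match the example
> above (DMA32=0, Normal=1, Movable=2, Device=3):
> 
> 	#include <stdio.h>
> 
> 	#define MAX_NR_ZONES	4	/* assumption for this example */
> 
> 	int main(void)
> 	{
> 		int reclaim_idx = 1;	/* GFP_KERNEL: highest eligible zone is Normal */
> 		int i;
> 
> 		for (i = MAX_NR_ZONES; i > 0; i--) {
> 			int zone = (reclaim_idx + i) % MAX_NR_ZONES;
> 
> 			printf("zone %d (%s)\n", zone,
> 			       zone <= reclaim_idx ? "eligible" : "ineligible");
> 		}
> 
> 		return 0;	/* visits 1, 0, 3, 2: eligible zones first */
> 	}
> 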
> Qualcomm, MediaTek, and Raspberry Pi [1] discovered this issue
> independently.
> 
> [1] https://github.com/raspberrypi/linux/issues/5395
> 
> Fixes: ac35a4902374 ("mm: multi-gen LRU: minimal implementation")
> Cc: stable@vger.kernel.org
> Cc: Yu Zhao <yuzhao@google.com>
> Cc: Andrew Morton <akpm@linux-foundation.org>
> Reported-by: Charan Teja Kalla <quic_charante@quicinc.com>
> Reported-by: Lecopzer Chen <lecopzer.chen@mediatek.com>
> Signed-off-by: Kalesh Singh <kaleshsingh@google.com>

Whole series tested on the MT8173 Elm Chromebook and the MT6795 Xperia M5,
as those are low-RAM devices. I can't reproduce the issue described in your
RPi link [1].

MediaTek:
Tested-by: AngeloGioacchino Del Regno <angelogioacchino.delregno@collabora.com>

> ---
> 
> Changes in v2:
>    - Add Fixes tag and cc stable
> 
>   mm/vmscan.c | 18 ++++++++++++++----
>   1 file changed, 14 insertions(+), 4 deletions(-)
> 
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index 4039620d30fe..489a4fc7d9b1 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -4889,7 +4889,8 @@ static int lru_gen_memcg_seg(struct lruvec *lruvec)
>    *                          the eviction
>    ******************************************************************************/
>   
> -static bool sort_folio(struct lruvec *lruvec, struct folio *folio, int tier_idx)
> +static bool sort_folio(struct lruvec *lruvec, struct folio *folio, struct scan_control *sc,
> +		       int tier_idx)
>   {
>   	bool success;
>   	int gen = folio_lru_gen(folio);
> @@ -4939,6 +4940,13 @@ static bool sort_folio(struct lruvec *lruvec, struct folio *folio, int tier_idx)
>   		return true;
>   	}
>   
> +	/* ineligible */
> +	if (zone > sc->reclaim_idx) {
> +		gen = folio_inc_gen(lruvec, folio, false);
> +		list_move_tail(&folio->lru, &lrugen->folios[gen][type][zone]);
> +		return true;
> +	}
> +
>   	/* waiting for writeback */
>   	if (folio_test_locked(folio) || folio_test_writeback(folio) ||
>   	    (type == LRU_GEN_FILE && folio_test_dirty(folio))) {
> @@ -4987,7 +4995,8 @@ static bool isolate_folio(struct lruvec *lruvec, struct folio *folio, struct sca
>   static int scan_folios(struct lruvec *lruvec, struct scan_control *sc,
>   		       int type, int tier, struct list_head *list)
>   {
> -	int gen, zone;
> +	int i;
> +	int gen;
>   	enum vm_event_item item;
>   	int sorted = 0;
>   	int scanned = 0;
> @@ -5003,9 +5012,10 @@ static int scan_folios(struct lruvec *lruvec, struct scan_control *sc,
>   
>   	gen = lru_gen_from_seq(lrugen->min_seq[type]);
>   
> -	for (zone = sc->reclaim_idx; zone >= 0; zone--) {
> +	for (i = MAX_NR_ZONES; i > 0; i--) {
>   		LIST_HEAD(moved);
>   		int skipped = 0;
> +		int zone = (sc->reclaim_idx + i) % MAX_NR_ZONES;
>   		struct list_head *head = &lrugen->folios[gen][type][zone];
>   
>   		while (!list_empty(head)) {
> @@ -5019,7 +5029,7 @@ static int scan_folios(struct lruvec *lruvec, struct scan_control *sc,
>   
>   			scanned += delta;
>   
> -			if (sort_folio(lruvec, folio, tier))
> +			if (sort_folio(lruvec, folio, sc, tier))
>   				sorted += delta;
>   			else if (isolate_folio(lruvec, folio, sc)) {
>   				list_add(&folio->lru, list);



