[PATCH v11 55/69] mm/memcontrol: stop using mm->highest_vm_end

Liam Howlett liam.howlett@oracle.com
Sat Jul 16 19:46:55 PDT 2022


From: "Liam R. Howlett" <Liam.Howlett at Oracle.com>

mm->highest_vm_end is being removed as part of the maple tree conversion,
and walk_page_range() only walks the VMAs that actually exist in the
requested range, so pass through ULONG_MAX as the end address instead.
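
For reference (illustration only, not part of the diff below), the walk is
bounded by an explicit end address; the prototype is assumed to match
include/linux/pagewalk.h at this point in the series:

	/*
	 * end is only an upper bound: the walk visits the VMAs that
	 * exist in the range, so ULONG_MAX covers every VMA in the mm
	 * without consulting mm->highest_vm_end.
	 */
	int walk_page_range(struct mm_struct *mm, unsigned long start,
			    unsigned long end, const struct mm_walk_ops *ops,
			    void *private);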

Link: https://lkml.kernel.org/r/20220504011345.662299-40-Liam.Howlett@oracle.com
Link: https://lkml.kernel.org/r/20220621204632.3370049-56-Liam.Howlett@oracle.com
Signed-off-by: Liam R. Howlett <Liam.Howlett@Oracle.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: David Howells <dhowells@redhat.com>
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>
Cc: SeongJae Park <sj@kernel.org>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Will Deacon <will@kernel.org>
Cc: Davidlohr Bueso <dave@stgolabs.net>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---
 mm/memcontrol.c | 6 ++----
 1 file changed, 2 insertions(+), 4 deletions(-)

diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 655c09393ad5..d8e1b9ff72e6 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -5833,7 +5833,7 @@ static unsigned long mem_cgroup_count_precharge(struct mm_struct *mm)
 	unsigned long precharge;
 
 	mmap_read_lock(mm);
-	walk_page_range(mm, 0, mm->highest_vm_end, &precharge_walk_ops, NULL);
+	walk_page_range(mm, 0, ULONG_MAX, &precharge_walk_ops, NULL);
 	mmap_read_unlock(mm);
 
 	precharge = mc.precharge;
@@ -6131,9 +6131,7 @@ static void mem_cgroup_move_charge(void)
 	 * When we have consumed all precharges and failed in doing
 	 * additional charge, the page walk just aborts.
 	 */
-	walk_page_range(mc.mm, 0, mc.mm->highest_vm_end, &charge_walk_ops,
-			NULL);
-
+	walk_page_range(mc.mm, 0, ULONG_MAX, &charge_walk_ops, NULL);
 	mmap_read_unlock(mc.mm);
 	atomic_dec(&mc.from->moving_account);
 }
-- 
2.35.1


