[RFC PATCH 0/3] mm/zsmalloc: reduce lock contention in zs_free()
Wenchao Hao
haowenchao22 at gmail.com
Thu May 7 23:19:07 PDT 2026
Swap freeing can be expensive when unmapping a VMA containing many swap
entries. This has been reported to significantly delay memory reclamation
during Android's low-memory killing, especially when multiple processes
are terminated to free memory, with slot_free() accounting for more than
80% of the total cost of freeing swap entries.
Lock contention in zs_free() is a major contributor to this cost:
- pool->lock (rwlock) read-side atomic operations become expensive
under multi-process concurrency due to cacheline bouncing
- class->lock held across zspage page freeing causes zone->lock
contention to propagate back
This series addresses both issues:
Patch 1: Encode class_idx in obj value
On 64-bit systems, OBJ_INDEX_BITS is over-provisioned. We split it
into class_idx + obj_idx so that zs_free() can determine the correct
size_class from the obj value alone, without needing pool->lock.
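The split can be illustrated with a small userspace sketch. The bit
widths and helper names (ZS_CLASS_IDX_BITS, obj_class_idx(), etc.) are
hypothetical stand-ins for the constants in mm/zsmalloc.c; the point is
only that class_idx rides inside the obj value and can be recovered
without touching any pool-level state:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical bit layout on 64-bit: | PFN | class_idx | obj_idx | tag |
 * The real widths in mm/zsmalloc.c may differ. */
#define OBJ_TAG_BITS       1
#define ZS_CLASS_IDX_BITS  8   /* enough to index every size_class */
#define OBJ_INDEX_BITS     24  /* carved out of the old OBJ_INDEX_BITS */

static uint64_t obj_encode(uint64_t pfn, unsigned int class_idx,
                           unsigned int obj_idx)
{
    return (((pfn << ZS_CLASS_IDX_BITS | class_idx)
             << OBJ_INDEX_BITS | obj_idx) << OBJ_TAG_BITS);
}

/* Recover the size_class index from obj alone -- no pool->lock needed. */
static unsigned int obj_class_idx(uint64_t obj)
{
    return (obj >> (OBJ_TAG_BITS + OBJ_INDEX_BITS))
           & ((1u << ZS_CLASS_IDX_BITS) - 1);
}

static unsigned int obj_index(uint64_t obj)
{
    return (obj >> OBJ_TAG_BITS) & ((1u << OBJ_INDEX_BITS) - 1);
}
```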
Patch 2: Remove pool->lock from zs_free()
With class_idx available from the obj encoding, zs_free() acquires
only class->lock (re-reading obj for a stable PFN). This eliminates
rwlock read-side contention between concurrent zs_free() calls and
page migration/compaction.
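In sketch form (pseudocode with hypothetical helper names, not the
actual patch), the resulting zs_free() fast path looks like:

```c
/* Sketch only: 64-bit zs_free() without pool->lock. */
void zs_free(struct zs_pool *pool, unsigned long handle)
{
        unsigned long obj = handle_to_obj(handle);
        struct size_class *class = pool->size_class[obj_class_idx(obj)];

        spin_lock(&class->lock);
        /* Migration may have moved the zspage since the first read; it
         * cannot change the class, and with class->lock held the zspage
         * can no longer move, so re-read obj to get a stable PFN. */
        obj = handle_to_obj(handle);
        obj_free(class, obj);
        spin_unlock(&class->lock);
}
```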
Patch 3: Drop class->lock before freeing zspage pages
Move the actual page release (free_zspage) outside class->lock. The
bookkeeping is done under the lock, but buddy allocator interaction
(zone->lock) no longer nests inside class->lock.
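The reordering in patch 3 can be sketched as follows (pseudocode,
hypothetical helper names):

```c
/* Sketch only: bookkeeping under class->lock, page release outside. */
spin_lock(&class->lock);
obj_free(class, obj);
if (zspage_empty(zspage)) {
        remove_zspage(class, zspage);   /* list/stat bookkeeping */
        empty = true;
}
spin_unlock(&class->lock);

if (empty)
        free_zspage(pool, class, zspage); /* buddy free: takes zone->lock */
```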
Performance results:
Test: each process independently mmaps 256MB, writes data, calls
madvise(MADV_PAGEOUT) to swap it out via zram (lzo-rle), then all
processes munmap concurrently.
Raspberry Pi 4B (4-core ARM64 Cortex-A72):
mode        Base       Patched    Speedup
single      59.0ms     56.0ms     1.05x
multi 2p    94.6ms     66.7ms     1.42x
multi 4p    202.9ms    110.6ms    1.83x
x86 (20-core Intel i7-12700, 16 concurrent processes):
mode        Base       Patched    Speedup
single      11.7ms     9.8ms      1.19x
multi 2p    24.1ms     17.2ms     1.40x
multi 4p    63.0ms     45.3ms     1.39x
The single-process case shows only a modest improvement. With multiple
processes, each read_lock()/read_unlock() pair atomically updates the
shared rwlock reader count, and the cost of those atomic operations
grows as more CPUs bounce the same cacheline. Eliminating pool->lock
from this path removes that overhead entirely.
Patches 1-2 take effect only on 64-bit systems (gated by
ZS_OBJ_CLASS_IDX); 32-bit keeps the original pool->lock path. Patch 3
benefits all architectures.
Wenchao Hao (2):
mm/zsmalloc: encode class index in obj value for lockless class lookup
mm/zsmalloc: remove pool->lock from zs_free on 64-bit systems
Xueyuan Chen (1):
mm/zsmalloc: drop class lock before freeing zspage
mm/zsmalloc.c | 146 ++++++++++++++++++++++++++++++++++++++++++++------
1 file changed, 131 insertions(+), 15 deletions(-)
--
2.34.1