[PATCH v1 00/13] KVM: arm64: Refactor user_mem_abort() into a state-object model
Fuad Tabba
tabba@google.com
Fri Mar 6 06:02:19 PST 2026
As promised in my recent patch series fixing a couple of urgent bugs in
user_mem_abort() [1], here is the actual refactoring to finally clean up this
monolith.
If you look through the Fixes: history of user_mem_abort(), you will start to
see a very clear pattern of whack-a-mole caused by the sheer size and
complexity of the function. For example:
- We keep leaking struct page references on early error returns because the
cleanup logic is hard to track (e.g., 5f9466b50c1b and the atomic fault leak
I just fixed in the previous series).
- We have had uninitialized memcache pointers (157dbc4a321f) because the
initialization flow jumps around unpredictably.
- We have had subtle TOCTOU and locking boundary bugs (like 13ec9308a857 and
f587661f21eb) because we drop the mmap_read_lock midway through the function
but leave the vma pointer and mmu_seq floating around in the same lexical
scope, tempting people to use them.
The bulk of the work is in the first 6 patches, which perform a strict,
no-logic-change structural refactoring of user_mem_abort() into a clean,
sequential dispatcher.
We introduce a state object, struct kvm_s2_fault, which encapsulates
both the input parameters and the intermediate state. Then,
user_mem_abort() is broken down into focused, standalone helpers:
- kvm_s2_resolve_vma_size(): Determines the VMA shift and page size.
- kvm_s2_fault_pin_pfn(): Handles faulting in the physical page.
- kvm_s2_fault_get_vma_info(): A tightly-scoped sub-helper that isolates the
mmap_read_lock, VMA lookup, and metadata snapshotting.
- kvm_s2_fault_compute_prot(): Computes stage-2 protections and evaluates
permission/execution constraints.
- kvm_s2_fault_map(): Manages the KVM MMU lock, mmu_seq retry loops, MTE, and
the final stage-2 mapping.
This structural change makes the "danger zone" foolproof. By isolating
the mmap_read_lock region inside a tightly-scoped sub-helper
(kvm_s2_fault_get_vma_info), the vma pointer is confined: the helper
snapshots the required metadata into the kvm_s2_fault structure before
dropping the lock. Because the pointer's scope ends when the sub-helper
returns, accessing a stale VMA in the mapping phase is impossible by
design.
The remaining patches are localized cleanup patches. With the logic
finally extracted into digestible helpers, they take the opportunity to
streamline struct initialization, drop redundant struct variables,
simplify nested math, and hoist validation checks (like MTE) out of the
lock-heavy mapping phase.
I think there are still more opportunities to tidy things up, but I'll
stop here to see what you think.
Based on Linux 7.0-rc2 and my previous fixes series [1].
[1] https://lore.kernel.org/all/20260304162222.836152-1-tabba@google.com/
Cheers,
/fuad
Fuad Tabba (13):
KVM: arm64: Extract VMA size resolution in user_mem_abort()
KVM: arm64: Introduce struct kvm_s2_fault to user_mem_abort()
KVM: arm64: Extract PFN resolution in user_mem_abort()
KVM: arm64: Isolate mmap_read_lock inside new
kvm_s2_fault_get_vma_info() helper
KVM: arm64: Extract stage-2 permission logic in user_mem_abort()
KVM: arm64: Extract page table mapping in user_mem_abort()
KVM: arm64: Simplify nested VMA shift calculation
KVM: arm64: Remove redundant state variables from struct kvm_s2_fault
KVM: arm64: Simplify return logic in user_mem_abort()
KVM: arm64: Initialize struct kvm_s2_fault completely at declaration
KVM: arm64: Optimize early exit checks in kvm_s2_fault_pin_pfn()
KVM: arm64: Hoist MTE validation check out of MMU lock path
KVM: arm64: Clean up control flow in kvm_s2_fault_map()
arch/arm64/kvm/mmu.c | 379 +++++++++++++++++++++++++------------------
1 file changed, 224 insertions(+), 155 deletions(-)
base-commit: f9985be5e1985930c2d2cf2752e36bb145b3ff7c
--
2.53.0.473.g4a7958ca14-goog