[PATCH v7 30/45] arm64: RME: Always use 4k pages for realms
Gavin Shan
gshan at redhat.com
Mon Mar 3 22:23:52 PST 2025
On 2/14/25 2:14 AM, Steven Price wrote:
> Always split up huge pages to avoid problems managing huge pages. There
> are two issues currently:
>
> 1. The uABI for the VMM allows populating memory on 4k boundaries even
> if the underlying allocator (e.g. hugetlbfs) is using a larger page
> size. Using a memfd for private allocations will push this issue onto
> the VMM as it will need to respect the granularity of the allocator.
>
> 2. The guest is able to request arbitrary ranges to be remapped as
> shared. Again with a memfd approach it will be up to the VMM to deal
> with the complexity and either overmap (need the huge mapping and add
> an additional 'overlapping' shared mapping) or reject the request as
> invalid due to the use of a huge page allocator.
>
> For now just break everything down to 4k pages in the RMM controlled
> stage 2.
>
> Signed-off-by: Steven Price <steven.price at arm.com>
> ---
> arch/arm64/kvm/mmu.c | 4 ++++
> 1 file changed, 4 insertions(+)
>
The change log is a bit confusing to me. Currently there are three classes of stage-2
faults, each handled by its corresponding handler, as below.
  stage2 fault in the private space: private_memslot_fault()
  stage2 fault in the MMIO space:    io_mem_abort()
  stage2 fault in the shared space:  user_mem_abort()
Only a stage-2 fault in the private space needs to allocate 4KB pages from guest-memfd.
This patch changes user_mem_abort(), which handles stage-2 faults in the shared space,
where guest-memfd isn't involved. The only intersection between the private and shared
spaces is the stage-2 page table, so my guess is that 4KB pages are enforced here because
that stage-2 page table is shared by the private and shared spaces.
What I understand from the change log is that this is something to be improved in future,
since guest-memfd can currently support only 4KB pages. Please correct me if I'm wrong.
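
For reference, below is a rough sketch of how I read the dispatch between the three
handlers. The wrapper name and the exact parameter lists are invented for illustration
(the real logic lives in kvm_handle_guest_abort(), and private_memslot_fault() is added
earlier in this series), so please take it as a sketch rather than the actual code:

  /*
   * Sketch only: dispatch_stage2_fault() and its parameters are made up
   * to illustrate the three fault classes described above.
   */
  static int dispatch_stage2_fault(struct kvm_vcpu *vcpu,
                                   phys_addr_t fault_ipa,
                                   struct kvm_memory_slot *memslot,
                                   unsigned long hva, bool write_fault)
  {
          if (kvm_slot_can_be_private(memslot))           /* private space */
                  return private_memslot_fault(vcpu, fault_ipa, memslot);

          if (kvm_is_error_hva(hva))                      /* MMIO space */
                  return io_mem_abort(vcpu, fault_ipa);

          /* shared space: the function this patch modifies */
          return user_mem_abort(vcpu, fault_ipa, memslot, hva, write_fault);
  }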
> diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
> index 994e71cfb358..8c656a0ef4e9 100644
> --- a/arch/arm64/kvm/mmu.c
> +++ b/arch/arm64/kvm/mmu.c
> @@ -1641,6 +1641,10 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
> if (logging_active || is_protected_kvm_enabled()) {
> force_pte = true;
> vma_shift = PAGE_SHIFT;
> + } else if (vcpu_is_rec(vcpu)) {
> + // Force PTE level mappings for realms
> + force_pte = true;
> + vma_shift = PAGE_SHIFT;
> } else {
> vma_shift = get_vma_page_shift(vma, hva);
> }
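By the way, since the new branch has exactly the same effect as the logging/pKVM branch
above it, the check could be folded into the existing condition, e.g. (untested, just to
show that only the predicate differs):

          if (logging_active || is_protected_kvm_enabled() ||
              vcpu_is_rec(vcpu)) {
                  /* PTE level (4KB) mappings for logging, pKVM and realms */
                  force_pte = true;
                  vma_shift = PAGE_SHIFT;
          } else {
                  vma_shift = get_vma_page_shift(vma, hva);
          }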
Thanks,
Gavin