[PATCH 1/9] KVM: arm64: Handle huge mappings for np-guest CMOs
Vincent Donnefort
vdonnefort at google.com
Mon Mar 3 01:08:50 PST 2025
On Fri, Feb 28, 2025 at 06:54:40PM +0000, Quentin Perret wrote:
> On Friday 28 Feb 2025 at 10:25:17 (+0000), Vincent Donnefort wrote:
> > diff --git a/arch/arm64/kvm/hyp/nvhe/mem_protect.c b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
> > index 19c3c631708c..a796e257c41f 100644
> > --- a/arch/arm64/kvm/hyp/nvhe/mem_protect.c
> > +++ b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
> > @@ -219,14 +219,24 @@ static void guest_s2_put_page(void *addr)
> >
> > static void clean_dcache_guest_page(void *va, size_t size)
> > {
> > - __clean_dcache_guest_page(hyp_fixmap_map(__hyp_pa(va)), size);
> > - hyp_fixmap_unmap();
> > + while (size) {
>
> Nit: not a problem at the moment, but this makes me mildly worried if
> size ever became non-page-aligned, could we make the code robust to
> that?
The fixmap doesn't handle !ALIGNED addresses. (I have a patch in the tracing
series to cover that, though.) So I wonder if it really makes sense to handle an
unaligned size when it wouldn't work with an unaligned va anyway?
Perhaps just a WARN_ON() then?
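For illustration, the guard suggested above could look something like this. This is a
minimal userspace sketch, not the actual pKVM code: do_clean_page() stands in for the
hyp_fixmap_map() / __clean_dcache_guest_page() / hyp_fixmap_unmap() sequence, and
WARN_ON()/IS_ALIGNED() are local stand-ins for the kernel macros.

```c
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

#define PAGE_SIZE 4096UL

/* Stand-ins for the kernel's WARN_ON() and IS_ALIGNED() helpers. */
#define WARN_ON(cond) \
	((cond) ? (fprintf(stderr, "WARN_ON: %s\n", #cond), 1) : 0)
#define IS_ALIGNED(x, a) (((x) & ((a) - 1)) == 0)

/* Counts pages "cleaned"; stands in for the fixmap map/CMO/unmap sequence. */
static unsigned long pages_cleaned;

static void do_clean_page(uintptr_t va)
{
	pages_cleaned++;
}

static void clean_dcache_guest_page(uintptr_t va, size_t size)
{
	/* The fixmap only maps page-aligned addresses, so warn and bail out. */
	if (WARN_ON(!IS_ALIGNED(va, PAGE_SIZE) || !IS_ALIGNED(size, PAGE_SIZE)))
		return;

	while (size) {
		do_clean_page(va);
		va += PAGE_SIZE;
		size -= PAGE_SIZE;
	}
}
```

The same check would apply to invalidate_icache_guest_page(), since both loops
share the page-at-a-time fixmap walk.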
>
> > + __clean_dcache_guest_page(hyp_fixmap_map(__hyp_pa(va)),
> > + PAGE_SIZE);
> > + hyp_fixmap_unmap();
> > + va += PAGE_SIZE;
> > + size -= PAGE_SIZE;
> > + }
> > }
> >
> > static void invalidate_icache_guest_page(void *va, size_t size)
> > {
> > - __invalidate_icache_guest_page(hyp_fixmap_map(__hyp_pa(va)), size);
> > - hyp_fixmap_unmap();
> > + while (size) {
> > + __invalidate_icache_guest_page(hyp_fixmap_map(__hyp_pa(va)),
> > + PAGE_SIZE);
> > + hyp_fixmap_unmap();
> > + va += PAGE_SIZE;
> > + size -= PAGE_SIZE;
> > + }
> > }
> >
> > int kvm_guest_prepare_stage2(struct pkvm_hyp_vm *vm, void *pgd)
> > --
> > 2.48.1.711.g2feabab25a-goog
> >