[PATCH v2] KVM: arm64: Remove size-order align in the nVHE hyp private VA range
Vincent Donnefort
vdonnefort at google.com
Mon Aug 14 00:40:01 PDT 2023
[...]
> > +int create_hyp_stack(phys_addr_t phys_addr, unsigned long *haddr)
> > +{
> > + unsigned long base;
> > + size_t size;
> > + int ret;
> > +
> > + mutex_lock(&kvm_hyp_pgd_mutex);
> > + /*
> > + * Efficient stack verification using the PAGE_SHIFT bit implies
> > + * an alignment of our allocation on the order of the size.
> > + */
> > + size = PAGE_SIZE * 2;
> > + base = ALIGN_DOWN(io_map_base - size, size);
> > +
> > + ret = __hyp_alloc_private_va_range(base);
> > +
> > + mutex_unlock(&kvm_hyp_pgd_mutex);
> > +
> > + if (ret) {
> > + kvm_err("Cannot allocate hyp stack guard page\n");
> > + return ret;
> > + }
> > +
> > + /*
> > + * Since the stack grows downwards, map the stack to the page
> > + * at the higher address and leave the lower guard page
> > + * unbacked.
> > + *
> > + * Any valid stack address now has the PAGE_SHIFT bit as 1
> > + * and addresses corresponding to the guard page have the
> > + * PAGE_SHIFT bit as 0 - this is used for overflow detection.
> > + */
> > + ret = __create_hyp_mappings(base + PAGE_SIZE, PAGE_SIZE, phys_addr,
> > + PAGE_HYP);
> > + if (ret)
> > + kvm_err("Cannot map hyp stack\n");
>
> Should we reset the io_map_base if the mapping failed here as well?
I left it out on purpose. At that point I'm releasing the lock, and I didn't want
to add a non-locked __create_hyp_mappings() variant just for that reset, which
probably isn't bringing much.
>
> Otherwise lgtm, Reviewed-by: Kalesh Singh <kaleshsingh at google.com>
Thanks for the review!
>
> Thanks,
> Kalesh
>
> > +
> > + *haddr = base + size;
> > +
> > + return ret;
> > +}
> > +
> > /**
> > * create_hyp_io_mappings - Map IO into both kernel and HYP
> > * @phys_addr: The physical start address which gets mapped
> >
> > base-commit: 52a93d39b17dc7eb98b6aa3edb93943248e03b2f
> > --
> > 2.41.0.640.ga95def55d0-goog
> >