[PATCH v2 14/24] KVM: arm64: Add pcpu fixmap infrastructure at EL2
Quentin Perret
qperret at google.com
Tue Jul 19 07:10:02 PDT 2022
On Tue, Jul 19, 2022 at 4:09 PM Quentin Perret <qperret at google.com> wrote:
>
> On Tue, Jul 19, 2022 at 3:30 PM Vincent Donnefort <vdonnefort at google.com> wrote:
> >
> > > static struct hyp_pool host_s2_pool;
> > > diff --git a/arch/arm64/kvm/hyp/nvhe/mm.c b/arch/arm64/kvm/hyp/nvhe/mm.c
> > > index d3a3b47181de..17d689483ec4 100644
> > > --- a/arch/arm64/kvm/hyp/nvhe/mm.c
> > > +++ b/arch/arm64/kvm/hyp/nvhe/mm.c
> > > @@ -14,6 +14,7 @@
> > > #include <nvhe/early_alloc.h>
> > > #include <nvhe/gfp.h>
> > > #include <nvhe/memory.h>
> > > +#include <nvhe/mem_protect.h>
> > > #include <nvhe/mm.h>
> > > #include <nvhe/spinlock.h>
> > >
> > > @@ -24,6 +25,7 @@ struct memblock_region hyp_memory[HYP_MEMBLOCK_REGIONS];
> > > unsigned int hyp_memblock_nr;
> > >
> > > static u64 __io_map_base;
> > > +static DEFINE_PER_CPU(void *, hyp_fixmap_base);
> > >
> > > static int __pkvm_create_mappings(unsigned long start, unsigned long size,
> > > unsigned long phys, enum kvm_pgtable_prot prot)
> > > @@ -212,6 +214,76 @@ int hyp_map_vectors(void)
> > > return 0;
> > > }
> > >
> > > +void *hyp_fixmap_map(phys_addr_t phys)
> > > +{
> > > + void *addr = *this_cpu_ptr(&hyp_fixmap_base);
> > > + int ret = kvm_pgtable_hyp_map(&pkvm_pgtable, (u64)addr, PAGE_SIZE,
> > > + phys, PAGE_HYP);
> > > + return ret ? NULL : addr;
> > > +}
> > > +
> > > +int hyp_fixmap_unmap(void)
> > > +{
> > > + void *addr = *this_cpu_ptr(&hyp_fixmap_base);
> > > + int ret = kvm_pgtable_hyp_unmap(&pkvm_pgtable, (u64)addr, PAGE_SIZE);
> > > +
> > > + return (ret != PAGE_SIZE) ? -EINVAL : 0;
> > > +}
> > > +
> >
> > I probably missed something, but as the pagetable pages for this mapping are
> > pinned, it seems impossible (currently) for this call to fail. Maybe a WARN_ON
> > would be more appropriate, especially since the callers in the subsequent
> > patches do not seem to check this function's return value?
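
Just to illustrate the suggestion (untested sketch, keeping the names from the
patch above), the unmap path would then boil down to something like:

void hyp_fixmap_unmap(void)
{
	void *addr = *this_cpu_ptr(&hyp_fixmap_base);

	/*
	 * The pagetable pages backing the fixmap are pinned, so this
	 * cannot fail barring a bug -- just scream if it ever does.
	 */
	WARN_ON(kvm_pgtable_hyp_unmap(&pkvm_pgtable, (u64)addr, PAGE_SIZE) !=
		PAGE_SIZE);
}
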
>
> Right, I think that wouldn't hurt. And while looking at this, I
> actually think we could get rid of these calls to the map/unmap
> functions entirely by keeping, in the percpu table, pointers to the
> reserved PTEs directly, in addition to the VAs they correspond to.
> That way we could manipulate the PTEs directly and avoid unnecessary
> pgtable walks. Bits [63:1] can probably remain untouched,
Well, the address bits obviously need to change too, but the rest can stay.
> and {un}mapping is then only a matter of flipping bit 0 in the PTE
> (and TLBI on the unmap path). I'll have a go at it.
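
Something along these lines, maybe -- very much a sketch: the struct/field
names (hyp_fixmap_slot, fixmap_slots) and the OA mask are placeholders, and
the exact barrier/TLBI sequence would need double checking:

struct hyp_fixmap_slot {
	u64 addr;		/* per-cpu fixmap VA */
	kvm_pte_t *ptep;	/* reserved leaf PTE for that VA */
};
static DEFINE_PER_CPU(struct hyp_fixmap_slot, fixmap_slots);

void *hyp_fixmap_map(phys_addr_t phys)
{
	struct hyp_fixmap_slot *slot = this_cpu_ptr(&fixmap_slots);
	kvm_pte_t pte = *slot->ptep;

	/*
	 * Attributes are set up once at init time; only swap the OA
	 * bits and set the valid bit. GENMASK_ULL(47, 12) stands in
	 * for whatever OA mask helper we end up using.
	 */
	pte &= ~GENMASK_ULL(47, 12);
	pte |= (phys & GENMASK_ULL(47, 12)) | KVM_PTE_VALID;
	WRITE_ONCE(*slot->ptep, pte);
	dsb(ishst);

	return (void *)slot->addr;
}

void hyp_fixmap_unmap(void)
{
	struct hyp_fixmap_slot *slot = this_cpu_ptr(&fixmap_slots);

	/* Flip bit 0 back and invalidate the stale translation. */
	WRITE_ONCE(*slot->ptep, *slot->ptep & ~KVM_PTE_VALID);
	dsb(ishst);
	__tlbi(vale2is, __TLBI_VADDR(slot->addr, 0));
	dsb(ish);
	isb();
}

No pgtable walk on either path, which is the whole point.
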
>
> Cheers,
> Quentin