[RFC PATCH v2 16/26] KVM: arm64: Prepare Hyp memory protection
Quentin Perret
qperret at google.com
Mon Feb 22 06:04:16 EST 2021
Hi Sean,
On Friday 19 Feb 2021 at 10:32:58 (-0800), Sean Christopherson wrote:
> On Wed, Feb 03, 2021, Will Deacon wrote:
> > On Fri, Jan 08, 2021 at 12:15:14PM +0000, Quentin Perret wrote:
>
> ...
>
> > > +static inline unsigned long hyp_s1_pgtable_size(void)
> > > +{
>
> ...
>
> > > + res += nr_pages << PAGE_SHIFT;
> > > + }
> > > +
> > > + /* Allow 1 GiB for private mappings */
> > > + nr_pages = (1 << 30) >> PAGE_SHIFT;
> >
> > SZ_1G >> PAGE_SHIFT
>
> Where does the 1gb magic number come from?
Admittedly it is somewhat arbitrary. It needs to be large enough to cover
all the so-called 'private' mappings that EL2 needs, which can vary a
little depending on the hardware.
> IIUC, this is calculating the number
> of pages needed for the hypervisor's Stage-1 page tables.
Correct. The thing worth noting is that the hypervisor VA space is
essentially split in half. One half is reserved to map portions of
memory with a fixed offset, and the other half is used for a whole bunch
of other things: we have a vmemmap, the 'private' mappings and the idmap
page.
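To make the worst case concrete, the calculation boils down to something
like the sketch below (assuming a 4K granule with 512 entries per table
level; the helper name is purely illustrative, the patch's
__hyp_pgtable_max_pages() is the reference):

static unsigned long example_pgtable_max_pages(unsigned long nr_pages)
{
	unsigned long ptrs_per_table = 512;	/* 4K granule */
	unsigned long total = 0;
	int level;

	/*
	 * Each level needs enough table pages to hold one entry per
	 * table page of the level below.
	 */
	for (level = 0; level < 4; level++) {
		nr_pages = (nr_pages + ptrs_per_table - 1) / ptrs_per_table;
		total += nr_pages;
	}

	return total;
}

For the 1 GiB private range that is 262144 leaf pages, so 512 + 1 + 1 + 1
= 515 table pages, i.e. roughly 2 MiB of page-table memory.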
> The amount of memory
> needed for those page tables should be easily calculated
As mentioned above, that is true for pretty much everything in the hyp
VA space except the private mappings, whose size depends on e.g. the CPU
uarch and such.
> and assuming huge pages can be used, should be far less the 1gb.
Ack, though this is not supported for the EL2 mappings yet. Historically
the contiguous portions of memory mapped at EL2 have been rather small,
so there wasn't really a need, but we might want to revisit this at some
point.
> > > + nr_pages = __hyp_pgtable_max_pages(nr_pages);
> > > + res += nr_pages << PAGE_SHIFT;
> > > +
> > > + return res;
>
> ...
>
> > > +void __init kvm_hyp_reserve(void)
> > > +{
> > > + u64 nr_pages, prev;
> > > +
> > > + if (!is_hyp_mode_available() || is_kernel_in_hyp_mode())
> > > + return;
> > > +
> > > + if (kvm_get_mode() != KVM_MODE_PROTECTED)
> > > + return;
> > > +
> > > + if (kvm_nvhe_sym(hyp_memblock_nr) < 0) {
> > > + kvm_err("Failed to register hyp memblocks\n");
> > > + return;
> > > + }
> > > +
> > > + sort_memblock_regions();
> > > +
> > > + /*
> > > + * We don't know the number of possible CPUs yet, so allocate for the
> > > + * worst case.
> > > + */
> > > + hyp_mem_size += NR_CPUS << PAGE_SHIFT;
>
> Is this for per-cpu stack?
Correct.
> If so, what guarantees a single page is sufficient? Mostly a curiosity question,
> since it looks like this is an existing assumption by init_hyp_mode(). Shouldn't
> the required stack size be defined in bytes and converted to pages, or is there a
> guarantee that 64kb pages will be used?
Nope, we have no such guarantee, but 4K has been more than enough for
EL2 so far. The hyp code doesn't use recursion much (I think the only
occurrence we have is Will's pgtable code, and that is architecturally
limited to 4 levels of recursion for obvious reasons) and doesn't make
heavy use of stack allocations.
It's on my todo list to remap the stack pages into the 'private' range
and surround them with guard pages, so we can at least check this
assumption at run time. Stay tuned :)
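Roughly speaking, the idea would be something like the sketch below (the
example_* helpers are made up for illustration, they are not existing EL2
APIs): reserve a private VA window with an unmapped page below each
stack, so an overflow takes a translation fault instead of silently
corrupting whatever sits underneath.

static int example_map_hyp_stack(phys_addr_t stack_pa, unsigned long *stack_top)
{
	unsigned long va;
	int ret;

	/* Two pages of private VA: an unmapped guard page below the stack page. */
	ret = example_alloc_private_va_range(2 * PAGE_SIZE, &va);
	if (ret)
		return ret;

	/* Map only the upper page; the lower one stays unmapped as the guard. */
	ret = example_create_private_mapping(va + PAGE_SIZE, stack_pa,
					     PAGE_SIZE, PAGE_HYP);
	if (ret)
		return ret;

	/* The stack grows downwards from the top of the mapped page. */
	*stack_top = va + 2 * PAGE_SIZE;
	return 0;
}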
> > There was a recent patch bumping NR_CPUs to 512, so this would be 32MB
> > with 64k pages. Is it possible to return memory to the host later on once
> > we have a better handle on the number of CPUs in the system?
>
> Does kvm_hyp_reserve() really need to be called during bootmem_init()? What
> prevents doing the reservation during init_hyp_mode()? If the problem is that
> pKVM needs a single contiguous chunk of memory, then it might be worth solving
> _that_ problem, e.g. letting the host donate memory in N-byte chunks instead of
> requiring a single huge blob of memory.
Right, I've been thinking about this over the weekend and that might
actually be fairly straightforward for stack pages. I'll try to move this
allocation to init_hyp_mode() where it belongs (or better, re-use the
existing one) in the next version.
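For the stack pages specifically, the plan would be to lean on the
per-CPU allocation that init_hyp_mode() already does today, which looks
roughly like this (quoting from memory, so treat it as a sketch of the
existing loop rather than new code):

	for_each_possible_cpu(cpu) {
		unsigned long stack_page;

		stack_page = __get_free_page(GFP_KERNEL);
		if (!stack_page) {
			err = -ENOMEM;
			goto out_err;
		}

		per_cpu(kvm_arm_hyp_stack_page, cpu) = stack_page;
	}

and then donate those pages to EL2 individually rather than carving
NR_CPUS pages out of the big contiguous reservation.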
Thanks,
Quentin