[PATCH v4 02/21] KVM: arm64: Add stand-alone page-table walker infrastructure

Will Deacon will at kernel.org
Thu Sep 10 08:37:13 EDT 2020


On Wed, Sep 09, 2020 at 04:29:26PM +0100, Alexandru Elisei wrote:
> On 9/7/20 4:23 PM, Will Deacon wrote:
> > [..]
> > +
> > +int kvm_pgtable_walk(struct kvm_pgtable *pgt, u64 addr, u64 size,
> > +		     struct kvm_pgtable_walker *walker)
> > +{
> > +	struct kvm_pgtable_walk_data walk_data = {
> > +		.pgt	= pgt,
> > +		.addr	= ALIGN_DOWN(addr, PAGE_SIZE),
> > +		.end	= PAGE_ALIGN(walk_data.addr + size),
> > +		.walker	= walker,
> > +	};
> 
> If the caller wants to walk [0x500, 0x1500), for PAGE_SIZE = 0x1000 (4K), the
> function walks the range [0x0, 0x1000). Is that intentional?

Yes, although the caller specifies a base and a *size*, rather than an end
address. As a concrete example, much of the hypervisor stage-1 mapping
is created using PAGE_SIZE mappings of random ELF symbols, which correspond
to arbitrary addresses. In these cases, we really do want to round down the
address and perform a PAGE_SIZE mapping.
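To make the arithmetic concrete for the example above, here is a user-space
sketch (simplified, power-of-two-only stand-ins for the kernel's
ALIGN_DOWN()/PAGE_ALIGN() macros, assuming PAGE_SIZE == 0x1000), not the
kernel code itself:

	#include <stdio.h>

	#define PAGE_SIZE		0x1000UL
	#define ALIGN_DOWN(x, a)	((x) & ~((a) - 1))
	#define PAGE_ALIGN(x)		(((x) + PAGE_SIZE - 1) & ~(PAGE_SIZE - 1))

	int main(void)
	{
		unsigned long addr = 0x500, size = 0x1000;
		unsigned long walk_addr = ALIGN_DOWN(addr, PAGE_SIZE);
		unsigned long walk_end  = PAGE_ALIGN(walk_addr + size);

		/* Prints "walk [0x0, 0x1000)": the single page containing addr. */
		printf("walk [0x%lx, 0x%lx)\n", walk_addr, walk_end);
		return 0;
	}

So for a base of 0x500 and a size of 0x1000, the walker covers [0x0, 0x1000),
which is the range you observed.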

The alternative would be to return an error if the size is not page-aligned,
but I don't think that's really necessary, especially since we accept an
unaligned base.
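For completeness, that alternative would amount to a hypothetical check (not
part of this series) at the top of kvm_pgtable_walk(), along the lines of:

	/* Hypothetical: reject unaligned inputs instead of rounding them. */
	if (!IS_ALIGNED(addr, PAGE_SIZE) || !IS_ALIGNED(size, PAGE_SIZE))
		return -EINVAL;

but that would just push the rounding out to callers such as the hyp stage-1
setup described above.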

Will


