[PATCH 4/6] arm64/kvm: Avoid invalid physical addresses to signal owner updates

Will Deacon <will@kernel.org>
Mon Nov 11 09:27:25 PST 2024


On Mon, Nov 11, 2024 at 09:35:48AM +0100, Ard Biesheuvel wrote:
> From: Ard Biesheuvel <ardb@kernel.org>
> 
> The pKVM stage2 mapping code relies on an invalid physical address to
> signal to the internal API that only the owner_id fields of descriptors
> should be updated. These fields are stored in the high bits of invalid
> descriptors covering memory that has been donated to protected guests,
> and which is therefore unmapped from the host stage-2 page tables.
> 
> Given that these invalid PAs are never stored into the descriptors, it
> is better to rely on an explicit flag, to clarify the API and to avoid
> confusion regarding whether or not the output address of a descriptor
> can ever be invalid to begin with (which is not the case with LPA2).
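
[ Presumably, in the part of the patch not quoted here, the set_owner
  path now sets the new flag rather than passing an invalid PA, along
  the lines of:

	struct stage2_map_data map_data = {
		.phys		= 0,
		.mmu		= pgt->mmu,
		.memcache	= mc,
		.owner_id	= owner_id,
		.force_pte	= true,
		.owner_update	= true,
	};
  ]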
> 
> That removes a dependency on the logic that reasons about the maximum PA
> range, which differs on LPA2 capable CPUs based on whether LPA2 is
> enabled or not, and will be further clarified in subsequent patches.
> 
> Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
> ---
>  arch/arm64/kvm/hyp/pgtable.c | 37 ++++++++------------
>  1 file changed, 14 insertions(+), 23 deletions(-)
> 
> diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
> index b11bcebac908..4bf618b2cba7 100644
> --- a/arch/arm64/kvm/hyp/pgtable.c
> +++ b/arch/arm64/kvm/hyp/pgtable.c
> @@ -35,14 +35,6 @@ static bool kvm_pgtable_walk_skip_cmo(const struct kvm_pgtable_visit_ctx *ctx)
>  	return unlikely(ctx->flags & KVM_PGTABLE_WALK_SKIP_CMO);
>  }
>  
> -static bool kvm_phys_is_valid(u64 phys)
> -{
> -	u64 parange_max = kvm_get_parange_max();
> -	u8 shift = id_aa64mmfr0_parange_to_phys_shift(parange_max);
> -
> -	return phys < BIT(shift);
> -}
> -
>  static bool kvm_block_mapping_supported(const struct kvm_pgtable_visit_ctx *ctx, u64 phys)
>  {
>  	u64 granule = kvm_granule_size(ctx->level);
> @@ -53,7 +45,7 @@ static bool kvm_block_mapping_supported(const struct kvm_pgtable_visit_ctx *ctx,
>  	if (granule > (ctx->end - ctx->addr))
>  		return false;
>  
> -	if (kvm_phys_is_valid(phys) && !IS_ALIGNED(phys, granule))
> +	if (!IS_ALIGNED(phys, granule))
>  		return false;
>  
>  	return IS_ALIGNED(ctx->addr, granule);
> @@ -587,6 +579,9 @@ struct stage2_map_data {
>  
>  	/* Force mappings to page granularity */
>  	bool				force_pte;
> +
> +	/* Walk should update owner_id only */
> +	bool				owner_update;
>  };
>  
>  u64 kvm_get_vtcr(u64 mmfr0, u64 mmfr1, u32 phys_shift)
> @@ -885,18 +880,7 @@ static u64 stage2_map_walker_phys_addr(const struct kvm_pgtable_visit_ctx *ctx,
>  {
>  	u64 phys = data->phys;
>  
> -	/*
> -	 * Stage-2 walks to update ownership data are communicated to the map
> -	 * walker using an invalid PA. Avoid offsetting an already invalid PA,
> -	 * which could overflow and make the address valid again.
> -	 */
> -	if (!kvm_phys_is_valid(phys))
> -		return phys;
> -
> -	/*
> -	 * Otherwise, work out the correct PA based on how far the walk has
> -	 * gotten.
> -	 */
> +	/* Work out the correct PA based on how far the walk has gotten */
>  	return phys + (ctx->addr - ctx->start);
>  }
>  
> @@ -908,6 +892,13 @@ static bool stage2_leaf_mapping_allowed(const struct kvm_pgtable_visit_ctx *ctx,
>  	if (data->force_pte && ctx->level < KVM_PGTABLE_LAST_LEVEL)
>  		return false;
>  
> +	/*
> +	 * Pass a value that is aligned to any block size when updating
> +	 * only the owner_id on invalid mappings.
> +	 */
> +	if (data->owner_update)
> +		phys = 0;
> +

Hmm, isn't this a change in behaviour? Previously we would always store
the owner annotations at the leaf, but now this will take place at a
higher level. I think that probably goes horribly wrong if we later
try to change the owner for a sub-range; the block will be replaced with
a table and the old ownership information will be lost rather than
propagated to the leaves.

In other words, I think we should return false here.
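
i.e. something like the below, keeping the leaf level allowed so that
the annotation itself can still be written:

	/* Never install the owner annotation above the leaf level. */
	if (data->owner_update && ctx->level < KVM_PGTABLE_LAST_LEVEL)
		return false;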

Will


