[PATCH v4 00/61] arm64: Add support for LPA2 at stage1 and WXN

Marc Zyngier maz at kernel.org
Mon Oct 23 11:54:10 PDT 2023


On Mon, 23 Oct 2023 18:45:25 +0100,
Catalin Marinas <catalin.marinas at arm.com> wrote:
> 
> On Tue, 12 Sep 2023 14:15:50 +0000, Ard Biesheuvel wrote:
> > This is a followup to [0], which was sent out more than 6 months ago.
> > Thanks to Ryan and Mark for feedback and review. This series is
> > independent from Ryan's work on adding support for LPA2 to KVM - the
> > only potential source of conflict should be the patch "arm64: kvm: Limit
> > HYP VA and host S2 range to 48 bits when LPA2 is in effect", which could
> > simply be dropped in favour of the KVM changes to make it support LPA2.
> > 
> > [...]
> 
> I pushed the series to the arm64 for-next/lpa2-stage1 branch. If
> something falls apart badly in -next (other than the typical conflicts),
> I can drop the series before the upcoming merge window.
> 
> There are a couple of patches touching KVM, it would be good to get an
> ack from Marc or Oliver (I'll rebase the branch if you do, but no worries
> if you don't get around to it). I think Ard's C++-style comments will
> disappear with Ryan's LPA2 support for stage 2 (whenever that will get
> merged).
> 
> https://lore.kernel.org/r/20230912141549.278777-119-ardb@google.com
> https://lore.kernel.org/r/20230912141549.278777-120-ardb@google.com
> 
> Talking of KVM, we'll get a conflict in -next (depending on which
> branch sfr picks first, the polarity may differ). Here is my
> resolution from merging Ard's patches into -next:
> 
> diff --cc arch/arm64/kvm/hyp/nvhe/mem_protect.c
> index 8d0a5834e883,c20b08cf1f03..34c17ec521c7
> --- a/arch/arm64/kvm/hyp/nvhe/mem_protect.c
> +++ b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
> @@@ -128,9 -128,11 +128,11 @@@ static void prepare_host_vtcr(void
>   	/* The host stage 2 is id-mapped, so use parange for T0SZ */
>   	parange = kvm_get_parange(id_aa64mmfr0_el1_sys_val);
>   	phys_shift = id_aa64mmfr0_parange_to_phys_shift(parange);
> + 	if (IS_ENABLED(CONFIG_ARM64_LPA2) && phys_shift > 48)
> + 		phys_shift = 48; // not implemented yet
>   
>  -	host_mmu.arch.vtcr = kvm_get_vtcr(id_aa64mmfr0_el1_sys_val,
>  -					  id_aa64mmfr1_el1_sys_val, phys_shift);
>  +	host_mmu.arch.mmu.vtcr = kvm_get_vtcr(id_aa64mmfr0_el1_sys_val,
>  +					      id_aa64mmfr1_el1_sys_val, phys_shift);
>   }
>   
>   static bool host_stage2_force_pte_cb(u64 addr, u64 end, enum kvm_pgtable_prot prot);
> 
> So Marc, Oliver, if you want to avoid this, you could merge the
> lpa2-stage1 branch into the KVM tree once I freeze it.

Yeah, that's probably best (though this looks pretty minor). I'll let
Oliver decide on it, as he's in charge this time around.

Thanks,

	M.

-- 
Without deviation from the norm, progress is not possible.


