[PATCH 2/3] KVM: arm64: Handle S1PTW translation with TCR_HA set as a write
Ricardo Koller
ricarkol at google.com
Wed Dec 21 08:46:06 PST 2022
Hello,
On Tue, Dec 20, 2022 at 08:09:22PM +0000, Marc Zyngier wrote:
> As a minor optimisation, we can retrofit the "S1PTW is a write
> even on translation fault" concept *if* the vcpu is using the
> HW-managed Access Flag, as setting TCR_EL1.HA is guaranteed
> to result in an update of the PTE.
>
> However, we cannot do the same thing for DB, as it would require
> us to parse the PTs to find out if the DBM bit is set there.
> This is not going to happen.
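Just to spell out the reasoning for my own benefit (rough illustration
only, not the actual walker behaviour; PTE_AF here stands in for the
stage-1 descriptor's AF bit):

	/*
	 * With TCR_EL1.HA == 1, a stage-1 walk that finds a valid
	 * descriptor with AF clear must update it, conceptually:
	 */
	desc = READ_ONCE(*ptep);
	if (!(desc & PTE_AF))
		WRITE_ONCE(*ptep, desc | PTE_AF);	/* write to the PT page */

so any S1PTW from such a guest can safely be treated as a write to the
page containing the PTs. The DBM state lives in the descriptors
themselves, which is why the same can't be done for DB without walking
the guest's tables.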
>
> Signed-off-by: Marc Zyngier <maz at kernel.org>
> ---
> arch/arm64/include/asm/kvm_emulate.h | 20 +++++++++++++++++++-
> 1 file changed, 19 insertions(+), 1 deletion(-)
>
> diff --git a/arch/arm64/include/asm/kvm_emulate.h b/arch/arm64/include/asm/kvm_emulate.h
> index fd6ad8b21f85..4ee467065042 100644
> --- a/arch/arm64/include/asm/kvm_emulate.h
> +++ b/arch/arm64/include/asm/kvm_emulate.h
> @@ -374,6 +374,9 @@ static __always_inline int kvm_vcpu_sys_get_rt(struct kvm_vcpu *vcpu)
> static inline bool kvm_is_write_fault(struct kvm_vcpu *vcpu)
> {
> if (kvm_vcpu_abt_iss1tw(vcpu)) {
> + unsigned int afdb;
> + u64 mmfr1;
> +
> /*
> * Only a permission fault on a S1PTW should be
> * considered as a write. Otherwise, page tables baked
> @@ -385,12 +388,27 @@ static inline bool kvm_is_write_fault(struct kvm_vcpu *vcpu)
> * to map the page containing the PT (read only at
> * first), then a permission fault to allow the flags
> * to be set.
> + *
> + * We can improve things if the guest uses AF, as this
> + * is guaranteed to result in a write to the PTE. For
> + * DB, however, we'd need to parse the guest's PTs,
> + * and that's not on. DB is crap anyway.
> */
> switch (kvm_vcpu_trap_get_fault_type(vcpu)) {
Nit: elsewhere, fault_status is calculated once when taking the fault and
then passed around to all its users (like user_mem_abort()). Not sure if
that's to avoid the extra cycles needed to recompute it, or just style.
Either way, maybe the same convention applies here.
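In case it helps, something along these lines is what I had in mind
(untested sketch; the fault_status parameter is just borrowed from how
kvm_handle_guest_abort()/user_mem_abort() already pass it around, and
the signature change may well not be worth it given how many callers
kvm_is_write_fault() has):

	/* The abort path already computes this once: */
	fault_status = kvm_vcpu_trap_get_fault_type(vcpu);
	...
	write_fault = kvm_is_write_fault(vcpu, fault_status);

	static inline bool kvm_is_write_fault(struct kvm_vcpu *vcpu,
					      unsigned long fault_status)
	{
		if (kvm_vcpu_abt_iss1tw(vcpu)) {
			...
			switch (fault_status) {
			case ESR_ELx_FSC_PERM:
				return true;
			...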
> case ESR_ELx_FSC_PERM:
> return true;
> default:
> - return false;
> + /* Can't introspect TCR_EL1 with pKVM */
> + if (kvm_vm_is_protected(vcpu->kvm))
> + return false;
> +
> + mmfr1 = read_sanitised_ftr_reg(SYS_ID_AA64MMFR1_EL1);
> + afdb = cpuid_feature_extract_unsigned_field(mmfr1, ID_AA64MMFR1_EL1_HAFDBS_SHIFT);
> +
> + if (afdb == ID_AA64MMFR1_EL1_HAFDBS_NI)
> + return false;
> +
> + return (vcpu_read_sys_reg(vcpu, TCR_EL1) & TCR_HA);
Also tested this specific case using page_fault_test, with the PT page
marked for dirty logging, both with and without AF. In both cases there's
a single _FSC_FAULT (no PERM_FAULT) as expected, and the PT page is marked
dirty in the AF case. The RO and UFFD cases also work as expected.
I need to send some changes for page_fault_test, as many tests assume that
any S1PTW is always a PT write and are now failing. I also need to add some
new tests for PTs in RO memslots (they didn't make much sense before this
change).
> }
> }
>
> --
> 2.34.1
>
Reviewed-by: Ricardo Koller <ricarkol at google.com>
Thanks,
Ricardo