[PATCH v6 02/25] KVM: arm64: Save ID registers' sanitized value per guest
Reiji Watanabe
reijiw at google.com
Fri Mar 25 19:35:39 PDT 2022
Hi Oliver,
> > > > + */
> > > > +#define KVM_ARM_ID_REG_MAX_NUM 64
> > > > +#define IDREG_IDX(id) ((sys_reg_CRm(id) << 3) | sys_reg_Op2(id))
> > > > +
> > > >  struct kvm_arch {
> > > >  	struct kvm_s2_mmu mmu;
> > > > @@ -137,6 +144,9 @@ struct kvm_arch {
> > > >  	/* Memory Tagging Extension enabled for the guest */
> > > >  	bool mte_enabled;
> > > >  	bool ran_once;
> > > > +
> > > > +	/* ID registers for the guest. */
> > > > +	u64 id_regs[KVM_ARM_ID_REG_MAX_NUM];
> > >
> > > This is a decently large array. Should we embed it in kvm_arch or
> > > allocate at init?
> >
> >
> > What is the reason why you think you might want to allocate it at init?
>
> Well, it's a 512-byte array of mostly cold data. We're probably
> convinced that the guest is going to access these registers at most once
> per vCPU at boot.
>
> For the vCPU context at least, we only allocate space for registers we
> actually care about (enum vcpu_sysreg). My impression of the feature
> register ranges is that there are a lot of registers which are RAZ, so I
> don't believe we need to make room for uninteresting values.
>
> Additionally, struct kvm is visible to EL2 if running nVHE. I
> don't believe hyp will ever need to look at these register values.
Since saving/restoring breakpoint/watchpoint registers for guests
might need special handling when AA64DFR0_EL1.BRPs is changed, the
next version of the series will use the data in the array from
nVHE EL2. The data is cold, and almost half of the entries
will be RAZ at the moment, though :)
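
To illustrate what I have in mind (just a rough sketch for this mail;
the helper name and its exact placement are made up, not what the next
version will literally add), the nVHE EL2 code could derive the guest's
breakpoint count from the per-VM copy of ID_AA64DFR0_EL1 rather than
from the host's sanitized value:

static unsigned int guest_num_brps(const struct kvm *kvm)
{
	u64 dfr0 = kvm->arch.id_regs[IDREG_IDX(SYS_ID_AA64DFR0_EL1)];

	/* ID_AA64DFR0_EL1.BRPs (bits [15:12]) is "breakpoints minus 1". */
	return ((dfr0 >> 12) & 0xf) + 1;
}

The breakpoint/watchpoint save/restore could then be sized from that
instead of from the host's value.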
Thanks,
Reiji