[PATCH 12/20] KVM: arm64: Add RESx_WHEN_E2Hx constraints as configuration flags
Fuad Tabba
tabba at google.com
Thu Jan 29 02:30:56 PST 2026
Hi Marc,
On Thu, 29 Jan 2026 at 10:14, Marc Zyngier <maz at kernel.org> wrote:
>
> Hey Fuad,
>
> On Wed, 28 Jan 2026 17:43:40 +0000,
> Fuad Tabba <tabba at google.com> wrote:
> >
> > Hi Marc,
> >
> > On Mon, 26 Jan 2026 at 12:17, Marc Zyngier <maz at kernel.org> wrote:
> > >
> > > "Thanks" to VHE, SCTLR_EL2 radically changes shape depending on the
> > > value of HCR_EL2.E2H, as a lot of the bits that didn't have much
> > > meaning with E2H=0 start impacting EL0 with E2H=1.
> > >
> > > This has a direct impact on the RESx behaviour of these bits, and
> > > we need a way to express them.
> > >
> > > For this purpose, introduce a set of 4 new constraints that, when
> > > the controlling feature is not present, force the RESx value to
> > > be either 0 or 1 depending on the value of E2H.
> > >
> > > This allows diverging RESx values depending on the value of E2H,
> > > something that is required by a bunch of SCTLR_EL2 bits.
> > >
> > > Signed-off-by: Marc Zyngier <maz at kernel.org>
> > > ---
> > > arch/arm64/kvm/config.c | 24 +++++++++++++++++++++---
> > > 1 file changed, 21 insertions(+), 3 deletions(-)
> > >
> > > diff --git a/arch/arm64/kvm/config.c b/arch/arm64/kvm/config.c
> > > index 1990cebc77c66..7063fffc22799 100644
> > > --- a/arch/arm64/kvm/config.c
> > > +++ b/arch/arm64/kvm/config.c
> > > @@ -26,6 +26,10 @@ struct reg_bits_to_feat_map {
> > > #define MASKS_POINTER BIT(3) /* Pointer to fgt_masks struct instead of bits */
> > > #define AS_RES1 BIT(4) /* RES1 when not supported */
> > > #define REQUIRES_E2H1 BIT(5) /* Add HCR_EL2.E2H RES1 as a pre-condition */
> > > +#define RES0_WHEN_E2H0 BIT(6) /* RES0 when E2H=0 and not supported */
> > > +#define RES0_WHEN_E2H1 BIT(7) /* RES0 when E2H=1 and not supported */
> > > +#define RES1_WHEN_E2H0 BIT(8) /* RES1 when E2H=0 and not supported */
> > > +#define RES1_WHEN_E2H1 BIT(9) /* RES1 when E2H=1 and not supported */
> > >
> > > unsigned long flags;
> > >
> > > @@ -1298,10 +1302,24 @@ struct resx compute_resx_bits(struct kvm *kvm,
> > > match &= !e2h0;
> > >
> > > if (!match) {
> > > + u64 bits = reg_feat_map_bits(&map[i]);
> > > +
> > > + if (e2h0) {
> > > + if (map[i].flags & RES1_WHEN_E2H0)
> > > + resx.res1 |= bits;
> > > + else if (map[i].flags & RES0_WHEN_E2H0)
> > > + resx.res0 |= bits;
> > > + } else {
> > > + if (map[i].flags & RES1_WHEN_E2H1)
> > > + resx.res1 |= bits;
> > > + else if (map[i].flags & RES0_WHEN_E2H1)
> > > + resx.res0 |= bits;
> > > + }
> > > +
> > > if (map[i].flags & AS_RES1)
> > > - resx.res1 |= reg_feat_map_bits(&map[i]);
> > > - else
> > > - resx.res0 |= reg_feat_map_bits(&map[i]);
> > > + resx.res1 |= bits;
> > > + else if (!(resx.res1 & bits))
> > > + resx.res0 |= bits;
> >
> > The logic here feels a bit more complex than necessary, specifically
> > regarding the interaction between the E2H checks and the fallthrough
> > to AS_RES1.
> >
> > Although AS_RES1 and RES0_WHEN_E2H0 are mutually exclusive in
> > practice, the current structure technically permits a scenario where
> > both res0 and res1 get set if the flags are mixed (the e2h0 block sets
> > res0, and the subsequent AS_RES1 check then also sets res1). This cannot
> > be ruled out by looking at this function alone.
> >
> > It might be cleaner (and safer) to determine the res1 first, and
> > then apply the masks. Something like:
> >
> > + bool is_res1 = false;
> > +
> > + if (map[i].flags & AS_RES1)
> > + is_res1 = true;
> > + else if (e2h0)
> > + is_res1 = (map[i].flags & RES1_WHEN_E2H0);
> > + else
> > + is_res1 = (map[i].flags & RES1_WHEN_E2H1);
> > ...
>
> I think you have just put your finger on something that escaped me so
> far. You are totally right that the code as written today is ugly, and
> the trick to work out that we need to account the bits as RES0 is
> awful.
>
> But it additionally highlights something else: since RES0 is an implicit
> property (we don't specify a flag for it), RES0_WHEN_E2Hx could also
> be implicit properties. I couldn't find an example where anything
> would break. This would also avoid the combination with AS_RES1 by
> construction.
>
> >
> > This also brings up a side point: given the visual similarity of these
> > flags, it is quite easy to make a typo and accidentally combine
> > incompatible flags (e.g., AS_RES1 | RESx_WHEN_E2Hx, or RES0_WHEN_E2H0
> > | RES1_WHEN_E2H0), would it be worth adding a check to warn on
> > obviously invalid combinations?
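> >
> > Purely as an illustration (untested, and the helper name is made up),
> > such a check could reject the obviously contradictory combinations
> > wherever the maps are walked, e.g.:
> >
> > static bool feat_map_flags_valid(const struct reg_bits_to_feat_map *m)
> > {
> > 	unsigned long f = m->flags;
> >
> > 	/* AS_RES1 already fixes the value, independently of E2H */
> > 	if ((f & AS_RES1) &&
> > 	    (f & (RES0_WHEN_E2H0 | RES0_WHEN_E2H1 |
> > 		  RES1_WHEN_E2H0 | RES1_WHEN_E2H1)))
> > 		return false;
> >
> > 	/* RES0 and RES1 for the same E2H value contradict each other */
> > 	if ((f & RES0_WHEN_E2H0) && (f & RES1_WHEN_E2H0))
> > 		return false;
> > 	if ((f & RES0_WHEN_E2H1) && (f & RES1_WHEN_E2H1))
> > 		return false;
> >
> > 	return true;
> > }
> >
> > with a WARN_ON_ONCE() at the call site when it returns false.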
> >
> > Or maybe even redefining AS_RES1 to be
> > (RES1_WHEN_E2H1|RES1_WHEN_E2H0), which is what it is conceptually.
> > That could simplify this code even further:
> >
> > + if (e2h0)
> > + is_res1 = (map[i].flags & RES1_WHEN_E2H0);
> > + else
> > + is_res1 = (map[i].flags & RES1_WHEN_E2H1);
>
> While that would work, I think this is a step too far. Eventually, we
> should be able to sanitise things outside of NV, and RES1 should not
> depend on E2H at all in this case.
>
> I ended up with the following hack, completely untested (needs
> renumbering, and the rest of SCTLR_EL2 repainted). Let me know what
> you think.
>
> Thanks,
>
> M.
>
> diff --git a/arch/arm64/kvm/config.c b/arch/arm64/kvm/config.c
> index 562513a4683e2..204e5aeda4d24 100644
> --- a/arch/arm64/kvm/config.c
> +++ b/arch/arm64/kvm/config.c
> @@ -26,8 +26,6 @@ struct reg_bits_to_feat_map {
> #define MASKS_POINTER BIT(3) /* Pointer to fgt_masks struct instead of bits */
> #define AS_RES1 BIT(4) /* RES1 when not supported */
> #define REQUIRES_E2H1 BIT(5) /* Add HCR_EL2.E2H RES1 as a pre-condition */
> -#define RES0_WHEN_E2H0 BIT(6) /* RES0 when E2H=0 and not supported */
> -#define RES0_WHEN_E2H1 BIT(7) /* RES0 when E2H=1 and not supported */
> #define RES1_WHEN_E2H0 BIT(8) /* RES1 when E2H=0 and not supported */
> #define RES1_WHEN_E2H1 BIT(9) /* RES1 when E2H=1 and not supported */
>
> @@ -1375,22 +1373,15 @@ struct resx compute_resx_bits(struct kvm *kvm,
>
> if (!match) {
> u64 bits = reg_feat_map_bits(&map[i]);
> + bool res1;
>
> - if (e2h0) {
> - if (map[i].flags & RES1_WHEN_E2H0)
> - resx.res1 |= bits;
> - else if (map[i].flags & RES0_WHEN_E2H0)
> - resx.res0 |= bits;
> - } else {
> - if (map[i].flags & RES1_WHEN_E2H1)
> - resx.res1 |= bits;
> - else if (map[i].flags & RES0_WHEN_E2H1)
> - resx.res0 |= bits;
> - }
> -
> - if (map[i].flags & AS_RES1)
> + res1 = (map[i].flags & AS_RES1);
> + res1 |= e2h0 && (map[i].flags & RES1_WHEN_E2H0);
> + res1 |= !e2h0 && (map[i].flags & RES1_WHEN_E2H1);
> +
> + if (res1)
> resx.res1 |= bits;
> - else if (!(resx.res1 & bits))
> + else
> resx.res0 |= bits;
> }
> }
LGTM. Treating RES0 as the implicit default simplifies the logic,
makes the result easier to read, and rules out invalid combinations
by construction, which is exactly what we want.
Thanks,
/fuad
>
> --
> Without deviation from the norm, progress is not possible.