[PATCH v3 2/3] arm64: cpufeature: Modify address authentication cpufeature to exact
Dave Martin
Dave.Martin at arm.com
Tue Jun 23 10:47:49 EDT 2020
On Tue, Jun 23, 2020 at 06:47:02PM +0530, Amit Daniel Kachhap wrote:
> Hi,
>
> On 6/22/20 8:05 PM, Dave Martin wrote:
> >On Thu, Jun 18, 2020 at 10:40:28AM +0530, Amit Daniel Kachhap wrote:
> >>This patch changes the address authentication cpufeature type from the
> >>earlier LOWER_SAFE to EXACT, as the different configurations added for the
> >>Armv8.6 enhanced PAC have different behaviour and there is no tunable to
> >>enable the lower safe versions.
> >
> >The enhancements add no new instructions at EL0, right? And no
> >behavioural change, provided that userspace doesn't care what signing/
> >authentication algorithm is used?
>
> Yes, you are correct that no new instructions are added.
>
> >
> >If so then I guess you're correct not to add a new hwcap, but I thought
> >it was worth asking.
> >
> >>non-exact secondary cpus but rather marks them as tainted. This patch fixes
> >>it by replacing the generic match handler with a new handler specific to
> >>ptrauth.
> >>
> >>After this change, if the ptrauth configuration of any secondary cpu
> >>differs from that of the boot cpu, the mismatched cpus are parked in an
> >>infinite loop.
> >>
> >>Signed-off-by: Amit Daniel Kachhap <amit.kachhap at arm.com>
> >>[Suzuki: Introduce new matching function for address authentication]
> >>Suggested-by: Suzuki K Poulose <suzuki.poulose at arm.com>
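
(For context on the FTR_EXACT change: when ID register fields are
sanitised across CPUs, the framework picks a per-field "safe" value
based on the field type. A minimal sketch of that selection, paraphrased
from arch/arm64/kernel/cpufeature.c of this era and trimmed to the two
types relevant here:)

static s64 arm64_ftr_safe_value(const struct arm64_ftr_bits *ftrp,
				s64 new, s64 cur)
{
	s64 ret = 0;

	switch (ftrp->type) {
	case FTR_EXACT:
		/* Any mismatch degrades to the safe value (0 for the
		 * APA/API entries in this patch) and taints, rather
		 * than stopping the mismatched CPU -- hence the extra
		 * matcher added below.
		 */
		ret = ftrp->safe_val;
		break;
	case FTR_LOWER_SAFE:
		/* Mismatches degrade to the lower of the two values. */
		ret = new < cur ? new : cur;
		break;
	/* ... FTR_HIGHER_SAFE etc. elided ... */
	default:
		BUG();
	}

	return ret;
}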
> >
> >Does a corresponding patch need to go to stable? As things stand, I
> >think older kernels that support pointer auth will go wrong on v8.7+
> >hardware that has these features.
> >
> >This patch in its current form may be too heavy for stable, though.
> >
> >See comment below.
> >
> >>---
> >>Change since v2:
> >> * Added new matching helper function has_address_auth_cpucap as address
> >> authentication cpufeature is now FTR_EXACT. The existing matching function
> >> has_cpuid_feature is specific to LOWER_SAFE.
> >>
> >> arch/arm64/kernel/cpufeature.c | 56 ++++++++++++++++++++++++++++------
> >> 1 file changed, 47 insertions(+), 9 deletions(-)
> >>
> >>diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
> >>index 4ae41670c2e6..42670d615555 100644
> >>--- a/arch/arm64/kernel/cpufeature.c
> >>+++ b/arch/arm64/kernel/cpufeature.c
> >>@@ -210,9 +210,9 @@ static const struct arm64_ftr_bits ftr_id_aa64isar1[] = {
> >> ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_FCMA_SHIFT, 4, 0),
> >> ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_JSCVT_SHIFT, 4, 0),
> >> ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_PTR_AUTH),
> >>- FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_API_SHIFT, 4, 0),
> >>+ FTR_STRICT, FTR_EXACT, ID_AA64ISAR1_API_SHIFT, 4, 0),
> >> ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_PTR_AUTH),
> >>- FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_APA_SHIFT, 4, 0),
> >>+ FTR_STRICT, FTR_EXACT, ID_AA64ISAR1_APA_SHIFT, 4, 0),
> >> ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_DPB_SHIFT, 4, 0),
> >> ARM64_FTR_END,
> >> };
> >>@@ -1601,11 +1601,49 @@ static void cpu_clear_disr(const struct arm64_cpu_capabilities *__unused)
> >> #endif /* CONFIG_ARM64_RAS_EXTN */
> >> #ifdef CONFIG_ARM64_PTR_AUTH
> >>-static bool has_address_auth(const struct arm64_cpu_capabilities *entry,
> >>- int __unused)
> >>+static bool has_address_auth_cpucap(const struct arm64_cpu_capabilities *entry, int scope)
> >> {
> >>- return __system_matches_cap(ARM64_HAS_ADDRESS_AUTH_ARCH) ||
> >>- __system_matches_cap(ARM64_HAS_ADDRESS_AUTH_IMP_DEF);
> >>+ int local_cpu_auth;
> >>+ static int boot_cpu_auth_arch;
> >>+ static int boot_cpu_auth_imp_def;
> >>+
> >>+ /* We don't expect to be called with SCOPE_SYSTEM */
> >>+ WARN_ON(scope == SCOPE_SYSTEM);
> >>+
> >>+ local_cpu_auth = cpuid_feature_extract_field(__read_sysreg_by_encoding(entry->sys_reg),
> >>+ entry->field_pos, entry->sign);
> >>+ /*
> >>+ * A higher ptr-auth feature level is not compatible with a
> >>+ * lower one. Hence every CPU's level must match that of the
> >>+ * boot CPU: cache the boot CPU's level and compare the
> >>+ * secondary CPUs against it.
> >>+ */
> >
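
(The quoted hunk stops at the comment; the comparison logic itself is
trimmed here. Going by that comment -- cache the boot CPU's level, then
compare each secondary against it -- the remainder plausibly looks
something like the sketch below. The names reuse those in the hunk; the
exact control flow is an assumption, not the literal v3 code:)

	/* Sketch only: on the boot CPU, record the level and report
	 * the capability if it meets the required minimum; on
	 * secondaries, insist on an exact match with the boot CPU.
	 */
	if (entry->capability == ARM64_HAS_ADDRESS_AUTH_ARCH) {
		if (scope & SCOPE_BOOT_CPU) {
			boot_cpu_auth_arch = local_cpu_auth;
			return boot_cpu_auth_arch >= entry->min_field_value;
		}
		return local_cpu_auth == boot_cpu_auth_arch;
	}
	/* ... and likewise for ARM64_HAS_ADDRESS_AUTH_IMP_DEF with
	 * boot_cpu_auth_imp_def ...
	 */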
> >Thinking about this, does it actually matter if different CPUs have
> >different trapping behaviours for ptrauth?
> >
> >If we are guaranteed that the signing algorithm is still compatible
> >across all CPUs, we might get away with it.
>
> You may be right that these protections may not be required in practice.
> But the algorithm of each configuration is different, so in theory the
> same software will behave differently on different cpus.
>
> This code initially only changed FTR_LOWER_SAFE to FTR_EXACT. But many
> issues were identified in the cpufeature framework here [1], so the rest
> of the defensive changes were added.
>
> [1]: https://www.spinics.net/lists/arm-kernel/msg808238.html
> >
> >Possibly a dumb question -- I haven't checked the specs to find out
> >whether this makes any sense or not.
>
> Your point is valid though.
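
(To make the theoretical concern concrete: if two CPUs computed PACs
with different algorithms and the kernel allowed both to boot, a pointer
signed on one CPU could fail authentication after the task migrates. A
hypothetical userspace illustration -- PACIZA/AUTIZA sign and
authenticate with instruction key A and a zero modifier; build with
-march=armv8.3-a:)

#include <stdint.h>

static inline uint64_t pac_sign(uint64_t ptr)
{
	/* Insert a PAC into the pointer's upper bits. */
	asm("paciza %0" : "+r"(ptr));
	return ptr;
}

static inline uint64_t pac_auth(uint64_t ptr)
{
	/* Verify and strip the PAC; a mismatch yields a poisoned
	 * pointer that faults when branched to or dereferenced.
	 */
	asm("autiza %0" : "+r"(ptr));
	return ptr;
}

/*
 * Hypothetical failure mode on mismatched CPUs:
 *   p = pac_sign(fn);     signed while running on cpu0
 *   ...                   task migrates to cpu1, different algorithm
 *   pac_auth(p);          authentication fails -> later fault
 * Parking mismatched secondaries, as this patch does, rules that out.
 */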
OK.
Did you see my comment above about patches for stable?
[...]
Cheers
---Dave