[PATCH v4 2/4] arm64: cpufeature: Modify address authentication cpufeature to exact
Amit Kachhap
amit.kachhap at arm.com
Wed Aug 5 05:16:37 EDT 2020
Hi,
On 7/29/20 10:39 PM, Suzuki K Poulose wrote:
> On 07/29/2020 03:31 PM, Dave Martin wrote:
>> On Wed, Jul 29, 2020 at 01:28:06PM +0100, Suzuki K Poulose wrote:
>>> On 07/29/2020 11:36 AM, Dave Martin wrote:
>>>> On Fri, Jul 10, 2020 at 01:30:08PM +0530, Amit Daniel Kachhap wrote:
>>>>> The current address authentication cpufeature levels are set as
>>>>> LOWER_SAFE, which is not compatible with the different configurations
>>>>> added for the Armv8.3 ptrauth enhancements, as the different levels
>>>>> have different behaviour and there is no tunable to enable the lower
>>>>> safe versions. This is rectified by setting those cpufeature types as
>>>>> EXACT.
>>>>>
>>>>> The current cpufeature framework also does not interfere in the
>>>>> booting of non-exact secondary cpus, but rather marks them as
>>>>> tainted. As a workaround, this is fixed by replacing the generic
>>>>> match handler with a new handler specific to ptrauth.
>>>>>
>>>>> After this change, if there is any variation in the ptrauth
>>>>> configuration of a secondary cpu from that of the boot cpu, the
>>>>> mismatched cpu is parked in an infinite loop.
>>>>>
>>>>> Signed-off-by: Amit Daniel Kachhap <amit.kachhap at arm.com>
>>>>> [Suzuki: Introduce new matching function for address authentication]
>>>>> Suggested-by: Suzuki K Poulose <suzuki.poulose at arm.com>
>>>>> ---
>>>>> Changes since v3:
>>>>> * Commit logs cleanup.
>>>>>
>>>>> arch/arm64/kernel/cpufeature.c | 56 ++++++++++++++++++++++++++++------
>>>>> 1 file changed, 47 insertions(+), 9 deletions(-)
>>>>>
>>>>> diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
>>>>> index 9fae0efc80c1..8ac8c18f70c9 100644
>>>>> --- a/arch/arm64/kernel/cpufeature.c
>>>>> +++ b/arch/arm64/kernel/cpufeature.c
>>>>> @@ -210,9 +210,9 @@ static const struct arm64_ftr_bits ftr_id_aa64isar1[] = {
>>>>>  	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_FCMA_SHIFT, 4, 0),
>>>>>  	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_JSCVT_SHIFT, 4, 0),
>>>>>  	ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_PTR_AUTH),
>>>>> -		       FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_API_SHIFT, 4, 0),
>>>>> +		       FTR_STRICT, FTR_EXACT, ID_AA64ISAR1_API_SHIFT, 4, 0),
>>>>>  	ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_PTR_AUTH),
>>>>> -		       FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_APA_SHIFT, 4, 0),
>>>>> +		       FTR_STRICT, FTR_EXACT, ID_AA64ISAR1_APA_SHIFT, 4, 0),
>>>>>  	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_DPB_SHIFT, 4, 0),
>>>>>  	ARM64_FTR_END,
>>>>>  };
>>>>> @@ -1605,11 +1605,49 @@ static void cpu_clear_disr(const struct arm64_cpu_capabilities *__unused)
>>>>>  #endif /* CONFIG_ARM64_RAS_EXTN */
>>>>>  #ifdef CONFIG_ARM64_PTR_AUTH
>>>>> -static bool has_address_auth(const struct arm64_cpu_capabilities *entry,
>>>>> -			     int __unused)
>>>>> +static bool has_address_auth_cpucap(const struct arm64_cpu_capabilities *entry, int scope)
>>>>>  {
>>>>> -	return __system_matches_cap(ARM64_HAS_ADDRESS_AUTH_ARCH) ||
>>>>> -	       __system_matches_cap(ARM64_HAS_ADDRESS_AUTH_IMP_DEF);
>>>>> +	int local_cpu_auth;
>>>>> +	static int boot_cpu_auth_arch;
>>>>> +	static int boot_cpu_auth_imp_def;
>>>>
>>>> I guess having static variables here is probably safe, provided that
>>>> calls to this function are strictly serialised with sufficiently
>>>> heavyweight synchronisation.
>>>>
>>>
>>> These are initialised by the primary boot CPU, and the function is
>>> later called during secondary CPU bring-up, so the calls are
>>> serialised, except when this_cpu_has_cap() could be called. But this
>>> is fine, as that is a read operation.
>>
>> To guarantee that any update to those variables is visible to a booting
>> CPU, we'd probably need a DSB and/or DMB somewhere. For now I don't see
>> anything explicit, though it's likely we're getting them as a side
>> effect of something else.
>>
>> In any case, I guess we've been managing for some time without this
>> being strictly correct.
>>
>
> That is a good point. In fact we need this for init_cpu_features() and
> update_cpu_features() too.
I just scanned for dsb in the boot flow. Is the dsb(ishst) in
update_cpu_boot_status(), called from __cpu_up() (arch/arm64/kernel/smp.c),
sufficient to make this information visible to the booting cpus?
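For reference, the helper I mean looks roughly like this (paraphrased from
arch/arm64/include/asm/smp.h, so please double-check against the tree):

static inline void update_cpu_boot_status(int val)
{
	/* Publish the boot status for the secondary CPU ... */
	WRITE_ONCE(secondary_data.status, val);
	/* ... and ensure the visibility of the update */
	dsb(ishst);
}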
>
>
>>
>>>> Might be worth a comment.
>>>>
>>>> Coming up with a cleaner approach using locks or atomics might be
>>>> overkill, but other folks might take a stronger view on this.
>>>>
>>>>> +
>>>>> +	/* We don't expect to be called with SCOPE_SYSTEM */
>>>>> +	WARN_ON(scope == SCOPE_SYSTEM);
>>>>> +
>>>>> +	local_cpu_auth = cpuid_feature_extract_field(__read_sysreg_by_encoding(entry->sys_reg),
>>>>> +						     entry->field_pos, entry->sign);
>>>>> +	/*
>>>>> +	 * The ptr-auth feature levels are not intercompatible with
>>>>> +	 * lower levels. Hence we must match all the CPUs with that
>>>>> +	 * of the boot CPU. So cache the level of boot CPU and compare
>>>>> +	 * it against the secondary CPUs.
>>>>> +	 */
>>>>> +	if (scope & SCOPE_BOOT_CPU) {
>>>>> +		if (entry->capability == ARM64_HAS_ADDRESS_AUTH_IMP_DEF) {
>>>>> +			boot_cpu_auth_imp_def = local_cpu_auth;
>>
>> Actually, can we make use of read_sanitised_ftr_reg() here rather than
>> having to cache the values in static variables?
>
> That's actually a good idea! However, the only catch is if there is a
> case where the values become compatible, e.g. APA >= 3 all being
> compatible, in which case the sanitised value would indicate 0.
> If we decide not to support such cases, then we could go ahead with the
> sanitised_ftr value.
This is a clean method. I will use it in my next v5 version; a rough
sketch of what I have in mind is below.
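A minimal sketch, assuming we key everything off read_sanitised_ftr_reg()
(exact details may still change in v5):

static bool has_address_auth_cpucap(const struct arm64_cpu_capabilities *entry,
				    int scope)
{
	int boot_val, sec_val;

	/* We don't expect to be called with SCOPE_SYSTEM */
	WARN_ON(scope == SCOPE_SYSTEM);
	/*
	 * Sketch: the ptr-auth feature levels are not intercompatible
	 * with lower levels, so every CPU must match the boot CPU
	 * exactly. The boot CPU's level is read back from the sanitised
	 * feature register, which avoids caching it in static variables.
	 */
	boot_val = cpuid_feature_extract_field(read_sanitised_ftr_reg(entry->sys_reg),
					       entry->field_pos, entry->sign);
	if (scope & SCOPE_BOOT_CPU)
		return boot_val >= entry->min_field_value;
	/* Now check the secondary CPUs with SCOPE_LOCAL_CPU scope */
	sec_val = cpuid_feature_extract_field(__read_sysreg_by_encoding(entry->sys_reg),
					      entry->field_pos, entry->sign);
	return sec_val == boot_val;
}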
Thanks,
Amit D
>
> Suzuki