[PATCH v3 3/6] KVM: arm64: add emulation for CTR_EL0 register

Eric Auger eauger at redhat.com
Thu May 30 10:20:52 PDT 2024



On 5/30/24 14:56, Sebastian Ott wrote:
> Hej Eric,
> 
> On Wed, 29 May 2024, Eric Auger wrote:
>> On 5/14/24 09:22, Sebastian Ott wrote:
>>> +static int validate_clidr_el1(u64 clidr_el1, u64 ctr_el0)
>>> +{
>>> +    u64 idc = !CLIDR_LOC(clidr_el1) ||
>>> +          (!CLIDR_LOUIS(clidr_el1) && !CLIDR_LOUU(clidr_el1));
>> This actually computes:
>> CLIDR_EL1.LoC == 0b000 or (CLIDR_EL1.LoUIS == 0b000 &&
>> CLIDR_EL1.LoUU == 0b000)
>>
>> Referring to the Arm ARM section "Terminology for Clean, Invalidate,
>> and Clean and Invalidate instructions":
>>
>> 1) If the LoC field value is 0x0, this means that no levels of cache
>> need to be cleaned or invalidated when cleaning or invalidating to
>> the Point of Coherency.
>>
>> 2) If the LoUU field value is 0x0, this means that no levels of data
>> cache need to be cleaned or
>> invalidated when cleaning or invalidating to the Point of Unification.
>>
>> 3) If the LoUIS field value is 0x0, this means that no levels of data
>> or unified cache need to be cleaned or invalidated when cleaning or
>> invalidating to the Point of Unification for the Inner Shareable
>> shareability domain.
>>
>> So to me, if the above computation is true, we have no level of cache
>> to take care of; so although CTR_EL0.IDC = 0 would normally mean you
>> must "Data cache clean to the Point of Unification", that is not
>> needed in that case.
>>
>> But the spec does not really state that IDC=0 and
>> no_level_of_cache_to_clean_inv are incompatible, as far as I can see.
> 
> This is just existing code moved to a helper.
Agreed, this comment/question is not related to your patch.
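
For readers following along, here is the check unpacked - just a
restatement of the quoted code, using the same CLIDR_* accessors:

    /* The guest's CLIDR_EL1 claims there is nothing to clean or
     * invalidate, either to the PoC or to the PoU: */
    bool no_clean_to_poc = CLIDR_LOC(clidr_el1) == 0;
    bool no_clean_to_pou = CLIDR_LOUIS(clidr_el1) == 0 &&
                           CLIDR_LOUU(clidr_el1) == 0;
    u64 idc = no_clean_to_poc || no_clean_to_pou;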

> 
>>> +    if ((clidr_el1 & CLIDR_EL1_RES0) || (!(ctr_el0 & CTR_EL0_IDC) && idc))
>>> +        return -EINVAL;
>>
>> Isn't (clidr_el1 & CLIDR_EL1_RES0) already checked by
>>
>>        { SYS_DESC(SYS_CLIDR_EL1), access_clidr, reset_clidr, CLIDR_EL1,
>>          .set_user = set_clidr, .val = ~CLIDR_EL1_RES0 },
>>
> 
> Nope, that would only be the case when arm64_check_features()
> is used (by having set_id_reg() for the .set_user callback).
OK
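
For reference, a simplified sketch of why the two paths differ
(hypothetical shape, not the actual upstream code):

    /* rd->val is only treated as a writable mask on the generic
     * ID-register path: */
    static int set_id_reg(struct kvm_vcpu *vcpu,
                          const struct sys_reg_desc *rd, u64 val)
    {
            /* arm64_check_features() rejects writes that change bits
             * outside rd->val, so RES0 bits are covered there */
            return arm64_check_features(vcpu, rd, val);
    }

    /* CLIDR_EL1 installs its own .set_user callback (set_clidr), which
     * never goes through arm64_check_features() - hence the explicit
     * CLIDR_EL1_RES0 check in validate_clidr_el1(). */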
> 
>>> +static int validate_cache_top(struct kvm_vcpu *vcpu, u64 ctr_el0)
>> s/top/topology?
> 
> Hm, that name is already quite long.
Yes, but "top" does not mean much.
> 
>>> +{
>>> +    const struct sys_reg_desc *clidr_el1;
>>> +    unsigned int i;
>>> +    int ret;
>>> +
>>> +    clidr_el1 = get_sys_reg_desc(SYS_CLIDR_EL1);
>>> +    if (!clidr_el1)
>>> +        return -ENOENT;
>>> +
>>> +    ret = validate_clidr_el1(__vcpu_sys_reg(vcpu, clidr_el1->reg), ctr_el0);
>>> +    if (ret)
>>> +        return ret;
>>> +
>>> +    if (!vcpu->arch.ccsidr)
>>> +        return 0;
>>> +
>> Worth adding a comment about what this does, as it is not
>> straightforward ;-)
> 
> Hm, "check for validity of the cache topology" - that's kinda the
> functions name, so no added value. "Make sure the cache line size
> per level obeys the minimum cache line setting" - would this help?
> Can't think of smth else right now, sry. Suggestions?
Yes, the latter is fine with me.
> 
>>> +    for (i = 0; i < CSSELR_MAX; i++) {
>>> +        if ((FIELD_GET(CCSIDR_EL1_LineSize, get_ccsidr(vcpu, i)) + 4)
>> maybe use a local variable such as log2_cache_bytes
>>> +            < __get_min_cache_line_size(ctr_el0, i & CSSELR_EL1_InD))
>> I don't get i & CSSELR_EL1_InD - can you please explain?
> 
> It flags the cache at this level as a data or instruction cache (see also
> get_ccsidr()).
OK, I understand the principle now, thank you.
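
To make the indexing concrete, here is a sketch of the loop with the
agreed comment added and the InD bit spelled out. The body of the if is
not quoted above, so returning -EINVAL is an assumption here:

    /*
     * Make sure the cache line size per level obeys the minimum
     * cache line setting.
     *
     * CSSELR_EL1.InD is bit 0 of the selector: even values of i pick
     * data/unified caches, odd values pick instruction caches at the
     * same level, matching how get_ccsidr() indexes the CCSIDR array.
     */
    for (i = 0; i < CSSELR_MAX; i++) {
            if ((FIELD_GET(CCSIDR_EL1_LineSize, get_ccsidr(vcpu, i)) + 4) <
                __get_min_cache_line_size(ctr_el0, i & CSSELR_EL1_InD))
                    return -EINVAL;  /* assumed - not part of the quote */
    }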
> 
>>> +static int set_ctr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd,
>>> +           u64 val)
>>> +{
>> Don't you need to take the config_lock earlier, as in set_id_reg()?
>> Isn't it racy versus has_ran_once?
> 
> I was about to write that this is not the case, since that's an
> RCU-accessed pointer not guarded by the config lock, but I confused
> this with vcpu_has_run_once() ... again :-(
> 
> I'm not 100% sure we really need that, but I'll just move the lock up -
> it definitely doesn't hurt.
Yup - see the sketch after the quoted code below.

Eric
> 
>>> +    u64 ctr = vcpu->kvm->arch.ctr_el0;
>>> +    u64 writable_mask = rd->val;
>>> +    int ret;
>>> +
>>> +    if (val == ctr)
>>> +        return 0;
>>> +
>>> +    if (kvm_vm_has_ran_once(vcpu->kvm))
>>> +        return -EBUSY;
>>> +
>>> +    if ((ctr & ~writable_mask) != (val & ~writable_mask))
>>> +        return -EINVAL;
>>> +
>>> +    if (((ctr & CTR_EL0_DIC_MASK) < (val & CTR_EL0_DIC_MASK)) ||
>>> +        ((ctr & CTR_EL0_IDC_MASK) < (val & CTR_EL0_IDC_MASK)) ||
>>> +        ((ctr & CTR_EL0_DminLine_MASK) < (val & CTR_EL0_DminLine_MASK)) ||
>>> +        ((ctr & CTR_EL0_IminLine_MASK) < (val & CTR_EL0_IminLine_MASK))) {
>>> +        return -EINVAL;
>>> +    }
>>> +
>>> +    mutex_lock(&vcpu->kvm->arch.config_lock);
>>> +    ret = validate_cache_top(vcpu, val);
>>> +    if (ret) {
>>> +        mutex_unlock(&vcpu->kvm->arch.config_lock);
>>> +        return ret;
>> nit: use a goto out
> 
> Thanks,
> Sebastian
> 
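
Putting the two points together (config_lock taken before the
has-run-once check, plus the goto-out nit), set_ctr() might end up
looking roughly like this. Untested sketch: the tail of the function is
not quoted above, so the final write-back is an assumption:

    static int set_ctr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd,
                       u64 val)
    {
            u64 ctr = vcpu->kvm->arch.ctr_el0;
            u64 writable_mask = rd->val;
            int ret;

            if (val == ctr)
                    return 0;

            /* lock first so the has-run-once check cannot race */
            mutex_lock(&vcpu->kvm->arch.config_lock);

            ret = -EBUSY;
            if (kvm_vm_has_ran_once(vcpu->kvm))
                    goto out;

            ret = -EINVAL;
            if ((ctr & ~writable_mask) != (val & ~writable_mask))
                    goto out;

            /* reject values advertising more than the host's CTR_EL0:
             * larger min line sizes, or IDC/DIC set when the host's
             * are not */
            if (((ctr & CTR_EL0_DIC_MASK) < (val & CTR_EL0_DIC_MASK)) ||
                ((ctr & CTR_EL0_IDC_MASK) < (val & CTR_EL0_IDC_MASK)) ||
                ((ctr & CTR_EL0_DminLine_MASK) < (val & CTR_EL0_DminLine_MASK)) ||
                ((ctr & CTR_EL0_IminLine_MASK) < (val & CTR_EL0_IminLine_MASK)))
                    goto out;

            ret = validate_cache_top(vcpu, val);
            if (ret)
                    goto out;

            vcpu->kvm->arch.ctr_el0 = val;  /* assumed - not in the quote */
    out:
            mutex_unlock(&vcpu->kvm->arch.config_lock);
            return ret;
    }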