[PATCH v8 10/24] coresight: etm4x: allow etm4x to be built as a module

Suzuki K Poulose suzuki.poulose at arm.com
Fri Aug 14 07:18:13 EDT 2020


On 08/14/2020 12:00 PM, Tingwei Zhang wrote:
> On Fri, Aug 14, 2020 at 06:42:01PM +0800, Suzuki K Poulose wrote:
>> On Fri, Aug 14, 2020 at 05:52:24PM +0800, Tingwei Zhang wrote:
>>> On Fri, Aug 14, 2020 at 07:46:12AM +0800, Suzuki K Poulose wrote:
>>>> On 08/13/2020 08:39 PM, Mathieu Poirier wrote:
>>>>> On Fri, Aug 07, 2020 at 07:11:39PM +0800, Tingwei Zhang wrote:
>>>>>> From: Kim Phillips <kim.phillips at arm.com>


>>>>>> @@ -1109,18 +1117,23 @@ static int etm4_starting_cpu(unsigned int cpu)
>>>>>>   	if (local_read(&etmdrvdata[cpu]->mode))
>>>>>>   		etm4_enable_hw(etmdrvdata[cpu]);
>>>>>>   	spin_unlock(&etmdrvdata[cpu]->spinlock);
>>>>>> +	mutex_unlock(&per_cpu(etmdrvdata_lock, cpu));
>>>>>
>>>>> Ouch!
>>>>>
>>>>> A mutex wrapping a spinlock to protect the exact same drvdata structure.
>>>>
>>>> Actually this mutex is for "protecting" etmdrvdata[cpu], not the
>>>> drvdata struct as such. But as you said, it becomes like a double lock.
>>>>
>>>> Having a mutex is overkill for sure, and it can't be used to replace
>>>> the spinlock, especially if it is needed from atomic context. Having
>>>> looked at the code, the only place we ever access/modify etmdrvdata[cpu]
>>>> from a different CPU is at probe time, and there are chances of races.
>>>> One such race is described here:
>>>>
>>>> http://lists.infradead.org/pipermail/linux-arm-kernel/2020-July/589941.html
>>>>
>>>> I will see if I can spin a couple of patches to address these.
>>>> Once we make sure that etmdrvdata[cpu] is only ever modified by "cpu",
>>>> we don't need this mutex and can stick with what we have.
>>>>
>>>> Suzuki
>>>>
>>>
>>> With Suzuki's idea, I made some changes to remove the mutex lock and to
>>> change etmdrvdata[i] on CPU i. I didn't change the part in probe that
>>> assigns drvdata to etmdrvdata; that happens after all of drvdata is
>>> prepared and the coresight device is registered. Once a hotplug/PM
>>> callback sees the value, it can use it directly. If probe races with
>>> these callbacks, the worst case is that a callback finds etmdrvdata is
>>> NULL and returns immediately. Does this make sense?
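
For illustration only, a minimal sketch (not part of the posted patch) of how
a hotplug callback can tolerate a not-yet-populated etmdrvdata[cpu]; the
mode/enable handling is taken from the quoted hunk above, and the early NULL
return is the point being illustrated:

static int etm4_starting_cpu(unsigned int cpu)
{
	/* Probe has not populated this CPU's slot yet; nothing to do. */
	if (!etmdrvdata[cpu])
		return 0;

	spin_lock(&etmdrvdata[cpu]->spinlock);
	if (local_read(&etmdrvdata[cpu]->mode))
		etm4_enable_hw(etmdrvdata[cpu]);
	spin_unlock(&etmdrvdata[cpu]->spinlock);

	return 0;
}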
>>>
>>>
>>> static void __exit clear_etmdrvdata(void *info)
>>> {
>>>          int cpu = *(int *)info;
>>>          etmdrvdata[cpu] = NULL;
>>> }
>>>
>>> static int __exit etm4_remove(struct amba_device *adev)
>>> {
>>>          struct etmv4_drvdata *drvdata = dev_get_drvdata(&adev->dev);
>>>
>>>          etm_perf_symlink(drvdata->csdev, false);
>>>
>>>          /*
>>>           * Taking the hotplug lock here to avoid racing between
>>>           * etm4_remove and CPU hotplug callbacks.
>>>           */
>>>          cpus_read_lock();
>>>          if (cpu_online(drvdata->cpu))
>>>                  /*
>>>                   * The readers for etmdrvdata[] are the CPU hotplug and
>>>                   * PM notification callbacks. Changing etmdrvdata[i] on
>>>                   * CPU i ensures those callbacks have a consistent view
>>>                   * within one callback invocation.
>>>                   */
>>>                  smp_call_function_single(drvdata->cpu, clear_etmdrvdata,
>>>                                           &drvdata->cpu, 1);
>>
>> You should check the error here to confirm this really completed.
>> Otherwise, fall back to the manual clearing here.
>>
> Yes. I don't need to check cpu_online() since it's checked in
> smp_call_function_single(). I can just check the return value and fall
> back to manual clearing (see the sketch after the quoted function below).
> 
>>>          else
>>>                  etmdrvdata[drvdata->cpu] = NULL;
>>>
>>>          cpus_read_unlock();
>>>
>>>          coresight_unregister(drvdata->csdev);
>>>
>>>          return 0;
>>> }
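
Putting the two points together, a rough sketch (illustrative only, not the
final patch) of etm4_remove() with the return value of
smp_call_function_single() checked and a manual-clearing fallback:

static int __exit etm4_remove(struct amba_device *adev)
{
	struct etmv4_drvdata *drvdata = dev_get_drvdata(&adev->dev);

	etm_perf_symlink(drvdata->csdev, false);

	/*
	 * Take the hotplug lock to avoid racing between etm4_remove and
	 * the CPU hotplug callbacks.
	 */
	cpus_read_lock();
	/*
	 * smp_call_function_single() fails for an offline CPU, so clear
	 * the slot directly in that case.
	 */
	if (smp_call_function_single(drvdata->cpu, clear_etmdrvdata,
				     &drvdata->cpu, 1))
		etmdrvdata[drvdata->cpu] = NULL;
	cpus_read_unlock();

	coresight_unregister(drvdata->csdev);

	return 0;
}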
>>
>> Yes, this is exactly what I was looking for. Additionally, we should
>> fix the races mentioned in the link above. I believe we can solve
>> that with the following:
>>
> I already did the same thing in this patch if you ignore the mutex
> part. :)

Oh, yes. I see that now :-)

Cheers
Suzuki


