[PATCH net-next v2 2/3] net: ti: icss-iep: Enable compare events
Diogo Ivo
diogo.ivo at siemens.com
Thu Jun 6 06:28:29 PDT 2024
Hi Paolo,
On 6/6/24 11:32 AM, Paolo Abeni wrote:
> On Tue, 2024-06-04 at 14:15 +0100, Diogo Ivo wrote:
>> @@ -571,6 +573,57 @@ static int icss_iep_perout_enable(struct icss_iep *iep,
>> return ret;
>> }
>>
>> +static void icss_iep_cap_cmp_work(struct work_struct *work)
>> +{
>> + struct icss_iep *iep = container_of(work, struct icss_iep, work);
>> + const u32 *reg_offs = iep->plat_data->reg_offs;
>> + struct ptp_clock_event pevent;
>> + unsigned int val;
>> + u64 ns, ns_next;
>> +
>> + spin_lock(&iep->irq_lock);
>
> 'irq_lock' is always acquired with the irqsave variant, and here we are
> in process context. This discrepancy would at least deserve a comment;
> likely the above lock type is not correct.
If my reasoning is correct, the plain spin_lock() variant is the right one
here. The register accesses in the IRQ handler and in
icss_iep_cap_cmp_work() are orthogonal, so there is no need to guard
against the IRQ handler at this point. The same holds for the other call
sites that use the _irqsave() variant, so _irqsave() is actually overkill
there.
From my understanding this is a remnant of the SDK's version of the
driver, where all of the processing now done in icss_iep_cap_cmp_work()
happened directly in the IRQ handler, meaning we had to guard against
another thread calling icss_iep_ptp_enable() and accessing, for example,
ICSS_IEP_CMP1_REG0 concurrently. This can be seen in the comment on
line 114:
struct icss_iep {
	...
	spinlock_t irq_lock; /* CMP IRQ vs icss_iep_ptp_enable access */
	...
};
For v3 I can add a comment with a condensed version of this argument in
icss_iep_cap_cmp_work().
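
As a rough sketch (exact wording to be settled in v3), the comment could
read something like:

	/*
	 * irq_lock only serializes against concurrent
	 * icss_iep_ptp_enable() callers; the IRQ handler touches a
	 * disjoint set of registers, so the plain spin_lock() (no
	 * _irqsave) is sufficient in this process-context work function.
	 */
	spin_lock(&iep->irq_lock);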
With this said, it should also be possible to convert this spinlock to a
mutex, since all the possibilities for concurrency happen outside of
interrupt context. I can add a patch to this series doing that if you
agree with my reasoning above and find it beneficial. Comments from TI
would also be good to have here, in case I missed something or there is
some other factor I am not aware of.
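
To make the proposal concrete, the conversion would look roughly like the
following (untested sketch; the 'ptp_lock' name is just a placeholder):

	struct icss_iep {
		...
		/* CMP event work vs icss_iep_ptp_enable access */
		struct mutex ptp_lock;
		...
	};

	static void icss_iep_cap_cmp_work(struct work_struct *work)
	{
		struct icss_iep *iep = container_of(work, struct icss_iep, work);
		...
		mutex_lock(&iep->ptp_lock);
		/* CMP1 register accesses and event reporting */
		mutex_unlock(&iep->ptp_lock);
		...
	}

with the remaining spin_lock_irqsave()/spin_unlock_irqrestore() pairs in
icss_iep_ptp_enable() and friends becoming plain mutex_lock()/
mutex_unlock() calls, which is fine since all of them run in process
context.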
Best regards,
Diogo