[PATCH] ARM: dts: allwinner: minimize irq debounce filter per default

Maxime Ripard maxime at cerno.tech
Mon Feb 13 01:18:03 PST 2023


On Mon, Feb 13, 2023 at 09:49:55AM +0100, pelzi at flying-snail.de wrote:
> > On 13.02.23 09:43, Maxime Ripard wrote:
> > On Fri, Feb 10, 2023 at 10:18:14AM +0000, Andre Przywara wrote:
> > > > > Not sure if you were actually arguing this, but the change I sketched
> > > > > above (interpreting 0 as 24MHz/1) is separate though, as the current
> > > > > default is "no DT property", and not 0. There is no input-debounce
> > > > > property user in the kernel tree at the moment, so we wouldn't break
> > > > > anyone. The only thing that would change is if a downstream user was
> > > > > relying on "0" being interpreted as "skip the setup", which isn't
> > > > > really documented and could be argued to be an implementation detail.
> > > > > 
> > > > > So I'd suggest implementing 0 as "lowest possible", and documenting
> > > > > that and the 32 kHz/1 default if no property is given.
> > > > Ah, my bad.
> > > > 
> > > > There's another thing to consider: there's already a generic per-pin
> > > > input-debounce property in pinctrl.
> > > > 
> > > > Since we can't control it per pin, only per bank, we moved it to the
> > > > controller back then, but there's always been the (implicit)
> > > > expectation that it behaves the same way.
> > > > 
> > > > And the generic, per-pin, input-debounce documentation says:
> > > > 
> > > > > Takes the debounce time in usec as argument or 0 to disable debouncing
> > > > I agree that silently ignoring it is not great, but interpreting 0 as
> > > > the lowest possible breaks that behaviour, which, I believe, is a
> > > > worse outcome.
> > > Is it really? If I understand the hardware manuals correctly, we cannot
> > > really turn that feature off, so isn't the lowest possible time period (24
> > > MHz/1 at the moment) the closest we can get to "turn it off"? So
> > > implementing this would actually bring us closer to the documented
> > > behaviour? Or did I get the meaning of this time period wrong?
> > > At least that's my understanding of how it fixed Andreas' problem: 1µs
> > > is still not "off", but much better than the 31µs of the default. The new
> > > 0 would then be 0.041µs.
> > My point was that the property we share the name with (and whose
> > semantics we should share) documents 0 as disabled. We would have a
> > behavior that doesn't disable it. That's inconsistent.
> >
> > The reason doesn't really matter: we would share the same name but have
> > a completely different behavior, which is super confusing to me.
> 
> I get your point. As far as I can tell from the datasheet, it is not possible
> to actually switch off input-debounce. But since a debounce filter is really
> a low-pass filter, setting the cut-off frequency as high as possible
> appears to be equivalent to switching it off.

It's not really a matter of hardware here; it's a matter of driver
behavior vs the generic behavior of the framework. The hardware obviously
influences the former, but that's marginal in this discussion.

As this whole discussion shows, whether the frequency is high enough is
application-dependent, so we cannot just claim that it's equivalent in
all circumstances.

Making such an assumption will just bite someone else down the road,
except that this time we will have users (you, I'd assume) relying on
that behavior, so we won't be able to address it.

But I also agree that silently doing nothing with 0 is bad UX and
confusing.

I still think we can address both by erroring out on 0 / printing an
error message, so that it's obvious we can't support it, and without
changing the semantics of the property either.

And then you can set the actual debouncing time you need in the device
tree, instead of "whatever".

Maxime


