[PATCH] pinctrl: sunxi: Use minimal debouncing period as default

Maxime Ripard mripard at kernel.org
Mon Dec 2 03:03:40 PST 2024


On Sat, Nov 30, 2024 at 11:34:08AM +0100, Paul Kocialkowski wrote:
> Hi Maxime,
> 
> On Fri 29 Nov 24, 16:37, Maxime Ripard wrote:
> > On Wed, Nov 20, 2024 at 11:05:42AM +0100, Paul Kocialkowski wrote:
> > > On Wed 20 Nov 24, 09:01, Maxime Ripard wrote:
> > > > > > If anything, the status quo doesn't impose anything; it just rolls with
> > > > > > the hardware default. Yours would impose one, though.
> > > > > 
> > > > > The result is that it imposes a strong limitation and breaks many use cases
> > > > > by default. I don't think we have to accept whatever register default was
> > > > > chosen by hardware engineers as the most sensible default and pretend that
> > > > > this is not a policy decision.
> > > > 
> > > > You're making it sound much worse than it is. It doesn't "break many
> > > > use cases"; it broke one, by default, with a supported way to unbreak
> > > > it, in 12 years.
> > > 
> > > I think this is exaggerated. Like I mentioned previously, there are *many*
> > > situations that are not covered by the default.
> > 
> > Note that this statement would be true for any default: the current one,
> > the one you suggest, or any other, really. The fact that we have a way to
> > override it is an acknowledgement that it's not a one-size-fits-all
> > situation.
> 
> Again, the debate is about which option offers the best trade-off between
> advantages and disadvantages. I'm not saying the default I suggest has no
> issues, but rather that the benefits very clearly outweigh them. Hence the
> many situations that would be supported with the shortest debouncing period
> (both short and frequent interrupts) versus the few broken-hardware use cases
> (related to interrupt storms) supported with the largest period.
> 
> > > The fact that I'm the first person to bring it up in 12 years doesn't
> > > change that.
> > 
> > Sure. It does, however, hint that it's a sane enough default.
> 
> Or maybe people didn't realize this mechanism existed, failed to understand
> why their device didn't work with Allwinner platforms, and just moved on to
> something else. Indeed, it's all very much a matter of subjective
> interpretation.
> 
> > > So far, the downside you brought up boils down to this: badly-designed
> > > hardware may have relied on this mechanism to avoid interrupt storms
> > > that could prevent the system from booting.
> > 
> > It's not about good or bad design. Buttons bounce, HPD signals bounce,
> > it's just the world we live in.
> 
> Well, I'm an electrical engineer, and the first thing we were told about
> buttons and connectors is to include hardware debouncing. The second thing
> is that it can be done in software (which, again, is done in a number of
> drivers) by just disabling the interrupt for a while when it fires too often.
> 
> So I'm quite confident that taking neither of these into account amounts to
> a broken hardware design. No electrical engineer is told that they shouldn't
> care about this because the SoC will filter interrupts for them.

The SoC provides the hardware debouncing. There's no reason not to use it,
nor any need to add something redundant on top. Some might, but it's also
perfectly valid to just rely on the SoC there.
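
For the record, the software-only variant Paul describes above usually looks
something like the sketch below in a consumer driver: mask the interrupt when
it fires, and re-enable it once the line has had time to settle. The names
(foo_dev, foo_irq_handler, the 20 ms delay) are made up and the snippet is
untested; it is only meant to illustrate the idea.

/*
 * Hypothetical consumer driver, untested sketch: pure software debouncing
 * by masking the interrupt on the first edge and re-enabling it after a
 * settling delay. foo->debounce_work is assumed to have been initialized
 * with INIT_DELAYED_WORK(&foo->debounce_work, foo_debounce_work) at probe
 * time.
 */
#include <linux/interrupt.h>
#include <linux/jiffies.h>
#include <linux/workqueue.h>

struct foo_dev {
	int irq;
	struct delayed_work debounce_work;
};

static void foo_debounce_work(struct work_struct *work)
{
	struct foo_dev *foo = container_of(work, struct foo_dev,
					   debounce_work.work);

	/* The line should have settled by now, re-arm the interrupt. */
	enable_irq(foo->irq);
}

static irqreturn_t foo_irq_handler(int irq, void *data)
{
	struct foo_dev *foo = data;

	/* Ignore any further bounces for the next 20 ms. */
	disable_irq_nosync(irq);
	schedule_delayed_work(&foo->debounce_work, msecs_to_jiffies(20));

	return IRQ_HANDLED;
}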

> Of course it's fine to use this mechanism when it exists, but it's not a
> reasonable expectation to just assume it will always be there. This is why
> I think it's not a legitimate reason to make it a default.

Nobody ever designed a board by adhering to a dogma without considering the
SoC features. The SoC features, the components chosen and their price, etc.,
all play a role.

> > But let me rephrase, in case my main objection wasn't clear enough: you
> > want to introduce an ABI-breaking change, with the possibility of breaking
> > devices that have worked fine so far. That's not ok.
> 
> I believe it is highly questionable that this constitutes ABI breakage.
> To me there was no defined behavior in the first place, since the debouncing
> configuration is inherited either from the reset value or the boot stage.
> There is also no formal indication of what the default is, anywhere.

Depending on the interpretation, it either means that you change the default
of a device-tree property, or that you add one. That constitutes an ABI
breakage in its own right. And then it can introduce regressions for boards,
which is another form of breakage.
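
To make the per-board override we keep referring to a bit more concrete: the
pinctrl node carries one debouncing period per interrupt bank, and the driver
only touches the debounce setup when such a period is provided. The sketch
below is a simplified illustration of that flow, not the actual pinctrl-sunxi
code; example_setup_debounce() and the pr_info() standing in for the register
write are made up.

/*
 * Simplified illustration, not the actual pinctrl-sunxi code: the
 * "input-debounce" property lists one debouncing period per IRQ bank
 * (in microseconds), and the driver leaves the current default alone
 * whenever no period is given.
 */
#include <linux/of.h>
#include <linux/printk.h>

static void example_setup_debounce(struct device_node *node,
				   unsigned int nr_banks)
{
	unsigned int bank;

	for (bank = 0; bank < nr_banks; bank++) {
		u32 period_us;

		if (of_property_read_u32_index(node, "input-debounce",
					       bank, &period_us))
			return;	/* No override: keep whatever is there. */

		if (!period_us)
			continue;	/* 0 means no setup for this bank. */

		/* A real driver would program the bank's debounce register. */
		pr_info("bank %u: debounce period %u us\n", bank, period_us);
	}
}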

> Changing the default configuration of hardware is commonplace. One might,
> for example, introduce a reset in a driver to get a clean state, because it
> turned out that some boot software would mess it up and that went unnoticed
> for a while. Would you also call that ABI breakage?

No, because it doesn't require changing the default state Linux expects when
it boots, or changing anything in the device tree. It's a self-contained
change, and thus there's no interface to break.

> I think there are a number of situations where it's much more sensible to
> change a default state to avoid very visible and practical issues. And it
> does happen.
> 
> Also, my understanding of the "ABI breakage" rule in the kernel is that no
> userspace program should stop working when it was implemented to (correctly)
> follow some kernel-related ABI. It doesn't mean that we cannot change any
> default state.

If applications rely on that default one way or another, that's absolutely
what it means. The only criterion is whether this will create a regression
for any application.

Maxime