shared code: was: Re: [PATCH v3] hardlockup: detect hard lockups using secondary (buddy) CPUs
Petr Mladek
pmladek at suse.com
Tue May 2 08:26:18 PDT 2023
On Mon 2023-05-01 08:24:46, Douglas Anderson wrote:
> From: Colin Cross <ccross at android.com>
>
> Implement a hardlockup detector that doesn't need any extra
> arch-specific support code to detect lockups. Instead of using
> something arch-specific we will use the buddy system, where each CPU
> watches out for another one. Specifically, each CPU will use its
> softlockup hrtimer to check that the next CPU is processing hrtimer
> interrupts by verifying that a counter is increasing.
>
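To make the numbers concrete (assuming the kernel's default
watchdog_thresh of 10 seconds): the softlockup hrtimer fires every
watchdog_thresh * 2 / 5 = 4 seconds, and the buddy check below runs
every 3rd sample, so a CPU that stops taking hrtimer interrupts gets
flagged by its watcher after roughly 12 seconds, i.e. 20% over
watchdog_thresh. The watchers form a ring: each CPU checks the next
online CPU in the cpumask, and the last one wraps around to the first.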
> --- a/include/linux/nmi.h
> +++ b/include/linux/nmi.h
> @@ -134,6 +144,7 @@ void lockup_detector_reconfigure(void);
> static inline void touch_nmi_watchdog(void)
> {
> arch_touch_nmi_watchdog();
> + buddy_cpu_touch_watchdog();
touch_buddy_watchdog(), perhaps, to follow the naming scheme?
> touch_softlockup_watchdog();
> }
>
> --- a/kernel/watchdog.c
> +++ b/kernel/watchdog.c
> @@ -106,6 +108,13 @@ void __weak watchdog_nmi_disable(unsigned int cpu)
> hardlockup_detector_perf_disable();
> }
>
> +#else
> +
> +int __weak watchdog_nmi_enable(unsigned int cpu) { return 0; }
> +void __weak watchdog_nmi_disable(unsigned int cpu) { return; }
Honestly, the mix of softlockup and hardlockup code was hard to
follow even before this patch. And it is going to get worse.
Anyway, the buddy watchdog does not use NMIs at all. It should not
get enabled by a function called *_nmi_enable().
Also, some comments are no longer valid, for example:
static void watchdog_enable(unsigned int cpu)
{
[...]
/* Enable the perf event */
if (watchdog_enabled & NMI_WATCHDOG_ENABLED)
watchdog_nmi_enable(cpu);
I do not know. Maybe fixing the mess is beyond any hope.
But we should not make it worse.
I suggest renaming/shuffling at least the functions touched
by this patchset so that the names say what they do.
Sigh, it is hard to find reasonable names. The code
already uses:
+ watchdog_*
+ watchdog_nmi_*
+ softlockup_*
+ lockup_detector_*
+ hardlockup_detector_perf_*
and sysctl:
.procname = "watchdog",
.procname = "watchdog_thresh",
.procname = "nmi_watchdog",
.procname = "watchdog_cpumask",
.procname = "soft_watchdog",
.procname = "softlockup_panic",
.procname = "softlockup_all_cpu_backtrace",
.procname = "hardlockup_panic",
.procname = "hardlockup_all_cpu_backtrace",
So, I suggest using these names:

+ watchdog_*
  + for the common infrastructure
  + keep it in watchdog.c

+ hardlockup_detector_* or
  hardlockup_watchdog_* or
  watchdog_hld_*
  + for the common hardlockup stuff
  + it can stay in watchdog.c to keep the shuffling bearable

+ hardlockup_detector_nmi_* or
  hardlockup_watchdog_nmi_* or
  watchdog_hld_nmi_* or
  watchdog_nmi_*
  + for the arch-specific hardlockup stuff that uses
    NMI interrupts
  + it might either stay in watchdog_hld.c
    or be moved to watchdog_nmi.c or watchdog_hld_nmi.c

+ hardlockup_detector_buddy_* or
  hardlockup_watchdog_buddy_* or
  watchdog_hld_buddy_* or
  watchdog_buddy_*
  + for the hardlockup stuff that uses buddy monitoring
  + it might either be added to watchdog_hld.c
    or be moved to watchdog_buddy.c or watchdog_hld_buddy.c

Opinion:

The buddy watchdog might actually be usable also for the
softlockup detector. So, the watchdog_buddy_* API
and watchdog_buddy.c might make sense.
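For concreteness, here is what the buddy API could look like under
the watchdog_buddy_* scheme. These declarations are only a sketch to
illustrate the naming, not code from the patch:

/* kernel/watchdog_buddy.c -- hypothetical sketch of the naming only */
void watchdog_buddy_touch(void);            /* was buddy_cpu_touch_watchdog() */
void watchdog_buddy_check_hardlockup(void); /* was watchdog_check_hardlockup() */
int  watchdog_buddy_enable(unsigned int cpu);
void watchdog_buddy_disable(unsigned int cpu);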
> +
> +#endif /* CONFIG_HARDLOCKUP_DETECTOR */
> +
> /* Return 0, if a NMI watchdog is available. Error code otherwise */
> int __weak __init watchdog_nmi_probe(void)
> {
> @@ -364,6 +373,9 @@ static enum hrtimer_restart watchdog_timer_fn(struct hrtimer *hrtimer)
> /* kick the hardlockup detector */
> watchdog_interrupt_count();
>
> + /* test for hardlockups */
> + watchdog_check_hardlockup();
Rename to watchdog_buddy_check_hardlockup()?
> +
> /* kick the softlockup detector */
> if (completion_done(this_cpu_ptr(&softlockup_completion))) {
> reinit_completion(this_cpu_ptr(&softlockup_completion));
> --- /dev/null
> +++ b/kernel/watchdog_buddy_cpu.c
> @@ -0,0 +1,141 @@
> +// SPDX-License-Identifier: GPL-2.0
> +
> +#include <linux/cpu.h>
> +#include <linux/cpumask.h>
> +#include <linux/kernel.h>
> +#include <linux/nmi.h>
> +#include <linux/percpu-defs.h>
> +
> +static DEFINE_PER_CPU(bool, watchdog_touch);
> +static DEFINE_PER_CPU(bool, hard_watchdog_warn);
> +static cpumask_t __read_mostly watchdog_cpus;
> +
> +static unsigned long hardlockup_allcpu_dumped;
> +
> +int __init watchdog_nmi_probe(void)
> +{
> + return 0;
> +}
This is pretty strange. It shows that this was added in a hacky way.
> +
> +notrace void buddy_cpu_touch_watchdog(void)
> +{
> + /*
> + * Using __raw here because some code paths have
> + * preemption enabled. If preemption is enabled
> + * then interrupts should be enabled too, in which
> + * case we shouldn't have to worry about the watchdog
> + * going off.
> + */
> + raw_cpu_write(watchdog_touch, true);
> +}
> +EXPORT_SYMBOL_GPL(buddy_cpu_touch_watchdog);
This is cut&pasted from arch_touch_nmi_watchdog().
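One way to avoid the duplication might be a single touch helper
shared by both detectors. A minimal sketch, assuming a common per-CPU
flag (the names here are hypothetical):

static DEFINE_PER_CPU(bool, watchdog_hardlockup_touched);

/* Shared by the perf/NMI detector and the buddy detector. */
notrace void hardlockup_detector_touch(void)
{
	/* __raw is fine even with preemption enabled, see the comment above. */
	raw_cpu_write(watchdog_hardlockup_touched, true);
}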
> +
> +static unsigned int watchdog_next_cpu(unsigned int cpu)
> +{
> + cpumask_t cpus = watchdog_cpus;
> + unsigned int next_cpu;
> +
> + next_cpu = cpumask_next(cpu, &cpus);
> + if (next_cpu >= nr_cpu_ids)
> + next_cpu = cpumask_first(&cpus);
> +
> + if (next_cpu == cpu)
> + return nr_cpu_ids;
> +
> + return next_cpu;
> +}
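For example, with watchdog_cpus = {0, 2, 3}: watchdog_next_cpu(0)
returns 2, watchdog_next_cpu(2) returns 3, and watchdog_next_cpu(3)
wraps around to 0. If the mask contains only the current CPU, the
function returns nr_cpu_ids and the caller skips the check, because
there is no buddy to watch.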
> +
[...]
> +static int is_hardlockup_buddy_cpu(unsigned int cpu)
> +{
> + unsigned long hrint = per_cpu(hrtimer_interrupts, cpu);
> +
> + if (per_cpu(hrtimer_interrupts_saved, cpu) == hrint)
> + return 1;
> +
> + per_cpu(hrtimer_interrupts_saved, cpu) = hrint;
> + return 0;
This is cut&pasted from is_hardlockup(), with the __this_cpu_* API
replaced by the per_cpu_* API.
> +}
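A parameterized version could serve both detectors, since per_cpu()
on the local CPU is functionally equivalent to __this_cpu_read() /
__this_cpu_write(). A sketch of the merged helper (hypothetical,
reusing the existing per-CPU counters):

static bool is_hardlockup(unsigned int cpu)
{
	unsigned long hrint = per_cpu(hrtimer_interrupts, cpu);

	if (per_cpu(hrtimer_interrupts_saved, cpu) == hrint)
		return true;

	per_cpu(hrtimer_interrupts_saved, cpu) = hrint;
	return false;
}

The NMI detector would pass smp_processor_id(), the buddy detector
the CPU it is watching.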
> +
> +void watchdog_check_hardlockup(void)
> +{
> + unsigned int next_cpu;
> +
> + /*
> + * Test for hardlockups every 3 samples. The sample period is
> + * watchdog_thresh * 2 / 5, so 3 samples gets us back to slightly over
> + * watchdog_thresh (over by 20%).
> + */
> + if (__this_cpu_read(hrtimer_interrupts) % 3 != 0)
> + return;
> +
> + /* check for a hardlockup on the next CPU */
> + next_cpu = watchdog_next_cpu(smp_processor_id());
> + if (next_cpu >= nr_cpu_ids)
> + return;
> +
> + /* Match with smp_wmb() in watchdog_nmi_enable() / watchdog_nmi_disable() */
> + smp_rmb();
> +
> + if (per_cpu(watchdog_touch, next_cpu) == true) {
> + per_cpu(watchdog_touch, next_cpu) = false;
> + return;
> + }
> +
> + if (is_hardlockup_buddy_cpu(next_cpu)) {
> + /* only warn once */
> + if (per_cpu(hard_watchdog_warn, next_cpu) == true)
> + return;
> +
> + /*
> + * Perform all-CPU dump only once to avoid multiple hardlockups
> + * generating interleaving traces
> + */
> + if (sysctl_hardlockup_all_cpu_backtrace &&
> + !test_and_set_bit(0, &hardlockup_allcpu_dumped))
> + trigger_allbutself_cpu_backtrace();
> +
> + if (hardlockup_panic)
> + panic("Watchdog detected hard LOCKUP on cpu %u", next_cpu);
> + else
> + WARN(1, "Watchdog detected hard LOCKUP on cpu %u", next_cpu);
> +
> + per_cpu(hard_watchdog_warn, next_cpu) = true;
> + } else {
> + per_cpu(hard_watchdog_warn, next_cpu) = false;
Also, this cut&pastes a lot of code from watchdog_overflow_callback().
I wonder if we could somehow share the code between the two hardlockup
detectors. It would be a win-win. It might help a lot with maintenance.
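For instance, the warn/backtrace/panic sequence could live in one
shared helper, called by watchdog_overflow_callback() with
smp_processor_id() and by the buddy checker with the watched CPU.
A sketch (the helper name is made up):

static void watchdog_hardlockup_report(unsigned int cpu)
{
	/* only warn once */
	if (per_cpu(hard_watchdog_warn, cpu))
		return;

	/*
	 * Perform the all-CPU dump only once to avoid multiple
	 * hardlockups generating interleaving traces.
	 */
	if (sysctl_hardlockup_all_cpu_backtrace &&
	    !test_and_set_bit(0, &hardlockup_allcpu_dumped))
		trigger_allbutself_cpu_backtrace();

	if (hardlockup_panic)
		panic("Watchdog detected hard LOCKUP on cpu %u", cpu);

	WARN(1, "Watchdog detected hard LOCKUP on cpu %u", cpu);
	per_cpu(hard_watchdog_warn, cpu) = true;
}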
Best Regards,
Petr