linux-next: stall warnings and deadlock on Arm64 (was: [PATCH] kfence: Avoid stalling...)

Marco Elver <elver@google.com>
Fri Nov 20 09:03:32 EST 2020


On Fri, Nov 20, 2020 at 10:30AM +0000, Mark Rutland wrote:
> On Thu, Nov 19, 2020 at 10:53:53PM +0000, Will Deacon wrote:
> > On Thu, Nov 19, 2020 at 01:35:12PM -0800, Paul E. McKenney wrote:
> > > On Thu, Nov 19, 2020 at 08:38:19PM +0100, Marco Elver wrote:
> > > > On Thu, Nov 19, 2020 at 10:48AM -0800, Paul E. McKenney wrote:
> > > > > On Thu, Nov 19, 2020 at 06:02:59PM +0100, Marco Elver wrote:
> > > 
> > > [ . . . ]
> > > 
> > > > > > I can try bisection again, or reverting some commits that might be
> > > > > > suspicious? But we'd need some selection of suspicious commits.
> > > > > 
> > > > > The report claims that one of the rcu_node ->lock fields is held
> > > > > with interrupts enabled, which would indeed be bad.  Except that all
> > > > > of the stack traces that it shows have these locks held within the
> > > > > scheduling-clock interrupt handler.  Now with the "rcu: Don't invoke
> > > > > try_invoke_on_locked_down_task() with irqs disabled" but without the
> > > > > "sched/core: Allow try_invoke_on_locked_down_task() with irqs disabled"
> > > > > commit, I understand why.  With both, I don't see how this happens.
> > > > 
> > > > I'm at a loss, but happy to keep bisecting and trying patches. I'm also
> > > > considering:
> > > > 
> > > > 	Is it the compiler? Probably not; I tried two versions of GCC.
> > > > 
> > > > 	Can we trust lockdep to precisely know IRQ state? I know there's
> > > > 	been some recent work around this, but hopefully we're not
> > > > 	affected here?
> > > > 
> > > > 	Is QEMU buggy?
> > > > 
> > > > > At this point, I am reduced to adding lockdep_assert_irqs_disabled()
> > > > > calls at various points in that code, as shown in the patch below.
> > > > > 
> > > > > At this point, I would guess that your first priority would be the
> > > > > initial bug rather than this following issue, but you never know, this
> > > > > might well help diagnose the initial bug.
> > > > 
> > > > I don't mind either way. I'm worried deadlocking the whole system might
> > > > be worse.
> > > 
> > > Here is another set of lockdep_assert_irqs_disabled() calls on the
> > > off-chance that they actually find something.
> > 
> > FWIW, arm64 is known broken wrt lockdep and irq tracing atm. Mark has been
> > looking at that and I think he is close to having something workable.
> > 
> > Mark -- is there anything Marco and Paul can try out?
> 
> I initially traced some issues back to commit:
> 
>   044d0d6de9f50192 ("lockdep: Only trace IRQ edges")
> 
> ... and that change of semantics could cause us to miss edges in some
> cases, but IIUC mostly where we haven't done the right thing in
> exception entry/return.
> 
> I don't think my patches address this case yet, but my WIP (currently
> just fixing user<->kernel transitions) is at:
> 
> https://git.kernel.org/pub/scm/linux/kernel/git/mark/linux.git/log/?h=arm64/irq-fixes
> 
> I'm looking into the kernel<->kernel transitions now, and I know that we
> mess up RCU management for a small window around arch_cpu_idle, but it's
> not immediately clear to me if either of those cases could cause this
> report.

Thank you -- I tried your irq-fixes branch, but that didn't seem to fix
the problem (I still get warnings and then a panic). :-/

| [  118.375217] Testing all events: OK
| [  174.878839] Running tests again, along with the function tracer
| [  174.894781] Running tests on all trace events:
| [  174.906734] Testing all events: 
| [  176.204533] hrtimer: interrupt took 9035008 ns
| [  286.788330] BUG: workqueue lockup - pool cpus=0 node=0 flags=0x0 nice=0 stuck for 10s!
| [  286.883216] Showing busy workqueues and worker pools:
| [  286.899647] workqueue events: flags=0x0
| [  286.920606]   pwq 0: cpus=0 node=0 flags=0x0 nice=0 active=1/256 refcnt=2
| [  286.933010]     pending: vmstat_shepherd
| [  644.323445] INFO: task kworker/u2:1:107 blocked for more than 12 seconds.
| [  649.448126]       Not tainted 5.10.0-rc4-next-20201119-00004-g77838ee21ff6-dirty #17
| [  656.619598] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
| [  660.623500] task:kworker/u2:1    state:R stack:    0 pid:  107 ppid:     2 flags:0x00000428
| [  671.587980] Call trace:
| [  674.885884]  __switch_to+0x148/0x1f0
| [  675.267490]  __schedule+0x2dc/0x9a8
| [  677.748050]  schedule+0x4c/0x100
| [  679.223880]  worker_thread+0xe8/0x510
| [  680.663844]  kthread+0x13c/0x188
| [  681.663992]  ret_from_fork+0x10/0x34
| [  684.493389] 
| [  684.493389] Showing all locks held in the system:
| [  688.554449] 4 locks held by swapper/0/1:
| [  691.747079] 1 lock held by khungtaskd/23:
| [  692.525727]  #0: ffffa1ebd7ff1420 (rcu_read_lock){....}-{1:2}, at: debug_show_all_locks+0x34/0x198
| [  704.403177] 
| [  704.630928] =============================================
| [  704.630928] 
| [  706.168072] Kernel panic - not syncing: hung_task: blocked tasks
| [  706.172894] CPU: 0 PID: 23 Comm: khungtaskd Not tainted 5.10.0-rc4-next-20201119-00004-g77838ee21ff6-dirty #17
| [  706.178507] Hardware name: linux,dummy-virt (DT)
| [  706.182658] Call trace:
| [  706.186231]  dump_backtrace+0x0/0x240
| [  706.190124]  show_stack+0x34/0x88
| [  706.193917]  dump_stack+0x140/0x1bc
| [  706.197728]  panic+0x1e4/0x494
| [  706.201440]  watchdog+0x668/0xbe8
| [  706.205238]  kthread+0x13c/0x188
| [  706.208991]  ret_from_fork+0x10/0x34
| [  706.214532] Kernel Offset: 0x21ebc5a00000 from 0xffff800010000000
| [  706.219014] PHYS_OFFSET: 0xffffad8a80000000
| [  706.223148] CPU features: 0x0240022,61806082
| [  706.227149] Memory Limit: none
| [  706.233359] ---[ end Kernel panic - not syncing: hung_task: blocked tasks ]---

Thanks,
-- Marco



More information about the linux-arm-kernel mailing list