[PATCH 6.1.y] rxrpc: Fix call removal to use RCU safe deletion
Sasha Levin
sashal at kernel.org
Mon Apr 13 18:28:47 PDT 2026
From: David Howells <dhowells at redhat.com>
[ Upstream commit 146d4ab94cf129ee06cd467cb5c71368a6b5bad6 ]
Fix rxrpc call removal from the rxnet->calls list to use list_del_rcu()
rather than list_del_init() so that a concurrent reader of
/proc/net/rxrpc/calls cannot get stuck in an infinite loop.
This, however, means that list_empty() no longer works on an entry that's
been deleted from the list, making it harder to detect prior deletion. Fix
this by:
Firstly, make rxrpc_destroy_all_calls() only dump the first ten calls that
are unexpectedly still on the list. Limiting the number of steps means
there's no need to call cond_resched() or to remove calls from the list
here, thereby eliminating the need for rxrpc_put_call() to check for that.
Secondly, fix rxrpc_put_call() to unconditionally delete the call from
the list, as it is now the only place the deletion occurs.
Fixes: 2baec2c3f854 ("rxrpc: Support network namespacing")
Closes: https://sashiko.dev/#/patchset/20260319150150.4189381-1-dhowells%40redhat.com
Signed-off-by: David Howells <dhowells at redhat.com>
cc: Marc Dionne <marc.dionne at auristor.com>
cc: Jeffrey Altman <jaltman at auristor.com>
cc: Linus Torvalds <torvalds at linux-foundation.org>
cc: Simon Horman <horms at kernel.org>
cc: linux-afs at lists.infradead.org
cc: stable at kernel.org
Link: https://patch.msgid.link/20260408121252.2249051-5-dhowells@redhat.com
Signed-off-by: Jakub Kicinski <kuba at kernel.org>
[ adapted to older API ]
Signed-off-by: Sasha Levin <sashal at kernel.org>
---
net/rxrpc/call_object.c | 22 ++++++++--------------
1 file changed, 8 insertions(+), 14 deletions(-)
diff --git a/net/rxrpc/call_object.c b/net/rxrpc/call_object.c
index 6401cdf7a6246..33165080f4685 100644
--- a/net/rxrpc/call_object.c
+++ b/net/rxrpc/call_object.c
@@ -634,11 +634,9 @@ void rxrpc_put_call(struct rxrpc_call *call, enum rxrpc_call_trace op)
_debug("call %d dead", call->debug_id);
ASSERTCMP(call->state, ==, RXRPC_CALL_COMPLETE);
- if (!list_empty(&call->link)) {
- spin_lock_bh(&rxnet->call_lock);
- list_del_init(&call->link);
- spin_unlock_bh(&rxnet->call_lock);
- }
+ spin_lock_bh(&rxnet->call_lock);
+ list_del_rcu(&call->link);
+ spin_unlock_bh(&rxnet->call_lock);
rxrpc_cleanup_call(call);
}
@@ -709,24 +707,20 @@ void rxrpc_destroy_all_calls(struct rxrpc_net *rxnet)
_enter("");
if (!list_empty(&rxnet->calls)) {
- spin_lock_bh(&rxnet->call_lock);
+ int shown = 0;
- while (!list_empty(&rxnet->calls)) {
- call = list_entry(rxnet->calls.next,
- struct rxrpc_call, link);
- _debug("Zapping call %p", call);
+ spin_lock_bh(&rxnet->call_lock);
+ list_for_each_entry(call, &rxnet->calls, link) {
rxrpc_see_call(call);
- list_del_init(&call->link);
pr_err("Call %p still in use (%d,%s,%lx,%lx)!\n",
call, refcount_read(&call->ref),
rxrpc_call_states[call->state],
call->flags, call->events);
- spin_unlock_bh(&rxnet->call_lock);
- cond_resched();
- spin_lock_bh(&rxnet->call_lock);
+ if (++shown >= 10)
+ break;
}
spin_unlock_bh(&rxnet->call_lock);
--
2.53.0