bad unlock balance WARNING at nvme/045
Hannes Reinecke
hare at suse.de
Tue Oct 25 23:42:01 PDT 2022
On 10/18/22 12:57, Sagi Grimberg wrote:
>
>> Hello Hannes,
>>
>> I observed "WARNING: bad unlock balance detected!" at nvme/045 [1]. As
>> the Call
>> Trace shows, nvme_auth_reset() has unbalanced mutex lock/unlock.
>>
>>      mutex_lock(&ctrl->dhchap_auth_mutex);
>>      list_for_each_entry(chap, &ctrl->dhchap_auth_list, entry) {
>>              mutex_unlock(&ctrl->dhchap_auth_mutex);
>>              flush_work(&chap->auth_work);
>>              __nvme_auth_reset(chap);
>>      }
>>      mutex_unlock(&ctrl->dhchap_auth_mutex);
>>
>> I tried to remove the mutex_unlock in the list iteration with a patch
>> [2], but it resulted in another "WARNING: possible recursive locking
>> detected" [3]. I'm not sure, but the cause of this WARN could be
>> __nvme_auth_work and nvme_dhchap_auth_work sharing the same nvme_wq.
>>
>> Could you take a look for a fix?
>
> I'm looking at the code, and I think the way the concurrent
> negotiations run and how dhchap_auth_mutex is handled is very fragile.
> Also, why should the per-queue auth_work hold the controller-wide
> dhchap_auth_mutex? The only reason I see is that nvme_auth_negotiate
> is checking whether the chap context is already queued. Why should we
> allow that?
>
Well, that's partially due to the internal design of linux-nvme.
The controller structure itself doesn't have 'queues' per se; there is
just a general 'ctrl' structure. So while I would have loved to have a
per-queue structure to hook the chap authentication into, all I have is
the controller structure.
Hence we have a controller-wide list holding the 'chap' structures for
the individual queues, and a controller-wide mutex to gate modifications
of that list.
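Roughly, the relevant pieces look like this (a sketch from memory of
drivers/nvme/host/auth.c, not the literal definitions; treat the field
details as approximations):

    /* Per-queue DH-HMAC-CHAP context; all of them live on one
     * controller-wide list because there is no per-queue structure
     * to hang them off.
     */
    struct nvme_dhchap_queue_context {
            struct list_head entry;         /* linked into ctrl->dhchap_auth_list */
            struct work_struct auth_work;   /* runs the negotiation for one queue */
            struct nvme_ctrl *ctrl;
            int qid;                        /* queue this context authenticates */
            /* ... transforms, challenge/response buffers, status, ... */
    };

    /* In struct nvme_ctrl: */
    struct list_head dhchap_auth_list;      /* one context per queue */
    struct mutex dhchap_auth_mutex;         /* gates modifications of that list */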
> I'd suggest splicing dhchap_auth_list to a local list and then just
> flushing nvmet_wq in teardown flows. Same for renegotiation/reset flows.
> And we should prevent the double-queuing of chap negotiations to
> begin with, instead of handling it (I still don't understand why this
> is permitted, but perhaps just return EBUSY in this case?)
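[For illustration only: a rough sketch of what the suggested splice
approach could look like. The helper name nvme_auth_reset_spliced and
the re-splice at the end are assumptions made for the sketch, not
existing driver code.]

    static void nvme_auth_reset_spliced(struct nvme_ctrl *ctrl)
    {
            struct nvme_dhchap_queue_context *chap;
            LIST_HEAD(splice);

            /* Detach all chap contexts while holding the lock... */
            mutex_lock(&ctrl->dhchap_auth_mutex);
            list_splice_init(&ctrl->dhchap_auth_list, &splice);
            mutex_unlock(&ctrl->dhchap_auth_mutex);

            /* ...then flush and reset them without the lock held. */
            list_for_each_entry(chap, &splice, entry) {
                    flush_work(&chap->auth_work);
                    __nvme_auth_reset(chap);
            }

            /* Put the contexts back so they can be re-used later. */
            mutex_lock(&ctrl->dhchap_auth_mutex);
            list_splice(&splice, &ctrl->dhchap_auth_list);
            mutex_unlock(&ctrl->dhchap_auth_mutex);
    }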
We don't double queue; we're re-using the existing entries.
Can you check if this fix works?
diff --git a/drivers/nvme/host/auth.c b/drivers/nvme/host/auth.c
index c8a6db7c4498..4e824aab30eb 100644
--- a/drivers/nvme/host/auth.c
+++ b/drivers/nvme/host/auth.c
@@ -926,7 +926,6 @@ void nvme_auth_reset(struct nvme_ctrl *ctrl)
 
 	mutex_lock(&ctrl->dhchap_auth_mutex);
 	list_for_each_entry(chap, &ctrl->dhchap_auth_list, entry) {
-		mutex_unlock(&ctrl->dhchap_auth_mutex);
 		flush_work(&chap->auth_work);
 		__nvme_auth_reset(chap);
 	}
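With that hunk applied, nvme_auth_reset() would hold dhchap_auth_mutex
across the whole list walk, roughly like this (reconstructed from the
snippet quoted above, not copied verbatim from the tree):

    void nvme_auth_reset(struct nvme_ctrl *ctrl)
    {
            struct nvme_dhchap_queue_context *chap;

            mutex_lock(&ctrl->dhchap_auth_mutex);
            list_for_each_entry(chap, &ctrl->dhchap_auth_list, entry) {
                    /* lock is now held across the flush and the reset */
                    flush_work(&chap->auth_work);
                    __nvme_auth_reset(chap);
            }
            mutex_unlock(&ctrl->dhchap_auth_mutex);
    }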
Cheers,
Hannes
--
Dr. Hannes Reinecke Kernel Storage Architect
hare at suse.de +49 911 74053 688
SUSE Software Solutions Germany GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), GF: Felix Imendörffer