[PATCH v2] tests/nvme: Add admin-passthru+reset race test

Jonathan Derrick jonathan.derrick at linux.dev
Tue Nov 22 12:30:43 PST 2022



On 11/22/2022 1:26 AM, Klaus Jensen wrote:
> On Nov 21 16:04, Keith Busch wrote:
>> [cc'ing Klaus]
>>
>> On Mon, Nov 21, 2022 at 03:49:45PM -0700, Jonathan Derrick wrote:
>>> On 11/21/2022 3:34 PM, Jonathan Derrick wrote:
>>>> On 11/21/2022 1:55 PM, Keith Busch wrote:
>>>>> On Thu, Nov 17, 2022 at 02:22:10PM -0700, Jonathan Derrick wrote:
>>>>>> I seem to have isolated the error mechanism for older kernels, but 6.2.0-rc2
>>>>>> reliably segfaults my QEMU instance (something else to look into), and I don't
>>>>>> have any 'real' hardware to test this on at the moment. It looks like several
>>>>>> passthru commands can be enqueued before, during, and after reset/reconnect.
>>>>>
>>>>> I'm not seeing any problem with the latest nvme-qemu after several dozen
>>>>> iterations of this test case. In that environment, the formats and
>>>>> resets complete practically synchronously with the call, so everything
>>>>> proceeds quickly. Is there anything special I need to change?
>>>>>
>>>> I can still repro this with the nvme-fixes tag, so I'll have to dig into it myself.
>>> Here's a backtrace:
>>>
>>> Thread 1 "qemu-system-x86" received signal SIGSEGV, Segmentation fault.
>>> [Switching to Thread 0x7ffff7554400 (LWP 531154)]
>>> 0x000055555597a9d5 in nvme_ctrl (req=0x7fffec892780) at ../hw/nvme/nvme.h:539
>>> 540         return sq->ctrl;
>>> (gdb) backtrace
>>> #0  0x000055555597a9d5 in nvme_ctrl (req=0x7fffec892780) at ../hw/nvme/nvme.h:539
>>> #1  0x0000555555994360 in nvme_format_bh (opaque=0x5555579dd000) at ../hw/nvme/ctrl.c:5852
>>
>> Thanks, this looks like a race between the admin queue format's bottom
>> half and the controller reset tearing down that queue. I'll work with
>> Klaus on the qemu side (it looks like a well-placed qemu_bh_cancel()
>> should do it).
>>
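
For context, a minimal sketch of the bh-cancel idea, assuming a
hypothetical 'format_bh' field on NvmeCtrl and a simplified reset entry
point (neither name is from the actual code; qemu_bh_cancel() itself is
the real QEMU API, and the real fix is Klaus's patch linked below):

    #include "qemu/osdep.h"
    #include "qemu/main-loop.h"
    #include "nvme.h"

    /* Sketch only, not the actual patch: cancel the format operation's
     * pending bottom half before reset frees the queues it references.
     * 'format_bh' and 'nvme_ctrl_reset_sketch' are hypothetical names. */
    static void nvme_ctrl_reset_sketch(NvmeCtrl *n)
    {
        if (n->format_bh) {
            /* Keep nvme_format_bh() from running against a freed SQ. */
            qemu_bh_cancel(n->format_bh);
        }

        /* ... proceed with normal queue teardown ... */
    }
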
> 
> Yuck. Bug located and squelched, I think.
> 
> Jonathan, please try
> 
>   https://lore.kernel.org/qemu-devel/20221122081348.49963-2-its@irrelevant.dk/
> 
> This fixes the qemu crash, but I still see an "nvme still not live after
> 42 seconds!" failure from the test. I'm seeing A LOT of invalid
> submission queue doorbell writes:
> 
>   pci_nvme_ub_db_wr_invalid_sq in nvme_process_db: submission queue doorbell write for nonexistent queue, sqid=0, ignoring
> 
> Tested on 6.1-rc4.

Good change; it just defers the crash a bit for me:

Thread 1 "qemu-system-x86" received signal SIGSEGV, Segmentation fault.
[Switching to Thread 0x7ffff7554400 (LWP 559269)]
0x000055555598922e in nvme_enqueue_req_completion (cq=0x0, req=0x7fffec141310) at ../hw/nvme/ctrl.c:1390
1390        assert(cq->cqid == req->sq->cqid);
(gdb) backtrace
#0  0x000055555598922e in nvme_enqueue_req_completion (cq=0x0, req=0x7fffec141310) at ../hw/nvme/ctrl.c:1390
#1  0x000055555598a7a7 in nvme_misc_cb (opaque=0x7fffec141310, ret=0) at ../hw/nvme/ctrl.c:2002
#2  0x000055555599448a in nvme_do_format (iocb=0x55555770ccd0) at ../hw/nvme/ctrl.c:5891
#3  0x00005555559942a9 in nvme_format_ns_cb (opaque=0x55555770ccd0, ret=0) at ../hw/nvme/ctrl.c:5828
#4  0x0000555555dda018 in blk_aio_complete (acb=0x7fffec1fccd0) at ../block/block-backend.c:1501
#5  0x0000555555dda2fc in blk_aio_write_entry (opaque=0x7fffec1fccd0) at ../block/block-backend.c:1568
#6  0x0000555555f506b9 in coroutine_trampoline (i0=-331119632, i1=32767) at ../util/coroutine-ucontext.c:177
#7  0x00007ffff77c84e0 in __start_context () at ../sysdeps/unix/sysv/linux/x86_64/__start_context.S:91
#8  0x00007ffff4ff2bd0 in  ()
#9  0x0000000000000000 in  ()



