[bug report] Unable to handle kernel paging request observed during blktest nvme/047

Chaitanya Kulkarni chaitanyak at nvidia.com
Tue Jul 18 09:02:38 PDT 2023


On 7/18/2023 5:52 AM, Yi Zhang wrote:
> Hello
> 
> I connected to a remote test server and was running stress blktests
> nvme/tcp nvme/047 when my local PC's network suddenly disconnected.
> When I reconnected to the remote test server, I found it had panicked;
> here is the full log.
> 
> [ 2775.059752] run blktests nvme/047 at 2023-07-18 07:14:35
> [ 2775.077916] loop0: detected capacity change from 0 to 2097152
> [ 2775.086905] nvmet: adding nsid 1 to subsystem blktests-subsystem-1
> [ 2775.097228] nvmet_tcp: enabling port 0 (127.0.0.1:4420)
> [ 2775.107654] nvmet: creating nvm controller 1 for subsystem
> blktests-subsystem-1 for NQN
> nqn.2014-08.org.nvmexpress:uuid:0f01fb42-9f7f-4856-b0b3-51e60b8de349.
> [ 2775.122214] nvme nvme1: creating 128 I/O queues.
> [ 2775.131320] nvme nvme1: mapped 128/0/0 default/read/poll queues.
> [ 2775.156774] nvme nvme1: new ctrl: NQN "blktests-subsystem-1", addr
> 127.0.0.1:4420
> [ 2776.457015] nvme nvme1: Removing ctrl: NQN "blktests-subsystem-1"
> [ 2776.819885] nvmet: creating nvm controller 2 for subsystem
> blktests-subsystem-1 for NQN
> nqn.2014-08.org.nvmexpress:uuid:0f01fb42-9f7f-4856-b0b3-51e60b8de349.
> [ 2776.834469] nvme nvme1: creating 128 I/O queues.
> [ 2776.843587] nvme nvme1: mapped 128/0/0 default/read/poll queues.
> [ 2776.868188] nvme nvme1: new ctrl: NQN "blktests-subsystem-1", addr
> 127.0.0.1:4420
> [ 2777.130742] nvme nvme1: Removing ctrl: NQN "blktests-subsystem-1"
> [ 2778.008799] run blktests nvme/047 at 2023-07-18 07:14:38
> [ 2778.026652] loop0: detected capacity change from 0 to 2097152
> [ 2778.035601] nvmet: adding nsid 1 to subsystem blktests-subsystem-1
> [ 2778.046391] nvmet_tcp: enabling port 0 (127.0.0.1:4420)
> [ 2778.057017] nvmet: creating nvm controller 1 for subsystem
> blktests-subsystem-1 for NQN
> nqn.2014-08.org.nvmexpress:uuid:0f01fb42-9f7f-4856-b0b3-51e60b8de349.
> [ 2778.071593] nvme nvme1: creating 128 I/O queues.
> [ 2778.080719] nvme nvme1: mapped 128/0/0 default/read/poll queues.
> [ 2778.106150] nvme nvme1: new ctrl: NQN "blktests-subsystem-1", addr
> 127.0.0.1:4420
> [ 2779.409008] nvme nvme1: Removing ctrl: NQN "blktests-subsystem-1"
> [ 2779.749743] nvmet: creating nvm controller 2 for subsystem
> blktests-subsystem-1 for NQN
> nqn.2014-08.org.nvmexpress:uuid:0f01fb42-9f7f-4856-b0b3-51e60b8de349.
> [ 2779.764303] nvme nvme1: creating 128 I/O queues.
> [ 2779.773359] nvme nvme1: mapped 128/0/0 default/read/poll queues.
> [ 2779.797745] nvme nvme1: new ctrl: NQN "blktests-subsystem-1", addr
> 127.0.0.1:4420
> [ 2780.053958] nvme nvme1: Removing ctrl: NQN "blktests-subsystem-1"
> [ 2780.350049] ------------[ cut here ]------------
> [ 2780.354658] WARNING: CPU: 118 PID: 1874 at kernel/workqueue.c:1635
> __queue_work+0x3cc/0x460
> [ 2780.362999] Modules linked in: nvmet_tcp nvmet nvme_fabrics loop
> rfkill sunrpc vfat fat ast acpi_ipmi drm_shmem_helper ipmi_ssif
> arm_spe_pmu drm_kms_helper ipmi_devintf ipmi_msghandler arm_cmn
> arm_dmc620_pmu arm_dsu_pmu cppc_cpufreq fuse drm xfs libcrc32c
> crct10dif_ce nvme ghash_ce igb sha2_ce nvme_core sha256_arm64 sha1_ce
> sbsa_gwdt i2c_designware_platform nvme_common i2c_algo_bit
> i2c_designware_core xgene_hwmon dm_mirror dm_region_hash dm_log dm_mod
> [last unloaded: nvme_tcp]
> [ 2780.405219] CPU: 118 PID: 1874 Comm: kworker/118:1H Kdump: loaded
> Not tainted 6.5.0-rc2+ #1
> [ 2780.413557] Hardware name: GIGABYTE R152-P31-00/MP32-AR1-00, BIOS
> F31n (SCP: 2.10.20220810) 09/30/2022
> [ 2780.422848] Workqueue: kblockd blk_mq_run_work_fn
> [ 2780.427542] pstate: 204000c9 (nzCv daIF +PAN -UAO -TCO -DIT -SSBS BTYPE=--)
> [ 2780.434490] pc : __queue_work+0x3cc/0x460
> [ 2780.438487] lr : __queue_work+0x408/0x460
> [ 2780.442484] sp : ffff800086ec3ba0
> [ 2780.445785] x29: ffff800086ec3ba0 x28: ffff07ffb8a4d6c0 x27: ffff07fff7bf1400
> [ 2780.452908] x26: ffff07ffe52e0af8 x25: ffff07ffb8a4d708 x24: 0000000000000000
> [ 2780.460030] x23: ffff800086ec3cd8 x22: ffff07ff94d70000 x21: 0000000000000000
> [ 2780.467152] x20: ffff0800814f2400 x19: ffff07ff94d70008 x18: 0000000000000000
> [ 2780.474275] x17: 0000000000000000 x16: ffffd3eb60e7a5e0 x15: 0000000000000000
> [ 2780.481397] x14: 0000000000000000 x13: 0000000000000038 x12: 0101010101010101
> [ 2780.488519] x11: 7f7f7f7f7f7f7f7f x10: fefefefefefefeff x9 : ffffd3eb60e7a588
> [ 2780.495641] x8 : fefefefefefefeff x7 : 0000000000000008 x6 : 000000010003c8e4
> [ 2780.502763] x5 : ffff07ffe52e4db0 x4 : 0000000000000000 x3 : 0000000000000000
> [ 2780.509885] x2 : 0000000000000000 x1 : 0000000004208060 x0 : ffff07ff84c67e00
> [ 2780.517008] Call trace:
> [ 2780.519441]  __queue_work+0x3cc/0x460
> [ 2780.523091]  queue_work_on+0x70/0xc0
> [ 2780.526654]  0xffffd3eb3893ff74
> [ 2780.529784]  blk_mq_dispatch_rq_list+0x148/0x578
> [ 2780.534389]  __blk_mq_sched_dispatch_requests+0xb4/0x1b8
> [ 2780.539689]  blk_mq_sched_dispatch_requests+0x40/0x80
> [ 2780.544727]  blk_mq_run_work_fn+0x44/0x98
> [ 2780.548723]  process_one_work+0x1f4/0x488
> [ 2780.552720]  worker_thread+0x74/0x420
> [ 2780.556369]  kthread+0x100/0x110
> [ 2780.559585]  ret_from_fork+0x10/0x20
> [ 2780.563148] ---[ end trace 0000000000000000 ]---
> [ 2789.825110] nvmet: ctrl 2 keep-alive timer (5 seconds) expired!
> [ 2789.831054] nvmet: ctrl 2 fatal error occurred!
> [ 2789.835585] Unable to handle kernel paging request at virtual
> address ffffd3eb3893e448
> [ 2789.843489] Mem abort info:
> [ 2789.846272]   ESR = 0x0000000086000007
> [ 2789.850007]   EC = 0x21: IABT (current EL), IL = 32 bits
> [ 2789.855309]   SET = 0, FnV = 0
> [ 2789.858350]   EA = 0, S1PTW = 0
> [ 2789.861478]   FSC = 0x07: level 3 translation fault
> [ 2789.866344] swapper pgtable: 4k pages, 48-bit VAs, pgdp=0000080457515000
> [ 2789.873032] [ffffd3eb3893e448] pgd=1000080ffffff003,
>
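
For reference, the reproducer described above amounts to running
nvme/047 against the TCP transport in a loop. A minimal sketch of such
a loop, assuming a local blktests checkout (the path and the exact
stress-loop form are assumptions, not taken from the report):

  # run blktests nvme/047 over nvme/tcp until it fails or the box panics
  cd ~/blktests                            # assumed checkout location
  while nvme_trtype=tcp ./check nvme/047; do
          :                                # loop on success, stop on first failure
  done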

Can you bisect the issue?
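
If a bisect is feasible, a typical run with nvme/047 as the reproducer
might look like the sketch below; only v6.5-rc2 (the crashing kernel)
comes from the report, the good revision is a placeholder:

  # in the kernel source tree; rebuild and boot the kernel at every step
  git bisect start
  git bisect bad v6.5-rc2          # kernel version from the crash log
  git bisect good v6.4             # placeholder last-known-good release
  # after booting each candidate kernel, run the reproducer and report:
  (cd ~/blktests && nvme_trtype=tcp ./check nvme/047) \
      && git bisect good || git bisect bad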

-ck



