[PATCH v2 2/3] nvme-fc: eliminate terminate_io use by nvme_fc_error_recovery

James Smart james.smart at broadcom.com
Mon Nov 23 16:44:50 EST 2020



On 11/19/2020 2:51 AM, Daniel Wagner wrote:
> Hi James,
>
> On Fri, Oct 23, 2020 at 03:27:51PM -0700, James Smart wrote:
>> nvme_fc_error_recovery() special cases handling when in CONNECTING state
>> and calls __nvme_fc_terminate_io(). __nvme_fc_terminate_io() itself
>> special cases CONNECTING state and calls the routine to abort outstanding
>> ios.
>>
>> Simplify the sequence by putting the call to abort outstanding ios directly
>> in nvme_fc_error_recovery.
>>
>> Move the location of __nvme_fc_abort_outstanding_ios(), and
>> nvme_fc_terminate_exchange() which is called by it, to avoid adding
>> function prototypes for nvme_fc_error_recovery().
> During local testing I run into this problem:
>
>   BUG: scheduling while atomic: swapper/37/0/0x00000100
>   Modules linked in: iscsi_ibft(E) iscsi_boot_sysfs(E) rfkill(E) intel_rapl_msr(E) intel_rapl_common(E) sb_edac(E) x86_pkg_temp_thermal(E) intel_powerclamp(E) ext4(E) nls_iso8859_1(E) coretemp(E) nls_cp437(E) crc16(E) kvm_intel(E) mbcache(E) jbd2(E) kvm(E) vfat(E) irqbypass(E) crc32_pclmul(E) fat(E) ghash_clmulni_intel(E) iTCO_wdt(E) lpfc(E) iTCO_vendor_support(E) aesni_intel(E) nvmet_fc(E) aes_x86_64(E) ipmi_ssif(E) crypto_simd(E) nvmet(E) bnx2x(E) cryptd(E) glue_helper(E) pcspkr(E) lpc_ich(E) ipmi_si(E) tg3(E) mdio(E) ioatdma(E) hpilo(E) mfd_core(E) hpwdt(E) ipmi_devintf(E) configfs(E) libphy(E) dca(E) ipmi_msghandler(E) button(E) btrfs(E) libcrc32c(E) xor(E) raid6_pq(E) mgag200(E) drm_vram_helper(E) sd_mod(E) ttm(E) i2c_algo_bit(E) qla2xxx(E) drm_kms_helper(E) syscopyarea(E) nvme_fc(E) sysfillrect(E) sysimgblt(E) nvme_fabrics(E) uhci_hcd(E) fb_sys_fops(E) ehci_pci(E) ehci_hcd(E) nvme_core(E) crc32c_intel(E) scsi_transport_fc(E) drm(E) usbcore(E) hpsa(E) scsi_transport_sas(E)
>    wmi(E) sg(E) dm_multipath(E) dm_mod(E) scsi_dh_rdac(E) scsi_dh_emc(E) scsi_dh_alua(E) scsi_mod(E) efivarfs(E)
>   Supported: No, Unreleased kernel
>   CPU: 37 PID: 0 Comm: swapper/37 Tainted: G            EL      5.3.18-0.g7362c5c-default #1 SLE15-SP2 (unreleased)
>   Hardware name: HP ProLiant DL580 Gen9/ProLiant DL580 Gen9, BIOS U17 10/21/2019
>   Call Trace:
>    <IRQ>
>    dump_stack+0x66/0x8b
>    __schedule_bug+0x51/0x70
>    __schedule+0x697/0x750
>    schedule+0x2f/0xa0
>    schedule_timeout+0x1dd/0x300
>    ? lpfc_sli4_fp_handle_fcp_wcqe.isra.31+0x146/0x390 [lpfc]
>    ? update_group_capacity+0x25/0x1b0
>    wait_for_completion+0xba/0x140
>    ? wake_up_q+0xa0/0xa0
>    __wait_rcu_gp+0x110/0x130
>    synchronize_rcu+0x55/0x80
>    ? __call_rcu+0x4e0/0x4e0
>    ? __bpf_trace_rcu_invoke_callback+0x10/0x10
>    __nvme_fc_abort_outstanding_ios+0x5f/0x90 [nvme_fc]
>    nvme_fc_error_recovery+0x25/0x70 [nvme_fc]
>    nvme_fc_fcpio_done+0x243/0x400 [nvme_fc]
>    lpfc_sli4_nvme_xri_aborted+0x62/0x100 [lpfc]
>    lpfc_sli4_sp_handle_abort_xri_wcqe.isra.56+0x4c/0x170 [lpfc]
>    ? lpfc_sli4_fp_handle_cqe+0x8b/0x490 [lpfc]
>    lpfc_sli4_fp_handle_cqe+0x8b/0x490 [lpfc]
>    __lpfc_sli4_process_cq+0xfd/0x270 [lpfc]
>    ? lpfc_sli4_sp_handle_abort_xri_wcqe.isra.56+0x170/0x170 [lpfc]
>    __lpfc_sli4_hba_process_cq+0x3c/0x110 [lpfc]
>    lpfc_cq_poll_hdler+0x16/0x20 [lpfc]
>    irq_poll_softirq+0x88/0x110
>    __do_softirq+0xe3/0x2dc
>    irq_exit+0xd5/0xe0
>    do_IRQ+0x7f/0xd0
>    common_interrupt+0xf/0xf
>    </IRQ>
>
>
> I think we can't move the __nvme_fc_abort_outstanding_ios() into this
> path as we are still running in IRQ context.
>
> Thanks,
> Daniel
>

Daniel,

I agree with you. This was brought about by lpfc converting to use 
blk_irq polling, which means the completion path can now run in softirq 
context. I'll put something together for the transport, as it is 
reasonable to expect other drivers to use blk_irq polling as well.

-- james
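
For reference, the usual remedy for a sleeping call (here the 
synchronize_rcu() inside __nvme_fc_abort_outstanding_ios()) reached from 
IRQ/softirq context is to defer the blocking work to process context via 
a work item. A minimal sketch of that pattern follows; it is a 
hypothetical illustration, not the actual transport patch, and the 
ioerr_work field and nvme_fc_ioerr_work() handler are assumed names:

```c
/* Hypothetical sketch: defer the blocking error-recovery path to
 * process context, since fcpio completion handlers may now run in
 * softirq (blk_irq poll) context where synchronize_rcu() cannot sleep.
 */
static void nvme_fc_ioerr_work(struct work_struct *work)
{
	struct nvme_fc_ctrl *ctrl =
		container_of(work, struct nvme_fc_ctrl, ioerr_work);

	/* Process context: safe to sleep while aborting outstanding ios. */
	nvme_fc_error_recovery(ctrl, "transport detected io error");
}

/* In the completion path (possibly softirq context), schedule the
 * work item instead of calling nvme_fc_error_recovery() directly: */
	queue_work(nvme_fc_wq, &ctrl->ioerr_work);
```

The work item would be initialized with INIT_WORK() at controller 
creation and flushed/cancelled during teardown so it cannot run against 
a freed controller.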



