[PATCH V2 4/9] scsi: lpfc: use blk_mq_max_nr_hw_queues() to calculate io vectors
Justin Tee
justintee8345 at gmail.com
Wed Jul 26 15:12:16 PDT 2023
Hi Ming,
From version 1 of the patchset, I thought we had planned to put the
min_t() comparison directly above the pci_alloc_irq_vectors() call instead?
diff --git a/drivers/scsi/lpfc/lpfc_init.c b/drivers/scsi/lpfc/lpfc_init.c
index 3221a934066b..20410789e8b8 100644
--- a/drivers/scsi/lpfc/lpfc_init.c
+++ b/drivers/scsi/lpfc/lpfc_init.c
@@ -13025,6 +13025,8 @@ lpfc_sli4_enable_msix(struct lpfc_hba *phba)
 		flags |= PCI_IRQ_AFFINITY;
 	}
 
+	vectors = min_t(unsigned int, vectors, scsi_max_nr_hw_queues());
+
 	rc = pci_alloc_irq_vectors(phba->pcidev, 1, vectors, flags);
 	if (rc < 0) {
 		lpfc_printf_log(phba, KERN_INFO, LOG_INIT,
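
For clarity, a minimal sketch of the placement I have in mind (the function
name below is made up for illustration only, and scsi_max_nr_hw_queues() is
assumed to be the helper added earlier in this series that reports blk-mq's
hw queue limit):

#include <linux/pci.h>      /* pci_alloc_irq_vectors() */
#include <linux/minmax.h>   /* min_t() */
/* scsi_max_nr_hw_queues() comes from earlier patches in this series. */

/* Sketch only: clamp the requested vector count once, immediately before
 * allocation, so the cap applies to both the affinity and non-affinity
 * paths.
 */
static int lpfc_msix_alloc_sketch(struct pci_dev *pdev, unsigned int vectors,
				  unsigned int flags)
{
	vectors = min_t(unsigned int, vectors, scsi_max_nr_hw_queues());

	/* Request between 1 and 'vectors' MSI-X vectors; the PCI core may
	 * grant fewer than requested.
	 */
	return pci_alloc_irq_vectors(pdev, 1, vectors, flags);
}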
Thanks,
Justin
On Wed, Jul 26, 2023 at 2:40 AM Ming Lei <ming.lei at redhat.com> wrote:
>
> Take blk-mq's knowledge into account for calculating io queues.
>
> Fix the wrong queue mapping in the kdump kernel case.
>
> On arm and ppc64, 'maxcpus=1' is passed on the kdump kernel command line
> (see `Documentation/admin-guide/kdump/kdump.rst`), so num_possible_cpus()
> still returns all CPUs, because 'maxcpus=1' only brings up a single CPU
> core during boot.
>
> blk-mq sees a single queue in the kdump kernel, while from the driver's
> viewpoint there are still multiple queues. This inconsistency causes the
> driver to apply the wrong queue mapping when handling IO, and IO timeouts
> are triggered.
>
> Meanwhile, a single queue uses far fewer resources and reduces the risk
> of kernel failure.
>
> Cc: Justin Tee <justintee8345 at gmail.com>
> Cc: James Smart <james.smart at broadcom.com>
> Signed-off-by: Ming Lei <ming.lei at redhat.com>
> ---
> drivers/scsi/lpfc/lpfc_init.c | 2 ++
> 1 file changed, 2 insertions(+)
>
> diff --git a/drivers/scsi/lpfc/lpfc_init.c b/drivers/scsi/lpfc/lpfc_init.c
> index 3221a934066b..c546e5275108 100644
> --- a/drivers/scsi/lpfc/lpfc_init.c
> +++ b/drivers/scsi/lpfc/lpfc_init.c
> @@ -13022,6 +13022,8 @@ lpfc_sli4_enable_msix(struct lpfc_hba *phba)
>  		cpu = cpumask_first(aff_mask);
>  		cpu_select = lpfc_next_online_cpu(aff_mask, cpu);
>  	} else {
> +		vectors = min_t(unsigned int, vectors,
> +				scsi_max_nr_hw_queues());
>  		flags |= PCI_IRQ_AFFINITY;
>  	}
>
> --
> 2.40.1
>