[PATCH 1/3] nvme-pci: add and use print io-queues helper
Niklas Cassel
Niklas.Cassel at wdc.com
Wed Apr 26 09:37:43 PDT 2023
Hello Chaitanya,
On Wed, Apr 26, 2023 at 05:31:17AM -0700, Chaitanya Kulkarni wrote:
> Instaed of duplicating same code in every transport, add helper in the
s/Instaed/Instead/
in all three patches
> core to print the ctrl->io_queues, since all the transports are using
> same format to print the information we can safely replace repetative
s/repetative/repetitive/
in all three patches
> code by a centralize helper.
>
> Use that helper for nvme-pci transport.
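
I guess patches 2/3 and 3/3 will add matching calls in rdma.c and
tcp.c, something like the line below (just a sketch, untested, the
exact member names in the transport ctrl structs may differ):

        nvme_ctrl_print_io_queues(&ctrl->ctrl, ctrl->io_queues);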
>
> Signed-off-by: Chaitanya Kulkarni <kch at nvidia.com>
> ---
>  drivers/nvme/host/core.c | 10 ++++++++++
>  drivers/nvme/host/nvme.h |  1 +
>  drivers/nvme/host/pci.c  |  5 +----
>  3 files changed, 12 insertions(+), 4 deletions(-)
>
> diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
> index 518c759346f0..ec430947aaf7 100644
> --- a/drivers/nvme/host/core.c
> +++ b/drivers/nvme/host/core.c
> @@ -249,6 +249,16 @@ static void nvme_delete_ctrl_sync(struct nvme_ctrl *ctrl)
>  	nvme_put_ctrl(ctrl);
>  }
>  
> +void nvme_ctrl_print_io_queues(struct nvme_ctrl *ctrl, int io_queues[])
> +{
> +	dev_info(ctrl->device,
> +		"mapped %d/%d/%d default/read/poll queues.\n",
Here you have a full stop before the newline.
In the PCI print below, there is no full stop.
Looking at TCP and RDMA prints, they do have a full stop.
Which version do we want?
I think I prefer the version without a full stop, probably because
I'm used to seeing that print, but looking at core.c it seems to be
about 50/50 whether or not a full stop is used at the end of a print.
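
Just to illustrate, the no-full-stop variant that I'm used to would
be (untested):

	dev_info(ctrl->device, "mapped %d/%d/%d default/read/poll queues\n",
		 io_queues[HCTX_TYPE_DEFAULT],
		 io_queues[HCTX_TYPE_READ],
		 io_queues[HCTX_TYPE_POLL]);

Either way, we should probably be consistent across the transports.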
Kind regards,
Niklas
> +		io_queues[HCTX_TYPE_DEFAULT],
> +		io_queues[HCTX_TYPE_READ],
> +		io_queues[HCTX_TYPE_POLL]);
> +}
> +EXPORT_SYMBOL_GPL(nvme_ctrl_print_io_queues);
> +
>  static blk_status_t nvme_error_status(u16 status)
>  {
>  	switch (status & 0x7ff) {
> diff --git a/drivers/nvme/host/nvme.h b/drivers/nvme/host/nvme.h
> index bf46f122e9e1..4000526cbca0 100644
> --- a/drivers/nvme/host/nvme.h
> +++ b/drivers/nvme/host/nvme.h
> @@ -767,6 +767,7 @@ void nvme_unfreeze(struct nvme_ctrl *ctrl);
>  void nvme_wait_freeze(struct nvme_ctrl *ctrl);
>  int nvme_wait_freeze_timeout(struct nvme_ctrl *ctrl, long timeout);
>  void nvme_start_freeze(struct nvme_ctrl *ctrl);
> +void nvme_ctrl_print_io_queues(struct nvme_ctrl *ctrl, int io_queues[]);
>  
>  static inline enum req_op nvme_req_op(struct nvme_command *cmd)
>  {
> diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
> index 593f86323e25..771d2bf5f402 100644
> --- a/drivers/nvme/host/pci.c
> +++ b/drivers/nvme/host/pci.c
> @@ -2356,10 +2356,7 @@ static int nvme_setup_io_queues(struct nvme_dev *dev)
>  		nvme_suspend_io_queues(dev);
>  		goto retry;
>  	}
> -	dev_info(dev->ctrl.device, "%d/%d/%d default/read/poll queues\n",
> -		 dev->io_queues[HCTX_TYPE_DEFAULT],
> -		 dev->io_queues[HCTX_TYPE_READ],
> -		 dev->io_queues[HCTX_TYPE_POLL]);
> +	nvme_ctrl_print_io_queues(&dev->ctrl, dev->io_queues);
>  	return 0;
>  out_unlock:
>  	mutex_unlock(&dev->shutdown_lock);
> --
> 2.40.0
>
>