[PATCH 1/3] nvme-fabrics: add queue setup helpers
Sagi Grimberg
sagi at grimberg.me
Wed Mar 22 02:07:59 PDT 2023
>>> +unsigned int nvme_nr_io_queues(struct nvmf_ctrl_options *opts)
>>> +{
>>> +	unsigned int nr_io_queues;
>>> +
>>> +	nr_io_queues = min(opts->nr_io_queues, num_online_cpus());
>>> +	nr_io_queues += min(opts->nr_write_queues, num_online_cpus());
>>> +	nr_io_queues += min(opts->nr_poll_queues, num_online_cpus());
>>> +
>>> +	return nr_io_queues;
>>> +}
>>> +EXPORT_SYMBOL_GPL(nvme_nr_io_queues);
>>
>> Given that it is shared only with tcp/rdma, maybe rename it
>> to nvmf_ip_nr_io_queues.
>
> Even if the other transports don't use it, nothing is really IP
> specific here, so I don't think that's too useful. But I'd use
> the nvmf_ prefix like other functions in this file.
OK.
>
> Just a personal choice, but I'd write this as:
>
> 	return min(opts->nr_io_queues, num_online_cpus()) +
> 		min(opts->nr_write_queues, num_online_cpus()) +
> 		min(opts->nr_poll_queues, num_online_cpus());
>
>>> +EXPORT_SYMBOL_GPL(nvme_set_io_queues);
>>
>> nvmf_ip_set_io_queues. Unless you think that this can be shared with
>> pci/fc as well?
>
> Again I'd drop the _ip as nothing here is IP specific. FC might or might
> not eventually use it, and for PCI we don't have the opts structure
> anyway (never mind that this sits in fabrics.c).
OK
>
>>> +void nvme_map_queues(struct blk_mq_tag_set *set, struct nvme_ctrl *ctrl,
>>> +		struct ib_device *dev, u32 io_queues[HCTX_MAX_TYPES])
>>
>> Ugh, the ib_device input that may be null is bad...
>> I think that we can kill blk_mq_rdma_map_queues altogether
>> and unify the two.
>
> Yes, I don't think anything touching an ib_device should be in
> common code.
>