[PATCH v3 1/9] RDMA/core: Add implicit per-device completion queue pools

Bart Van Assche Bart.VanAssche at wdc.com
Thu Nov 9 09:33:35 PST 2017


On Thu, 2017-11-09 at 19:31 +0200, Sagi Grimberg wrote:
> > > +static int ib_alloc_cqs(struct ib_device *dev, int nr_cqes,
> > > +        enum ib_poll_context poll_ctx)
> > > +{
> > > +    LIST_HEAD(tmp_list);
> > > +    struct ib_cq *cq;
> > > +    unsigned long flags;
> > > +    int nr_cqs, ret, i;
> > > +
> > > +    /*
> > > +     * Allocate at least as many CQEs as requested, and otherwise
> > > +     * use a reasonable batch size so that we can share CQs between
> > > +     * multiple users instead of allocating a larger number of CQs.
> > > +     */
> > > +    nr_cqes = max(nr_cqes, min(dev->attrs.max_cqe, IB_CQE_BATCH));
> > 
> > did you mean min() ?
> 
> No, I meant max. If we choose the CQ size, we take the min of the
> default and the device capability; if the user chooses, we trust that
> it asked for no more than the device capability (and if not, the
> allocation will fail, as it should).

Hello Sagi,

How about the following:

	min(dev->attrs.max_cqe, max(nr_cqes, IB_CQE_BATCH))

Bart.
