A possible divide by zero bug in alloc_nodes_vectors

Thomas Gleixner tglx at linutronix.de
Fri May 14 13:04:12 PDT 2021


On Fri, May 14 2021 at 19:31, Yiyuan guo wrote:

> In kernel/irq/affinity.c, the function alloc_nodes_vectors has the
> following code:
>
> static void alloc_nodes_vectors(unsigned int numvecs,
>                 cpumask_var_t *node_to_cpumask,
>                 const struct cpumask *cpu_mask,
>                 const nodemask_t nodemsk,
>                 struct cpumask *nmsk,
>                 struct node_vectors *node_vectors)
> {
>     unsigned n, remaining_ncpus = 0;
>     ...
>     for_each_node_mask(n, nodemsk) {
>         ...
>         ncpus = cpumask_weight(nmsk);
>
>         if (!ncpus)
>             continue;
>         remaining_ncpus += ncpus;
>         ...
>     }
>
>     numvecs = min_t(unsigned, remaining_ncpus, numvecs);
>     ...
>     for (n = 0; n < nr_node_ids; n++) {
>         ...
>         WARN_ON_ONCE(numvecs == 0);
>         ...
>         nvectors = max_t(unsigned, 1,
>                        numvecs * ncpus / remaining_ncpus);
>     }
> }
>
> The variable remaining_ncpus stays 0 if cpumask_weight(nmsk) returns 0
> on every iteration of the loop. Since remaining_ncpus is later used as
> a divisor, this looks like a potential divide by zero.

How so? It's guaranteed that there is at least ONE node which is not
empty. So remaining_ncpus cannot be 0.
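
For illustration only, here is a minimal stand-alone model of that
accounting (plain C, not the kernel code; the per-node CPU counts are
made up and stand in for cpumask_weight(nmsk) on each node in nodemsk).
As long as at least one node contributes CPUs, the divisor is non-zero:

/* Stand-alone model, not kernel code: node_ncpus[] stands in for
 * cpumask_weight(nmsk) per node in nodemsk (values are made up). */
#include <stdio.h>

int main(void)
{
	unsigned int node_ncpus[] = { 4, 2, 0, 6 };	/* node 2 is empty */
	unsigned int nnodes = sizeof(node_ncpus) / sizeof(node_ncpus[0]);
	unsigned int numvecs = 8, remaining_ncpus = 0;
	unsigned int n;

	/* first pass: empty nodes are skipped, every other node adds
	 * its CPU count, so remaining_ncpus > 0 unless ALL nodes are
	 * empty -- which is the case ruled out above */
	for (n = 0; n < nnodes; n++) {
		if (!node_ncpus[n])
			continue;
		remaining_ncpus += node_ncpus[n];
	}

	if (numvecs > remaining_ncpus)
		numvecs = remaining_ncpus;

	/* second pass: the division is safe because remaining_ncpus
	 * is non-zero; the clamp mirrors max_t(unsigned, 1, ...) */
	for (n = 0; n < nnodes; n++) {
		unsigned int nvectors = numvecs * node_ncpus[n] / remaining_ncpus;

		if (nvectors < 1)
			nvectors = 1;
		printf("node %u: ncpus=%u nvectors=%u\n",
		       n, node_ncpus[n], nvectors);
	}
	return 0;
}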

Thanks,

        tglx