A possible divide by zero bug in alloc_nodes_vectors
Yiyuan guo
yguoaz at gmail.com
Fri May 14 04:31:24 PDT 2021
In kernel/irq/affinity.c, the function alloc_nodes_vectors has the
following code:
static void alloc_nodes_vectors(unsigned int numvecs,
                                cpumask_var_t *node_to_cpumask,
                                const struct cpumask *cpu_mask,
                                const nodemask_t nodemsk,
                                struct cpumask *nmsk,
                                struct node_vectors *node_vectors)
{
        unsigned n, remaining_ncpus = 0;
        ...
        for_each_node_mask(n, nodemsk) {
                ...
                ncpus = cpumask_weight(nmsk);
                if (!ncpus)
                        continue;
                remaining_ncpus += ncpus;
                ...
        }

        numvecs = min_t(unsigned, remaining_ncpus, numvecs);
        ...
        for (n = 0; n < nr_node_ids; n++) {
                ...
                WARN_ON_ONCE(numvecs == 0);
                ...
                nvectors = max_t(unsigned, 1,
                                 numvecs * ncpus / remaining_ncpus);
        }
}
The variable remaining_ncpus stays 0 if cpumask_weight(nmsk) returns 0
for every node visited by for_each_node_mask(), because each iteration
then takes the continue branch. remaining_ncpus is later used as a
divisor in the second loop, so this is a potential divide by zero.
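To make the failing path concrete, here is a rough value trace. This is
only a sketch: it assumes every node in nodemsk ends up with an empty
nmsk (for example because cpu_mask contains none of that node's CPUs),
and that none of the checks elided above skip the division in that case:

        /* first loop: every iteration sees an empty nmsk */
        ncpus = cpumask_weight(nmsk);                  /* == 0           */
        if (!ncpus)
                continue;                              /* always taken   */
        /* after the loop: remaining_ncpus == 0 */

        numvecs = min_t(unsigned, remaining_ncpus, numvecs);  /* == 0    */

        /* second loop */
        WARN_ON_ONCE(numvecs == 0);                    /* fires, goes on */
        nvectors = max_t(unsigned, 1,
                         numvecs * ncpus / remaining_ncpus);  /* 0 / 0   */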
Notice that the code already warns explicitly when numvecs is zero. And
since numvecs is likely equal to remaining_ncpus at that point (because
of the assignment numvecs = min_t(unsigned, remaining_ncpus, numvecs)),
we should probably also check remaining_ncpus before the division; the
warning alone does not prevent it.
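For example, a guard right after the min_t() assignment could both warn
and skip the later division. This is only a rough sketch of the idea,
not a tested patch, and the maintainers may prefer a different fix:

        numvecs = min_t(unsigned, remaining_ncpus, numvecs);
        /* sketch: never use remaining_ncpus == 0 as a divisor below */
        if (WARN_ON_ONCE(!remaining_ncpus))
                return;

Whether returning early is the right behavior for the callers is a
separate question; the point is only that the divisor itself should be
checked rather than just warned about indirectly through numvecs.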