[PATCH v7 01/10] lib/group_cpus: Add group_masks_cpus_evenly()
Daniel Wagner
dwagner at suse.de
Wed Sep 3 05:42:10 PDT 2025
On Fri, Jul 11, 2025 at 09:28:12AM +0100, John Garry wrote:
> > +/**
> > + * group_mask_cpus_evenly - Group all CPUs evenly per NUMA/CPU locality
> > + * @numgrps: number of groups
>
> this comment could be a bit more useful
>
> > + * @cpu_mask: CPU to consider for the grouping
>
> this is a CPU mask, and not a specific CPU index, right?
Yes, I've updated the documentation to:
/**
* group_mask_cpus_evenly - Group all CPUs evenly per NUMA/CPU locality
* @numgrps: number of cpumasks to create
* @mask: CPUs to consider for the grouping
 * @nummasks: number of initialized cpumasks
*
 * Return: cpumask array if successful, NULL otherwise. Only the CPUs
 * marked in the mask will be considered for the grouping, and each
 * element of the array holds the CPUs assigned to that group. nummasks
 * contains the number of initialized masks, which can be less than
 * numgrps.
*
* Try to put close CPUs from viewpoint of CPU and NUMA locality into
* same group, and run two-stage grouping:
* 1) allocate present CPUs on these groups evenly first
* 2) allocate other possible CPUs on these groups evenly
*
 * We guarantee in the resulting grouping that all CPUs are covered
 * and no CPU is assigned to multiple groups.
*/
struct cpumask *group_mask_cpus_evenly(unsigned int numgrps,
const struct cpumask *mask,
unsigned int *nummasks)
> > + ret = __group_cpus_evenly(0, numgrps, node_to_cpumask, cpu_mask, nmsk,
> > + masks);
>
> maybe it's personal taste, but I don't think that it's a good style to
> always pass through 'fail' labels, even if we have not failed in some
> step
I'd rather leave it as it is: it matches the existing code in
group_cpus_evenly, and alloc_node_to_cpumask does the same.
Consistency wins IMO.
More information about the Linux-nvme mailing list