[PATCH 3/3] Revert "lib/group_cpus.c: avoid acquiring cpu hotplug lock in group_cpus_evenly"
Daniel Wagner
dwagner at suse.de
Mon Mar 2 06:27:24 PST 2026
On Mon, Mar 02, 2026 at 10:12:49PM +0800, Ming Lei wrote:
> > Sure, I would like to add the lock back to group_cpus_evenly so it's
> > possible to add support for the isolcpu use case. For the isolcpus case,
> > it's necessary to access the cpu_online_mask when creating a
> > housekeeping cpu mask. I failed to find a good solution which doesn't
> > introduce horrible hacks (see Thomas' feedback on this [1]).
> >
> > Anyway, I am not totally set on this solution, but I think having a
> > proper lock in this code path would make the isolcpus extension way
> > cleaner.
>
> Then please include this patch with an explanation in your isolcpus
> patch set.
I didn't add it to the commit message because the code is not there yet;
I only mentioned it in the cover letter. But sure, I'll add this info.
> > What do you exactly mean with 'API hard to use'? The problem that the
> > caller/driver has to make sure it doesn't do anything like the nvme-pci
> > driver?
>
> This API is usually called in slow path, in which subsystem locks are often
> required, then lock dependency against cpus_read_lock is added.
Yes, that's the very reason I came up with this handshake protocol, which
only covers the block layer subsystem. I wonder if it would be possible
to do a lock-free version with a retry check at the end: when a CPU
hotplug event happened during the calculation, start over. For this,
some sort of generation count for CPU hotplug events would be handy.
Just thinking out loud.