[PATCH v7 10/22] sched: Reject CPU affinity changes based on task_cpu_possible_mask()

Peter Zijlstra peterz at infradead.org
Wed May 26 10:56:51 PDT 2021


On Wed, May 26, 2021 at 05:12:49PM +0100, Will Deacon wrote:
> On Wed, May 26, 2021 at 05:15:51PM +0200, Peter Zijlstra wrote:
> > On Tue, May 25, 2021 at 04:14:20PM +0100, Will Deacon wrote:
> > > Reject explicit requests to change the affinity mask of a task via
> > > set_cpus_allowed_ptr() if the requested mask is not a subset of the
> > > mask returned by task_cpu_possible_mask(). This ensures that the
> > > 'cpus_mask' for a given task cannot contain CPUs which are incapable of
> > > executing it, except in cases where the affinity is forced.
> > > 
> > > Reviewed-by: Quentin Perret <qperret at google.com>
> > > Signed-off-by: Will Deacon <will at kernel.org>
> > > ---
> > >  kernel/sched/core.c | 4 ++++
> > >  1 file changed, 4 insertions(+)
> > > 
> > > diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> > > index 00ed51528c70..8ca7854747f1 100644
> > > --- a/kernel/sched/core.c
> > > +++ b/kernel/sched/core.c
> > > @@ -2346,6 +2346,7 @@ static int __set_cpus_allowed_ptr(struct task_struct *p,
> > >  				  u32 flags)
> > >  {
> > >  	const struct cpumask *cpu_valid_mask = cpu_active_mask;
> > > +	const struct cpumask *cpu_allowed_mask = task_cpu_possible_mask(p);
> > >  	unsigned int dest_cpu;
> > >  	struct rq_flags rf;
> > >  	struct rq *rq;
> > > @@ -2366,6 +2367,9 @@ static int __set_cpus_allowed_ptr(struct task_struct *p,
> > >  		 * set_cpus_allowed_common() and actually reset p->cpus_ptr.
> > >  		 */
> > >  		cpu_valid_mask = cpu_online_mask;
> > > +	} else if (!cpumask_subset(new_mask, cpu_allowed_mask)) {
> > > +		ret = -EINVAL;
> > > +		goto out;
> > >  	}
> > 
> > So what about the case where the 32-bit task is in-kernel and has
> > migration disabled? Surely we ought to still validate the new mask
> > against task_cpu_possible_mask().
> 
> That's a good question.
> 
> Given that 32-bit tasks in the kernel are running in 64-bit mode, we can
> actually tolerate them moving around arbitrarily as long as they _never_ try
> to return to userspace on a 64-bit-only CPU. I think this should be the case
> as long as we don't try to return to userspace with migration disabled, no?

Consider:

	8 CPUs; the lower 4 can run 32-bit tasks, the higher 4 cannot

	A - a 32-bit task		B

	sys_foo()
	  migrate_disable()
	  				sys_sched_setaffinity(A, 0xf0)
					  if (.. || is_migration_disabled(A))
					    // skips the task_cpu_possible_mask() check

					  __do_set_cpus_allowed();

	  migrate_enable()
	    __set_cpus_allowed_ptr(SCA_MIGRATE_ENABLE)
	      // frob ourselves somewhere in 0xf0
	  sysret
	  *BOOM*


That is, I'm thinking we ought to reject that sched_setaffinity() with
-EINVAL, since 0xf0 has no intersection with 0x0f.
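
For illustration, a rough sketch of one way to do that (not the patch as
posted; placing the check ahead of the special case is an assumption):

	static int __set_cpus_allowed_ptr(struct task_struct *p,
					  const struct cpumask *new_mask,
					  u32 flags)
	{
		const struct cpumask *cpu_valid_mask = cpu_active_mask;
		const struct cpumask *cpu_allowed_mask = task_cpu_possible_mask(p);
		...
		/*
		 * Validate new_mask against what the task can ever run on
		 * *before* the PF_KTHREAD / migration-disabled special case,
		 * so a migrate-disabled 32-bit task still gets -EINVAL for a
		 * 64-bit-only mask.
		 */
		if (!cpumask_subset(new_mask, cpu_allowed_mask)) {
			ret = -EINVAL;
			goto out;
		}

		if (p->flags & PF_KTHREAD || is_migration_disabled(p)) {
			/* Kernel threads may run on online && !active CPUs. */
			cpu_valid_mask = cpu_online_mask;
		}
		...
	}

Whether doing the check unconditionally is fine for kernel threads depends
on task_cpu_possible_mask() returning the full possible mask for tasks that
never run 32-bit userspace; if that doesn't hold, the check would need to
exclude PF_KTHREAD.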


