[patch 14/19] softirq: Make softirq control and processing RT aware

Frederic Weisbecker frederic at kernel.org
Thu Nov 19 19:26:21 EST 2020


On Fri, Nov 13, 2020 at 03:02:21PM +0100, Thomas Gleixner wrote:
> +void __local_bh_disable_ip(unsigned long ip, unsigned int cnt)
> +{
> +	unsigned long flags;
> +	int newcnt;
> +
> +	WARN_ON_ONCE(in_hardirq());
> +
> +	/* First entry of a task into a BH disabled section? */
> +	if (!current->softirq_disable_cnt) {
> +		if (preemptible()) {
> +			local_lock(&softirq_ctrl.lock);
> +			rcu_read_lock();

Ah, you take rcu_read_lock() here because local_bh_disable() implies an RCU
read-side critical section, and since it no longer disables preemption, that
has to be done explicitly?

Perhaps local_lock() should itself imply rcu_read_lock()?
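
Something along these lines, with a hypothetical local_lock_rcu() /
local_unlock_rcu() pair, just to illustrate the idea (untested sketch):

	/*
	 * Hypothetical helpers: acquire the local lock and enter an RCU
	 * read-side critical section in one go, so callers don't have to
	 * pair local_lock() with an explicit rcu_read_lock() themselves.
	 */
	#define local_lock_rcu(lock)			\
		do {					\
			local_lock(lock);		\
			rcu_read_lock();		\
		} while (0)

	#define local_unlock_rcu(lock)			\
		do {					\
			rcu_read_unlock();		\
			local_unlock(lock);		\
		} while (0)

The first-entry path above would then become local_lock_rcu(&softirq_ctrl.lock),
with the matching local_unlock_rcu() in __local_bh_enable().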

> +		} else {
> +			DEBUG_LOCKS_WARN_ON(this_cpu_read(softirq_ctrl.cnt));
> +		}
> +	}
> +
> +	preempt_disable();

Do you really need to disable preemption here? Migration is already disabled by
local_lock(), and I can't see a scenario where the code below could conflict
with a preempting task.

> +	/*
> +	 * Track the per CPU softirq disabled state. On RT this is per CPU
> +	 * state to allow preemption of bottom half disabled sections.
> +	 */
> +	newcnt = this_cpu_add_return(softirq_ctrl.cnt, cnt);

__this_cpu_add_return() ?
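
IOW, if there really is no way for a preempting task to trip over this,
something like the below might be enough (untested sketch, assuming the
migrate-disable from local_lock(), resp. the outer BH-disabled section for
nested calls, is sufficient protection for the per-CPU counter):

	/*
	 * Untested sketch: rely on migrate-disable from local_lock()
	 * (or on the outer BH-disabled section for nested calls) instead
	 * of a preempt_disable()/preempt_enable() pair around the update.
	 */
	newcnt = __this_cpu_add_return(softirq_ctrl.cnt, cnt);
	current->softirq_disable_cnt = newcnt;

with __this_cpu_sub_return() doing the same on the enable side.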

> +	/*
> +	 * Reflect the result in the task state to prevent recursion on the
> +	 * local lock and to make softirq_count() & al work.
> +	 */
> +	current->softirq_disable_cnt = newcnt;
> +
> +	if (IS_ENABLED(CONFIG_TRACE_IRQFLAGS) && newcnt == cnt) {
> +		raw_local_irq_save(flags);
> +		lockdep_softirqs_off(ip);
> +		raw_local_irq_restore(flags);
> +	}
> +	preempt_enable();
> +}
> +EXPORT_SYMBOL(__local_bh_disable_ip);
> +
> +static void __local_bh_enable(unsigned int cnt, bool unlock)
> +{
> +	unsigned long flags;
> +	int newcnt;
> +
> +	DEBUG_LOCKS_WARN_ON(current->softirq_disable_cnt !=
> +			    this_cpu_read(softirq_ctrl.cnt));

__this_cpu_read() ? Although that's lockdep-only, so not too important.

> +
> +	preempt_disable();

Same question about preempt_disable().

> +	if (IS_ENABLED(CONFIG_TRACE_IRQFLAGS) && softirq_count() == cnt) {
> +		raw_local_irq_save(flags);
> +		lockdep_softirqs_on(_RET_IP_);
> +		raw_local_irq_restore(flags);
> +	}
> +
> +	newcnt = this_cpu_sub_return(softirq_ctrl.cnt, cnt);

__this_cpu_sub_return() ?

> +	current->softirq_disable_cnt = newcnt;
> +	preempt_enable();
> +
> +	if (!newcnt && unlock) {
> +		rcu_read_unlock();
> +		local_unlock(&softirq_ctrl.lock);
> +	}
> +}

Thanks.


