[PATCH] Fix interrupt distribution in ppc970
Milton Miller
miltonm at bga.com
Sun Jun 10 21:58:10 EDT 2007
On Jun 6, 2007, at 6:31 AM, Mohan Kumar M wrote:
> I updated the patch with the correct tab spacing and removed the
> unnecessary "else".
> On some of the PPC970 based systems, interrupts would be distributed
> to offline cpus even when booted with "maxcpus=1". So check whether
> the cpu online map and the cpu present map are equal. If they are
> equal, default_distrib_server is used as the interrupt server;
> otherwise the boot cpu (default_server) is used as the interrupt
> server.
>
> In addition to this, if an interrupt is assigned to a specific cpu
> (i.e. smp affinity) and that cpu is not online, the earlier code used
> to return default_distrib_server as the interrupt server. This patch
> introduces an additional parameter to the get_irq_server function,
> strict_check; based on this parameter, if the cpu is not online,
> either default_distrib_server or -1 is returned.
The code is structured cleanly. However, when testing this patch, I
found that (1) you print the mask as a cpulist instead of a cpumask.
Since the user writes a cpumask to /proc/irq/xx/smp_affinity, it would
make more sense to print a mask in the error message.
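Something along these lines would match the format the user wrote (a
sketch against the 2.6-era cpumask API; the buffer name is mine):

	char mask_buf[128];

	/* Print the affinity mask in the same hex-mask format the user
	 * wrote to /proc/irq/xx/smp_affinity, not as a cpu list. */
	cpumask_scnprintf(mask_buf, sizeof(mask_buf), cpumask);
	printk(KERN_WARNING "xics_set_affinity: No online cpus in "
	       "the mask %s for irq %d\n", mask_buf, virq);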
However, this is all moot, because (2) the common code in
kernel/irq/proc.c checks that a cpu in the mask is online and returns
-EINVAL to the user without calling the ->set_affinity hook (we have no
select_smp_affinity hook in arch code). Unless there is another path
that calls ->set_affinity, the only way to trigger the case of no
online cpu is to race setting the affinity against taking a cpu
offline.
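For reference, the write path in kernel/irq/proc.c does roughly the
following before the architecture hook is ever reached (a paraphrased
sketch from memory, not the exact source):

	/* irq_affinity_write_proc(), paraphrased: a mask containing no
	 * online cpu is rejected before ->set_affinity is called. */
	cpumask_t new_value;

	if (cpumask_parse_user(buffer, count, new_value))
		return -EINVAL;

	if (!cpus_intersects(new_value, cpu_online_map))
		return -EINVAL;

	irq_desc[irq].chip->set_affinity(irq, new_value);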
Does anyone know of another path that sets the affinity? If not, I
would remove this extra logic and change the behavior from ignoring the
request to falling back to the default server.
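A rough sketch of what I mean (untested, derived from the hunk quoted
below with the strict_check logic dropped):

	static int get_irq_server(unsigned int virq)
	{
		cpumask_t cpumask = irq_desc[virq].affinity;
		cpumask_t tmp = CPU_MASK_NONE;

		if (!distribute_irqs)
			return default_server;

		if (!cpus_equal(cpumask, CPU_MASK_ALL)) {
			cpus_and(tmp, cpu_online_map, cpumask);
			if (!cpus_empty(tmp))
				return get_hard_smp_processor_id(first_cpu(tmp));
			/* No requested cpu is online: fall through to a
			 * default server instead of ignoring the request. */
		}

		if (cpus_equal(cpu_online_map, cpu_present_map))
			return default_distrib_server;

		return default_server;
	}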
milton
> #ifdef CONFIG_SMP
> -static int get_irq_server(unsigned int virq)
> +static int get_irq_server(unsigned int virq, unsigned int strict_check)
> {
> - unsigned int server;
> + int server;
> /* For the moment only implement delivery to all cpus or one cpu */
> cpumask_t cpumask = irq_desc[virq].affinity;
> cpumask_t tmp = CPU_MASK_NONE;
> @@ -166,22 +166,25 @@ static int get_irq_server(unsigned int v
> if (!distribute_irqs)
> return default_server;
>
> - if (cpus_equal(cpumask, CPU_MASK_ALL)) {
> - server = default_distrib_server;
> - } else {
> + if (!cpus_equal(cpumask, CPU_MASK_ALL)) {
> cpus_and(tmp, cpu_online_map, cpumask);
>
> - if (cpus_empty(tmp))
> - server = default_distrib_server;
> - else
> - server = get_hard_smp_processor_id(first_cpu(tmp));
> + server = first_cpu(tmp);
> +
> + if (server < NR_CPUS)
> + return get_hard_smp_processor_id(server);
> +
> + if (strict_check)
> + return -1;
> }
>
> - return server;
> + if (cpus_equal(cpu_online_map, cpu_present_map))
> + return default_distrib_server;
>
> + return default_server;
> }
>
...
> + /*
> + * For the moment only implement delivery to all cpus or one cpu.
> + * Get current irq_server for the given irq
> + */
> + irq_server = get_irq_server(irq, 1);
> + if (irq_server == -1) {
> + char cpulist[128];
> + cpulist_scnprintf(cpulist, sizeof(cpulist), cpumask);
> + printk(KERN_WARNING "xics_set_affinity: No online cpus in "
> + "the mask %s for irq %d\n", cpulist, virq);
> + return;
> }