[PATCH 3/3] irqchip/gic-v3-its: Limit memreserve cpuhp state lifetime
Valentin Schneider
valentin.schneider at arm.com
Sun Oct 24 08:52:10 PDT 2021
On 23/10/21 11:37, Marc Zyngier wrote:
> On Fri, 22 Oct 2021 11:33:07 +0100,
> Valentin Schneider <valentin.schneider at arm.com> wrote:
>> @@ -5234,6 +5243,11 @@ static int its_cpu_memreserve_lpi(unsigned int cpu)
>>  	paddr = page_to_phys(pend_page);
>>  	WARN_ON(gic_reserve_range(paddr, LPI_PENDBASE_SZ));
>>
>> +out:
>> +	/* This only needs to run once per CPU */
>> +	if (cpumask_equal(&cpus_booted_once_mask, cpu_possible_mask))
>> +		schedule_work(&rdist_memreserve_cpuhp_cleanup_work);
>
> Which makes me wonder. Do we actually need any flag at all if all we
> need to check is whether the CPU has been through the callback at
> least once? I have the strong feeling that we are tracking the same
> state multiple times here.
>
Agreed, cf. my reply on 2/3.
> Also, could the cpuhp callbacks ever run concurrently? If they could,
> two CPUs could schedule the cleanup work in parallel, with interesting
> results. You'd need a cmpxchg on the cpuhp state in the workfn.
>
So I think the cpuhp callbacks may run concurrently, but at a quick glance
it seems we can't get two instances of the same work executing concurrently:
schedule_work()->queue_work() doesn't re-queue a work that is already
pending, and __queue_work() checks the work's previous pool and, if the work
might still be running there, re-queues it on that same pool, which keeps
the work non-reentrant.
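
For reference, the first of those checks is the PENDING bit test in
queue_work_on(); roughly (paraphrased from kernel/workqueue.c, so the
details may differ between versions):

	bool queue_work_on(int cpu, struct workqueue_struct *wq,
			   struct work_struct *work)
	{
		bool ret = false;
		unsigned long flags;

		local_irq_save(flags);

		/* A work already marked pending isn't queued a second time */
		if (!test_and_set_bit(WORK_STRUCT_PENDING_BIT,
				      work_data_bits(work))) {
			__queue_work(cpu, wq, work);
			ret = true;
		}

		local_irq_restore(flags);
		return ret;
	}
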
Regardless, that's one less thing to worry about if we make the cpuhp
callback body run at most once on each CPU (only a single CPU will be able
to queue the removal work).
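
FWIW, if we did want the extra guard you suggested, a minimal sketch (with
made-up names: assume the dynamic state returned by
cpuhp_setup_state_nocalls() is stashed in rdist_memreserve_cpuhp_state, and
that this is the workfn backing rdist_memreserve_cpuhp_cleanup_work) could
be an xchg() on that state in the workfn:

	/* Holds the CPUHP_AP_ONLINE_DYN state, CPUHP_INVALID once removed */
	static int rdist_memreserve_cpuhp_state = CPUHP_INVALID;

	static void rdist_memreserve_cpuhp_cleanup_workfn(struct work_struct *work)
	{
		/* Only the first invocation gets to tear the state down */
		int state = xchg(&rdist_memreserve_cpuhp_state, CPUHP_INVALID);

		if (state != CPUHP_INVALID)
			cpuhp_remove_state_nocalls(state);
	}

Whether that's worth it on top of the PENDING bit behaviour above is
debatable, but it would make the teardown obviously single-shot.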