[patch 00/37] cpu/hotplug, x86: Reworked parallel CPU bringup
Thomas Gleixner
tglx at linutronix.de
Thu May 4 11:46:15 PDT 2023
Michael!
On Thu, Apr 27 2023 at 14:48, Michael Kelley wrote:
> From: Thomas Gleixner <tglx at linutronix.de> Sent: Friday, April 14, 2023 4:44 PM
>
> I smoke-tested several Linux guest configurations running on Hyper-V,
> using the "kernel/git/tglx/devel.git hotplug" tree as updated on April 26th.
> No functional issues, but encountered one cosmetic issue (details below).
>
> Configurations tested:
> * 16 vCPUs and 32 vCPUs
> * 1 NUMA node and 2 NUMA nodes
> * Parallel bring-up enabled and disabled via kernel boot line
> * "Normal" VMs and SEV-SNP VMs running with a paravisor on Hyper-V.
> This config can use parallel bring-up because most of the SNP-ness is
> hidden in the paravisor. I was glad to see this work properly.
>
> There's not much difference in performance with and without parallel
> bring-up on the 32 vCPU VM. Without parallel, the time is about 26
> milliseconds. With parallel, it's about 24 ms. So bring-up is already
> fast in the virtual environment.
Depends on the environment :)
> The cosmetic issue is in the dmesg log, and arises because Hyper-V
> enumerates SMT CPUs differently from many other environments. In
> a Hyper-V guest, the SMT threads in a core are numbered as <even, odd>
> pairs. Guest CPUs #0 & #1 are SMT threads in the same core, as are #2 & #3, etc. With
> parallel bring-up, here's the dmesg output:
>
> [ 0.444345] smp: Bringing up secondary CPUs ...
> [ 0.445139] .... node #0, CPUs: #2 #4 #6 #8 #10 #12 #14 #16 #18 #20 #22 #24 #26 #28 #30
> [ 0.454112] x86: Booting SMP configuration:
> [ 0.456035] #1 #3 #5 #7 #9 #11 #13 #15 #17 #19 #21 #23 #25 #27 #29 #31
> [ 0.466120] smp: Brought up 1 node, 32 CPUs
> [ 0.467036] smpboot: Max logical packages: 1
> [ 0.468035] smpboot: Total of 32 processors activated (153240.06 BogoMIPS)
>
> The function announce_cpu() is specifically testing for CPU #1 to output the
> "Booting SMP configuration" message. In a Hyper-V guest, CPU #1 is the second
> SMT thread in a core, so it isn't started until all the even-numbered CPUs are
> started.
Ah. Didn't notice that because SMT siblings are usually enumerated after
all primary ones in ACPI.
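For illustration, the check in question boils down to something like
this (a simplified sketch of announce_cpu() in arch/x86/kernel/smpboot.c;
the node and field-width bookkeeping is left out):

	static void announce_cpu(int cpu, int apicid)
	{
		/*
		 * CPU #1 is assumed to be the first secondary CPU that
		 * comes up, so it triggers the header line.
		 */
		if (cpu == 1)
			pr_info("x86: Booting SMP configuration:\n");

		/* ... prints "#N" for the CPU being brought up ... */
	}

With the even/odd sibling numbering the primary threads #2, #4, ... are
brought up before CPU #1, so the header lands in the middle of the output.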
> I don't know if this cosmetic issue is worth fixing, but I thought I'd point it out.
That's trivial enough to fix. I'll amend the topmost patch before
posting V2.
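One way to do that (an untested sketch, just as illustration; the actual
V2 change might differ) is to emit the header for whichever secondary
CPU is announced first instead of hardcoding CPU #1:

	static void announce_cpu(int cpu, int apicid)
	{
		static bool header_printed;

		/* First secondary CPU to be announced prints the header */
		if (!header_printed) {
			pr_info("x86: Booting SMP configuration:\n");
			header_printed = true;
		}

		/* ... prints "#N" for the CPU being brought up ... */
	}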
Thanks for giving it a ride!
tglx