[PATCH V2] arch: patch_text: Fixup last cpu should be master
Masami Hiramatsu
mhiramat at kernel.org
Tue Mar 22 18:16:24 PDT 2022
On Sun, 13 Mar 2022 09:22:21 +0800
guoren at kernel.org wrote:
> From: Guo Ren <guoren at linux.alibaba.com>
>
> These patch_text implementations use the stop_machine_cpuslocked
> infrastructure with an atomic cpu_count. The original idea: while the
> master CPU is running patch_text, all the other CPUs should be waiting
> for it. But the current implementation uses the first CPU as the
> master, which cannot guarantee that the remaining CPUs have already
> entered the wait loop. This patch makes the last CPU the master
> instead, closing that window.
>
This looks good to me. And we should probably backport this to the
stable branches, since the current code can cause self-modification at
the wrong timing.
Reviewed-by: Masami Hiramatsu <mhiramat at kernel.org>
Thank you,
> Signed-off-by: Guo Ren <guoren at linux.alibaba.com>
> Signed-off-by: Guo Ren <guoren at kernel.org>
> Reviewed-by: Max Filippov <jcmvbkbc at gmail.com>
> Cc: Will Deacon <will at kernel.org>
> Cc: Catalin Marinas <catalin.marinas at arm.com>
> Cc: Palmer Dabbelt <palmer at dabbelt.com>
> Cc: Peter Zijlstra <peterz at infradead.org>
> Cc: Masami Hiramatsu <mhiramat at kernel.org>
> Cc: Chris Zankel <chris at zankel.net>
> Cc: Arnd Bergmann <arnd at arndb.de>
>
> ---
> Changes in V2:
> - Fixup: the last-cpu check should use num_online_cpus() (suggested by Max Filippov)
> - Fixup typos found by Max Filippov
> ---
> arch/arm64/kernel/patching.c | 4 ++--
> arch/csky/kernel/probes/kprobes.c | 2 +-
> arch/riscv/kernel/patch.c | 2 +-
> arch/xtensa/kernel/jump_label.c | 2 +-
> 4 files changed, 5 insertions(+), 5 deletions(-)
>
> diff --git a/arch/arm64/kernel/patching.c b/arch/arm64/kernel/patching.c
> index 771f543464e0..33e0fabc0b79 100644
> --- a/arch/arm64/kernel/patching.c
> +++ b/arch/arm64/kernel/patching.c
> @@ -117,8 +117,8 @@ static int __kprobes aarch64_insn_patch_text_cb(void *arg)
> int i, ret = 0;
> struct aarch64_insn_patch *pp = arg;
>
> - /* The first CPU becomes master */
> - if (atomic_inc_return(&pp->cpu_count) == 1) {
> + /* The last CPU becomes master */
> + if (atomic_inc_return(&pp->cpu_count) == num_online_cpus()) {
> for (i = 0; ret == 0 && i < pp->insn_cnt; i++)
> ret = aarch64_insn_patch_text_nosync(pp->text_addrs[i],
> pp->new_insns[i]);
> diff --git a/arch/csky/kernel/probes/kprobes.c b/arch/csky/kernel/probes/kprobes.c
> index 42920f25e73c..34ba684d5962 100644
> --- a/arch/csky/kernel/probes/kprobes.c
> +++ b/arch/csky/kernel/probes/kprobes.c
> @@ -30,7 +30,7 @@ static int __kprobes patch_text_cb(void *priv)
> struct csky_insn_patch *param = priv;
> unsigned int addr = (unsigned int)param->addr;
>
> - if (atomic_inc_return(&param->cpu_count) == 1) {
> + if (atomic_inc_return(&param->cpu_count) == num_online_cpus()) {
> *(u16 *) addr = cpu_to_le16(param->opcode);
> dcache_wb_range(addr, addr + 2);
> atomic_inc(&param->cpu_count);
> diff --git a/arch/riscv/kernel/patch.c b/arch/riscv/kernel/patch.c
> index 0b552873a577..765004b60513 100644
> --- a/arch/riscv/kernel/patch.c
> +++ b/arch/riscv/kernel/patch.c
> @@ -104,7 +104,7 @@ static int patch_text_cb(void *data)
> struct patch_insn *patch = data;
> int ret = 0;
>
> - if (atomic_inc_return(&patch->cpu_count) == 1) {
> + if (atomic_inc_return(&patch->cpu_count) == num_online_cpus()) {
> ret =
> patch_text_nosync(patch->addr, &patch->insn,
> GET_INSN_LENGTH(patch->insn));
> diff --git a/arch/xtensa/kernel/jump_label.c b/arch/xtensa/kernel/jump_label.c
> index 61cf6497a646..b67efcd7e32c 100644
> --- a/arch/xtensa/kernel/jump_label.c
> +++ b/arch/xtensa/kernel/jump_label.c
> @@ -40,7 +40,7 @@ static int patch_text_stop_machine(void *data)
> {
> struct patch *patch = data;
>
> - if (atomic_inc_return(&patch->cpu_count) == 1) {
> + if (atomic_inc_return(&patch->cpu_count) == num_online_cpus()) {
> local_patch_text(patch->addr, patch->data, patch->sz);
> atomic_inc(&patch->cpu_count);
> } else {
> --
> 2.25.1
>
--
Masami Hiramatsu <mhiramat at kernel.org>