[PATCH bpf-next 2/2] arm64, bpf: support more atomic operations

John Fastabend john.fastabend at gmail.com
Thu Jan 27 18:43:53 PST 2022


Hou Tao wrote:
> Hi,
> 
> On 1/27/2022 2:06 PM, John Fastabend wrote:
> > Hou Tao wrote:
> >> The atomics for eBPF patch series adds support for atomic[64]_fetch_add,
> >> atomic[64]_[fetch_]{and,or,xor} and atomic[64]_{xchg|cmpxchg}, but
> >> it only adds support for x86-64, so add support for these atomic
> >> operations for arm64 as well.
> >>
> >> +static int emit_lse_atomic(const struct bpf_insn *insn, struct jit_ctx *ctx)
> >> +{
> >> +	const u8 code = insn->code;
> >> +	const u8 dst = bpf2a64[insn->dst_reg];
> >> +	const u8 src = bpf2a64[insn->src_reg];
> >> +	const u8 tmp = bpf2a64[TMP_REG_1];
> >> +	const u8 tmp2 = bpf2a64[TMP_REG_2];
> >> +	const bool isdw = BPF_SIZE(code) == BPF_DW;
> >> +	const s16 off = insn->off;
> >> +	u8 reg;
> >> +
> >> +	if (!off) {
> >> +		reg = dst;
> >> +	} else {
> >> +		emit_a64_mov_i(1, tmp, off, ctx);
> >> +		emit(A64_ADD(1, tmp, tmp, dst), ctx);
> >> +		reg = tmp;
> >> +	}
> >> +
> >> +	switch (insn->imm) {
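
For context on the immediates this switch dispatches on: below is a minimal,
hypothetical BPF C sketch (not part of the patch; names are mine) of the
operations the commit message lists, assuming a recent clang with -mcpu=v3.
The comments note which BPF_ATOMIC immediate each builtin roughly lowers to;
the exact encoding depends on the compiler version and on whether the result
is used.

#include <linux/types.h>

/* Hypothetical illustration only, not from the patch. */
static void atomic_ops(__u64 *addr, __u64 val)
{
	__u64 old;

	__sync_fetch_and_add(addr, val);	/* BPF_ADD (| BPF_FETCH if result used) */
	__sync_fetch_and_and(addr, val);	/* BPF_AND (| BPF_FETCH if result used) */
	__sync_fetch_and_or(addr, val);		/* BPF_OR  (| BPF_FETCH if result used) */
	__sync_fetch_and_xor(addr, val);	/* BPF_XOR (| BPF_FETCH if result used) */
	old = __sync_lock_test_and_set(addr, val);		/* BPF_XCHG */
	old = __sync_val_compare_and_swap(addr, 0, val);	/* BPF_CMPXCHG */
	(void)old;
}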
> > Diff'ing against the x86 implementation, which has a BPF_SUB case, how is
> > it avoided here?
> I think it is just left over from patchset [1], because according to the LLVM
> commit [2], __sync_fetch_and_sub(&addr, value) is implemented as
> __sync_fetch_and_add(&addr, -value), so a BPF_SUB atomic opcode never
> reaches the JIT.
> I will post a patch to remove it.
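
To make that lowering concrete, here is a hypothetical user-space-style sketch
(names are mine, not taken from the LLVM commit): an atomic subtract can be
expressed as an atomic add of the negated value, which is why the compiler
never needs a BPF_SUB atomic opcode.

#include <stdint.h>

/* Illustration only: fetch-and-sub expressed as fetch-and-add of the
 * negated value; unsigned wraparound makes the two equivalent.
 */
static inline uint64_t fetch_sub(uint64_t *addr, uint64_t val)
{
	return __sync_fetch_and_add(addr, -val);
}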

OK, in that case LGTM, with the caveat that I'm not an ARM expert.

Acked-by: John Fastabend <john.fastabend at gmail.com>

[...]

> >> +	default:
> >> +		pr_err_once("unknown atomic op code %02x\n", insn->imm);
> >> +		return -EINVAL;
> > I was about to suggest maybe -EFAULT to align with x86, but on second
> > thought it seems the arm64 JIT uses -EINVAL more universally, so it's best
> > to be self-consistent. Just an observation.
> OK. So I will still return -EINVAL for an invalid atomic operation.

Sounds good to me.

> >
> >> +	}
> >> +
> >> +	return 0;
> >> +}
> >> +
> > .