[PATCH 05/27] arc: TCG instruction generator and hand-definitions

Richard Henderson richard.henderson at linaro.org
Wed Apr 7 04:52:26 BST 2021


On 4/5/21 7:31 AM, cupertinomiranda at gmail.com wrote:
> From: Cupertino Miranda <cmiranda at synopsys.com>
> 
> Add the most generic parts of TCG constructions. It contains the
> basic infrastructure for fundamental ARC features, such as
> ZOL (zero overhead loops) and delay-slots.
> Also includes hand crafted TCG for more intricate instructions, such
> as vector instructions.
> 
> Signed-off-by: Shahab Vahedi <shahab at synopsys.com>

The procedure is that you also sign off on the patches you post.

> +#define REG(x)  (cpu_r[x])

Does this really provide any clarity to the code?

> +static inline bool use_goto_tb(const DisasContext *dc, target_ulong dest)

Drop all of the unnecessary inline markers.
The compiler will decide for itself just fine.

> +void gen_goto_tb(const DisasContext *ctx, int n, TCGv dest)
> +{
> +    tcg_gen_mov_tl(cpu_pc, dest);
> +    tcg_gen_andi_tl(cpu_pcl, dest, ~((target_ulong) 3));
> +    if (ctx->base.singlestep_enabled) {
> +        gen_helper_debug(cpu_env);
> +    } else {
> +        tcg_gen_exit_tb(NULL, 0);
> +    }
> +}

This doesn't use goto_tb, so perhaps a better name?

It also doesn't use tcg_gen_lookup_and_goto_ptr(), so it could be improved.
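
Something along these lines, perhaps (untested sketch; the name is only a
suggestion):

static void gen_jump_dest(const DisasContext *ctx, TCGv dest)
{
    tcg_gen_mov_tl(cpu_pc, dest);
    tcg_gen_andi_tl(cpu_pcl, dest, ~((target_ulong) 3));
    if (ctx->base.singlestep_enabled) {
        gen_helper_debug(cpu_env);
    } else {
        /* Chain to the destination tb via the tb lookup cache.  */
        tcg_gen_lookup_and_goto_ptr();
    }
}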

> +static void gen_gotoi_tb(const DisasContext *ctx, int n, target_ulong dest)
> +{
> +    if (use_goto_tb(ctx, dest)) {
> +        tcg_gen_goto_tb(n);
> +        tcg_gen_movi_tl(cpu_pc, dest);
> +        tcg_gen_movi_tl(cpu_pcl, dest & (~((target_ulong) 3)));
> +        tcg_gen_exit_tb(ctx->base.tb, n);
> +    } else {
> +        tcg_gen_movi_tl(cpu_pc, dest);
> +        tcg_gen_movi_tl(cpu_pcl, dest & (~((target_ulong) 3)));
> +        if (ctx->base.singlestep_enabled) {
> +            gen_helper_debug(cpu_env);
> +        }
> +        tcg_gen_exit_tb(NULL, 0);

Forgot the else here, as above.  Also not using lookup_and_goto_ptr().
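
I.e. for this slow path, something like (untested):

        tcg_gen_movi_tl(cpu_pc, dest);
        tcg_gen_movi_tl(cpu_pcl, dest & ~((target_ulong) 3));
        if (ctx->base.singlestep_enabled) {
            gen_helper_debug(cpu_env);
        } else {
            tcg_gen_lookup_and_goto_ptr();
        }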

> +    for (i = 0; i < 64; i++) {
> +        char name[16];
> +
> +        sprintf(name, "r[%d]", i);
> +        cpu_r[i] = tcg_global_mem_new(cpu_env,
> +                                      ARC_REG_OFFS(r[i]),
> +                                      strdup(name));

Use g_strdup_printf() and drop the local name[] variable.
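
I.e. something like:

        cpu_r[i] = tcg_global_mem_new(cpu_env, ARC_REG_OFFS(r[i]),
                                      g_strdup_printf("r[%d]", i));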

> +}
> +static void arc_tr_tb_start(DisasContextBase *dcbase, CPUState *cpu)

Mind the whitespace.

> +{
> +    /* place holder for now */
> +}
> +
> +static void arc_tr_insn_start(DisasContextBase *dcbase, CPUState *cpu)
> +{
> +    DisasContext *dc = container_of(dcbase, DisasContext, base);
> +
> +
> +    tcg_gen_insn_start(dc->base.pc_next);

Similarly here; mind the whitespace.

> +static int arc_gen_INVALID(const DisasContext *ctx)
> +{
> +    qemu_log_mask(LOG_UNIMP,
> +                  "invalid inst @:%08x\n", ctx->cpc);
> +    return DISAS_NEXT;
> +}

You need to generate an exception here.
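
Something like the following (untested), reusing your raise_exception
helper; EXCP_INST_ERROR is a placeholder for whatever the architecture
calls the illegal-instruction vector:

static int arc_gen_INVALID(const DisasContext *ctx)
{
    TCGv index = tcg_const_tl(EXCP_INST_ERROR);   /* placeholder vector */
    TCGv cause = tcg_const_tl(0);
    TCGv param = tcg_const_tl(0);

    qemu_log_mask(LOG_UNIMP, "invalid inst @:%08x\n", ctx->cpc);

    tcg_gen_movi_tl(cpu_eret, ctx->cpc);
    tcg_gen_mov_tl(cpu_erbta, cpu_bta);
    gen_helper_raise_exception(cpu_env, index, cause, param);

    tcg_temp_free(index);
    tcg_temp_free(cause);
    tcg_temp_free(param);

    /* The exception ends the tb; nothing may follow.  */
    return DISAS_NORETURN;
}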

> +    buffer[0] = cpu_lduw_code(ctx->env, ctx->cpc);

Use translator_lduw() here (sketch below).

> +    length = arc_insn_length(buffer[0], cpu->family);
> +
> +    switch (length) {
> +    case 2:
> +        /* 16-bit instructions. */
> +        insn = (uint64_t) buffer[0];
> +        break;
> +    case 4:
> +        /* 32-bit instructions. */
> +        buffer[1] = cpu_lduw_code(ctx->env, ctx->cpc + 2);
> +        uint32_t buf = (buffer[0] << 16) | buffer[1];

Why use the array buffer[] and not a couple of scalars?
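
With a couple of scalars and translator_lduw this becomes something like
(untested; w0/w1 are just placeholder names):

    uint16_t w0 = translator_lduw(ctx->env, ctx->cpc);

    length = arc_insn_length(w0, cpu->family);
    switch (length) {
    case 2:
        /* 16-bit instruction. */
        insn = w0;
        break;
    case 4:
        /* 32-bit instruction: fetch the second halfword. */
        {
            uint16_t w1 = translator_lduw(ctx->env, ctx->cpc + 2);
            insn = ((uint32_t) w0 << 16) | w1;
        }
        break;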

> +    if (ctx->insn.limm_p) {
> +        ctx->insn.limm = ARRANGE_ENDIAN(true,
> +                                        cpu_ldl_code(ctx->env,

Use translator_ldl() here.
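
I.e. something like

    ctx->insn.limm = ARRANGE_ENDIAN(true,
                                    translator_ldl(ctx->env, limm_addr));

with limm_addr standing in for whatever address expression you currently
pass to cpu_ldl_code.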

> +/*
> + * Throw "misaligned" exception if 'addr' is not 32-bit aligned.
> + * This check is done irrelevant of status32.AD bit.
> + */
> +static void check_addr_is_word_aligned(const DisasCtxt *ctx,
> +                                       TCGv addr)
> +{
> +    TCGLabel *l1 = gen_new_label();
> +    TCGv tmp = tcg_temp_local_new();
> +
> +    tcg_gen_andi_tl(tmp, addr, 0x3);
> +    tcg_gen_brcondi_tl(TCG_COND_EQ, tmp, 0, l1);
> +
> +    tcg_gen_mov_tl(cpu_efa, addr);
> +    tcg_gen_movi_tl(cpu_eret, ctx->cpc);
> +    tcg_gen_mov_tl(cpu_erbta, cpu_bta);
> +
> +    TCGv tcg_index = tcg_const_tl(EXCP_MISALIGNED);
> +    TCGv tcg_cause = tcg_const_tl(0x0);
> +    TCGv tcg_param = tcg_const_tl(0x0);
> +
> +    gen_helper_raise_exception(cpu_env, tcg_index, tcg_cause, tcg_param);
> +
> +    gen_set_label(l1);

So... you shouldn't have to do anything like this.

As far as I can see, there's nothing special about enter/leave.  All memory ops 
are supposed to be aligned (with double-word ops treated as two word ops for 
alignment).

So you should be implementing TCGCPUOps.do_unaligned_access, and set 
TARGET_ALIGNED_ONLY in default-configs/.
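
Roughly this (untested; the CPUARCState field names are guessed from the
globals you create in this patch, and ARCCPU/ARC_CPU from the usual
naming conventions):

static void arc_cpu_do_unaligned_access(CPUState *cs, vaddr addr,
                                        MMUAccessType access_type,
                                        int mmu_idx, uintptr_t retaddr)
{
    ARCCPU *cpu = ARC_CPU(cs);
    CPUARCState *env = &cpu->env;

    /* Recover pc/bta for the faulting instruction.  */
    cpu_restore_state(cs, retaddr, true);

    env->efa = addr;
    env->eret = env->pc;
    env->erbta = env->bta;

    cs->exception_index = EXCP_MISALIGNED;
    cpu_loop_exit(cs);
}

wired up via the do_unaligned_access member of your TCGCPUOps.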

Quite a large patch.  I'll get back to it later...


r~


