[PATCH V2 1/3] riscv: Add Zicbop instruction definitions & cpufeature
Leonardo Bras
leobras at redhat.com
Mon Jan 8 08:14:23 PST 2024
On Mon, Jan 08, 2024 at 04:24:59PM +0100, Andrew Jones wrote:
> On Mon, Jan 08, 2024 at 11:34:15AM -0300, Leonardo Bras wrote:
> > On Fri, Jan 05, 2024 at 02:24:45PM +0100, Andrew Jones wrote:
> > > On Thu, Jan 04, 2024 at 02:43:04PM -0300, Leonardo Bras wrote:
> > > ...
> > > > > > > > I don't think we can detect a caller with a non-zero offset at compile time,
> > > > > > > > since it will be used in locks, which can (potentially) be anywhere within
> > > > > > > > the block size. (if you have any idea though, please let me know :) )
> > > > >
> > > > > I forgot to reply to this before. The reason I think it may be possible to
> > > > > validate offset at compile time is that it must be a constant, i.e.
> > > > > __builtin_constant_p(offset) must return true. So maybe something like
> > > > >
> > > > > static_assert(__builtin_constant_p(offset) && !(offset & 0x1f))
> > > > >
> > > > > I'll try to find time to play with it.
> > > > >
> > > >
> > > > Let me know if you find anything.
> > >
> > > There's nothing we can do in this file (insn-def.h), other than maybe
> > > masking, since all the magic must happen at preprocessor time, apart from
> > > a tiny bit of constant arithmetic allowed at assembly time. For C, using
> > > a wrapper like the one patch 2 of this series introduces, we could add the
> > > static assert above. I'll suggest that in patch 2, since I've already
> > > thought it through, but it sort of feels like overkill to me.
> >
> > It makes sense.
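
(For reference, an untested sketch of what such a C wrapper could look like,
assuming the CBO_PREFETCH_W() macro discussed in this series; the wrapper name
below is made up and the real patch 2 helper may differ.)

/*
 * Untested sketch only. The static_assert works here because 'offset'
 * must be an integer constant expression, which is also exactly what we
 * want to enforce: a constant, 32-byte aligned prefetch offset.
 */
#define cbo_prefetch_w(base, offset)					\
do {									\
	static_assert(__builtin_constant_p(offset) &&			\
		      !((offset) & 0x1f),				\
		      "prefetch offset must be a 32-byte aligned constant"); \
	asm volatile(CBO_PREFETCH_W(%0, offset)				\
		     : : "r" (base) : "memory");			\
} while (0)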
> >
> > >
> > > >
> > > > > > > >
> > > > > > > > On the other hand, we could create an S-type macro which deliberately
> > > > > > > > ignores imm[4:0], like
> > > > > > > >
> > > > > > > > + INSN_S_TRUNCATE(OPCODE_OP_IMM, FUNC3(6), __RS2(3), \
> > > > > > > > + SIMM12(offset), RS1(base))
> > > > > > > >
> > > > > > > > Which saves bits 11:5 of the offset into imm[11:5] and zero-fills
> > > > > > > > imm[4:0], like
> > > > > > > >
> > > > > > > > +#define DEFINE_INSN_S \
> > > > > > > > + __DEFINE_ASM_GPR_NUMS \
> > > > > > > > +" .macro insn_s, opcode, func3, rs2, simm12, rs1\n" \
> > > > > > > > +" .4byte ((\\opcode << " __stringify(INSN_S_OPCODE_SHIFT) ") |" \
> > > > > > > > +" (\\func3 << " __stringify(INSN_S_FUNC3_SHIFT) ") |" \
> > > > > > > > +" (.L__gpr_num_\\rs2 << " __stringify(INSN_S_RS2_SHIFT) ") |" \
> > > > > > > > +" (.L__gpr_num_\\rs1 << " __stringify(INSN_S_RS1_SHIFT) ") |" \
> > > > > > > > +" (((\\simm12 >> 5) & 0x7f) << " __stringify(INSN_S_SIMM7_SHIFT) "))\n" \
> > > > > > > > +" .endm\n"
> > > > > > > > +
> > > > > > > >
> > > > > > > > Does this make sense?
> > > > > > >
> > > > > > > If we create a special version of INSN_S, then I suggest we create one
> > > > > > > where its two SIMM fields are independent and then define prefetch
> > > > > > > instructions like this
> > > > > > >
> > > > > > > #define PREFETCH_W(base, offset) \
> > > > > > > INSN_S_SPLIT_IMM(OPCODE_OP_IMM, FUNC3(6), __RS2(3), \
> > > > > > > SIMM_11_5(offset >> 5), SIMM_4_0(0), RS1(base))
> > > > > > >
> > > > > > > which would allow simple review against the spec and potentially
> > > > > > > support other instructions which use hard coded values in the
> > > > > > > immediate fields.
> > > > > > >
> > > > > >
> > > > > > I agree, it looks better this way.
> > > > > >
> > > > > > We could have:
> > > > > > INSN_S_SPLIT_IMM(OPCODE, FUNC3, RS1, RS2, SIMM_11_5, SIMM_4_0)
> > > > > >
> > > > > > and implement INSN_S like:
> > > > > > #define INSN_S(OPCODE, FUNC3, RS1, RS2, SIMM_11_0) \
> > > > > > INSN_S_SPLIT_IMM(OPCODE, FUNC3, RS1, RS2, \
> > > > > > SIMM_11_0 >> 5, SIMM_11_0 & 0x1f)
> > > > >
> > > > > That won't work since SIMM_11_0 will be a string. Actually, with
> > > > > stringification in mind, I don't think defining INSN_S_SPLIT_IMM()
> > > > > is a good idea.
> > > >
> > > > I don't see how SIMM_11_0 will be a string here. Is this due to using it
> > > > in asm code?
> > > >
> > > > I understand a user will call
> > > > ---
> > > > PREFETCH_W(base, offset), which becomes:
> > > >
> > > > INSN_S(OPCODE_OP_IMM, 6, base, 3, offset) , which becomes:
> > > >
> > > > INSN_S_SPLIT_IMM(OPCODE_OP_IMM, FUNC3(6), RS1(base), RS2(3), \
> > > > SIMM_11_5(offset >> 5), SIMM_4_0(offset & 0x1f))
> > >
> > > The other annotations, like SIMM12, stringify their arguments. So, if
> > > SIMM_11_5 and SIMM_4_0 also stringified, then it wouldn't be possible
> > > to recombine them into a simm12 for the '.insn s' directive. I suppose
> > > SIMM_11_5 and SIMM_4_0 could just expand their arguments without
> > > stringifying. With that, along with throwing even more ugly at it,
> > > it is possible to get the end result we want, which is
> > >
> > > - PREFETCH_* instructions are defined with annotations and have a
> > > SIMM_4_0(0) in their definitions to explicitly point out that field
> > >
> > > - the INSN_S definition still directly maps to the .insn s directive
> > >
> > >
> > > I got that to work with this
> > >
> > > #define __RV_SIMM(v) v
> > > #define RV___SIMM_11_5(v) __RV_SIMM(v)
> > > #define RV___SIMM_4_0(v) __RV_SIMM(v)
> > >
> > > #define __INSN_S_SPLIT_IMM(opcode, func3, rs2, simm12, rs1) \
> > > INSN_S(opcode, func3, rs2, SIMM12(simm12), rs1)
> > >
> > > #define INSN_S_SPLIT_IMM(opcode, func3, rs2, simm_11_5, simm_4_0, rs1) \
> > > __INSN_S_SPLIT_IMM(opcode, func3, rs2, (RV_##simm_11_5 << 5) | RV_##simm_4_0, rs1)
> > >
> > > #define CBO_PREFETCH_W(base, offset) \
> > > INSN_S_SPLIT_IMM(OPCODE_OP_IMM, FUNC3(6), __RS2(3), \
> > > __SIMM_11_5((offset) >> 5), __SIMM_4_0(0), RS1(base))
> > >
> > >
> > > But, again, I feel like it's probably overkill...
> >
> > I thought our intention was to avoid the extra IMM masking in asm, while
> > keeping the 5 lower bits zeroed at all times.
> >
> > But IIUC here you are writing an insn_s_split_imm in terms of a plain
> > insn_s, which guarantees the zeroed 5 lower bits but still does
> > unnecessary masking in asm. In fact, if we use the split version, it
> > concatenates the two IMM parts, only to split them again in order to
> > build the instruction.
>
> That's backwards.
>
> INSN_S should map to its namesake directive '.insn s', which takes the
> immediate as a single simm12. simm7 and simm5 are only used in the
> fallback path.
Wait, but isn't the final instruction built with a split simm7 and simm5?
Oh, ok, we are probably emitting the asm mnemonic/directive first and leaving
it to the assembler to split the simm7 and simm5 out of simm12. It does make
sense.
Sorry for the confusion, I was wrongly thinking in terms of the final
instruction encoding.
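
(For reference, the '.insn'-capable variant of that macro could look something
like the untested sketch below, mirroring the .4byte fallback quoted earlier;
the directive takes the whole simm12 and the assembler encodes the split
fields itself.)

/*
 * Untested sketch, not from the series: the '.insn s' form of the macro.
 * The directive takes the immediate as a single simm12 and the assembler
 * splits it into imm[11:5] and imm[4:0] when encoding the S-type word,
 * so the gpr-num helper needed by the .4byte fallback is not used here.
 */
#define DEFINE_INSN_S						\
"	.macro insn_s, opcode, func3, rs2, simm12, rs1\n"	\
"	.insn	s \\opcode, \\func3, \\rs2, \\simm12(\\rs1)\n"	\
"	.endm\n"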
> Also, it's likely few instructions care about the split.
> Other S-type instructions would want to treat the immediate as 12 bits,
> so INSN_S should not be written in terms of INSN_S_SPLIT_IMM, since we'd
> split a 12-bit immediate for those instructions just to have it merged
> again for the .insn s directive.
Yeah, makes sense now.
>
> >
> > In my suggestion above, we make INSN_S_SPLIT_IMM() the helper / standard
> > way to write an S-type and write an INSN_S() in terms of the SPLIT version.
> >
> > This allows using the split version when we don't need one of the halves,
> > thus avoiding a masking or a rotation. The full version just splits the
> > halves and passes them to the split version, which directly builds the
> > instruction.
> >
> > Well, I think I was under the wrong impression that we wanted to avoid the
> > rotation and masking, but I just noticed that we are only dealing with
> > splitting the offset, which is supposed to be a constant when the
> > instruction is generated, so we can guarantee the value is masked at no
> > runtime cost.
>
> Right, there's no reason to avoid the rotation and masking with respect to
> performance, as it's all done at compile time. Avoiding the operations as
> much as possible is nice, though, since they're ugly and, with macro
> arguments, getting the parentheses right to ensure the correct order of
> operations is error prone.
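
(As a hypothetical illustration of that last point, not code from the series:
'>>' binds tighter than '|', so a macro that uses its argument unparenthesized
ends up splitting only part of the expression.)

#define SPLIT_HI_BUGGY(simm12)	(simm12 >> 5)	/* SPLIT_HI_BUGGY(hi | lo) -> (hi | (lo >> 5)) */
#define SPLIT_HI_OK(simm12)	((simm12) >> 5)	/* SPLIT_HI_OK(hi | lo)    -> ((hi | lo) >> 5) */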
>
> >
> > So in the end we are just thinking about how it could look better to the
> > user, and maybe the split version is unnecessary if the user guarantees
> > the masking is correct. But if we are going to have it, I suggest we define
> > INSN_S in terms of INSN_S_SPLIT_IMM() instead of the other way around.
>
> Yes, the goal was to have __SIMM_4_0(0) for the prefetch instructions to
> make it simple to review against the spec, but ((offset) & ~0x1f) is also
> simple to review.
Agreed
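
(For comparison, the masking option would be an untested one-liner along these
lines, reusing the annotation names from above; not necessarily what the next
revision will use.)

/*
 * Sketch of the "just mask it" option: keep the plain INSN_S annotation
 * and clear imm[4:0] in the macro argument, so any constant offset the
 * caller passes is rounded down to a 32-byte boundary at compile time.
 */
#define CBO_PREFETCH_W(base, offset)					\
	INSN_S(OPCODE_OP_IMM, FUNC3(6), __RS2(3),			\
	       SIMM12((offset) & ~0x1f), RS1(base))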
>
> For the next revision of this series, I'd be happy with just the masking,
> or even with just leaving it up to the callers as this version does.
>
> Thanks,
> drew
>
Thank you!
Leo