[bootwrapper PATCH v2 05/13] aarch64: add mov_64 macro
Andre Przywara
andre.przywara at arm.com
Fri Jan 14 07:50:32 PST 2022
On Fri, 14 Jan 2022 10:56:45 +0000
Mark Rutland <mark.rutland at arm.com> wrote:
Hi Mark,
> In subsequent patches we'll need to load 64-bit values into GPRs before
> the CPU is in a known endianness, where we cannot use literal pools.
>
> In preparation for that, this patch adds a new `mov_64` macro to load a
> 64-bit value into a GPR using a sequence of MOV and MOVKs, which will
> function the same regardless of the CPU's endianness.
>
> At the same time, move the `cpuid` macro to use `mov_64` internally.
>
> Signed-off-by: Mark Rutland <mark.rutland at arm.com>
Not sure it's worth it, but there is a simpler version of the TF-A macro,
which wraps each movk in an .if, along the lines of:
	.if ((\val) >> 16) & 0xffff
	movk	\dest, #(((\val) >> 16) & 0xffff), lsl #16
	.endif
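
Spelled out for every halfword (untested, just to illustrate the idea),
the whole macro would then be something like:

	/* Sketch: load a 64-bit immediate, skipping all-zero upper halfwords */
	.macro	mov_64 dest, val
	mov	\dest, #((\val) & 0xffff)
	.if ((\val) >> 16) & 0xffff
	movk	\dest, #(((\val) >> 16) & 0xffff), lsl #16
	.endif
	.if ((\val) >> 32) & 0xffff
	movk	\dest, #(((\val) >> 32) & 0xffff), lsl #32
	.endif
	.if ((\val) >> 48) & 0xffff
	movk	\dest, #(((\val) >> 48) & 0xffff), lsl #48
	.endif
	.endm
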
It's a bit less optimal than the TF-A version in a corner case, but avoids
the few pointless "movk x0, #0x0, lsl #..." instructions I found in the
generated code.
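
For example, assuming MPIDR_ID_BITS is the affinity mask 0xff00ffffff, the
unconditional version in cpuid would expand to:

	mov	x0, #0xffff
	movk	x0, #0xff, lsl #16
	movk	x0, #0xff, lsl #32
	movk	x0, #0x0, lsl #48	/* redundant, upper bits already zero */

and the .if variant would simply drop that last movk.
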
But that's an optimisation detail, so anyway:
Reviewed-by: Andre Przywara <andre.przywara at arm.com>
Cheers,
Andre
> ---
> arch/aarch64/common.S | 10 +++++++++-
> 1 file changed, 9 insertions(+), 1 deletion(-)
>
> diff --git a/arch/aarch64/common.S b/arch/aarch64/common.S
> index c7171a9..3279fa9 100644
> --- a/arch/aarch64/common.S
> +++ b/arch/aarch64/common.S
> @@ -9,9 +9,17 @@
>
> #include <cpu.h>
>
> +	/* Load a 64-bit value using immediates */
> +	.macro	mov_64 dest, val
> +	mov	\dest, #(((\val) >> 0) & 0xffff)
> +	movk	\dest, #(((\val) >> 16) & 0xffff), lsl #16
> +	movk	\dest, #(((\val) >> 32) & 0xffff), lsl #32
> +	movk	\dest, #(((\val) >> 48) & 0xffff), lsl #48
> +	.endm
> +
>  	/* Put MPIDR into \dest, clobber \tmp and flags */
>  	.macro	cpuid dest, tmp
>  	mrs	\dest, mpidr_el1
> -	ldr	\tmp, =MPIDR_ID_BITS
> +	mov_64	\tmp, MPIDR_ID_BITS
>  	ands	\dest, \dest, \tmp
>  	.endm