[PATCH resend 2/2] arm64: assembler: add macros to conditionally yield the NEON under PREEMPT

Dave Martin Dave.Martin at arm.com
Wed Mar 28 10:18:13 PDT 2018


On Wed, Mar 28, 2018 at 02:41:29PM +0200, Ard Biesheuvel wrote:
> Add support macros to conditionally yield the NEON (and thus the CPU)
> that may be called from the assembler code.
> 
> In some cases, yielding the NEON involves saving and restoring a
> non-trivial amount of context (especially in the CRC folding
> algorithms), and so the macro is split into three, and the code in
> between is only executed when the yield path is taken, allowing the
> context to be preserved.
> The third macro takes an optional label argument that marks the resume
> path after a yield has been performed.

Minor comments below, mostly just suggestions/observations.

With the missing #include in asm-offsets.c fixed (if you think it's
appropriate):

Reviewed-by: Dave Martin <Dave.Martin at arm.com>

> Signed-off-by: Ard Biesheuvel <ard.biesheuvel at linaro.org>
> ---
>  arch/arm64/include/asm/assembler.h | 64 ++++++++++++++++++++
>  arch/arm64/kernel/asm-offsets.c    |  2 +
>  2 files changed, 66 insertions(+)
> 
> diff --git a/arch/arm64/include/asm/assembler.h b/arch/arm64/include/asm/assembler.h
> index d354eb7f2f0c..fb11514273d9 100644
> --- a/arch/arm64/include/asm/assembler.h
> +++ b/arch/arm64/include/asm/assembler.h
> @@ -623,4 +623,68 @@ USER(\label, ic	ivau, \tmp2)			// invalidate I line PoU
>  	.endif
>  	.endm
>  
> +/*
> + * Check whether to yield to another runnable task from kernel mode NEON code
> + * (which runs with preemption disabled).
> + *
> + * if_will_cond_yield_neon
> + *        // pre-yield patchup code
> + * do_cond_yield_neon
> + *        // post-yield patchup code
> + * endif_yield_neon    <label>
> + *
> + * where <label> is optional, and marks the point where execution will resume
> + * after a yield has been performed. If omitted, execution resumes right after
> + * the endif_yield_neon invocation.

Maybe add a comment describing cond_yield_neon, e.g.:

 *
 * As a convenience, in the case where no patchup code is required,
 * the above sequence may be abbreviated to:
 *
 * cond_yield_neon <label>
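
To illustrate, the abbreviated form might be used like this in a
block-processing loop where all working state is reloaded from memory at
the top of each iteration, so nothing needs to survive a yield (a
hypothetical sketch; the labels, registers and loop body are invented):

	// Process one 16-byte block per iteration.  v0 is reloaded at 0b,
	// so no NEON state is live across the yield and no patchup code
	// is needed.
	0:	ld1		{v0.16b}, [x1], #16	// load next input block
		// ... transform v0 and store the result ...
		subs		w2, w2, #1		// blocks remaining?
		b.eq		1f
		cond_yield_neon	0b			// maybe reschedule; resume at 0b
		b		0b
	1:	ret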

> + *
> + * Note that the patchup code does not support assembler directives that
> + * change the output section; any use of such directives is undefined.
> + *
> + * The yield itself consists of the following:
> + * - Check whether the preempt count is exactly 1, in which case re-enabling
> + *   preemption once will make the task preemptible. If this is not the case,
> + *   yielding is pointless.
> + * - Check whether TIF_NEED_RESCHED is set, and if so, disable and re-enable
> + *   kernel mode NEON (which will trigger a reschedule), and branch to the
> + *   yield fixup code.
> + *
> + * This macro sequence clobbers x0, x1 and the flags register unconditionally,
> + * and may clobber x2 .. x18 if the yield path is taken.
> + */

Does this mean that the pre-yield patchup code can safely refer to
x2..x18, but the post-yield patchup code and the code at <label> (or
otherwise immediately following endif_yield_neon) can't?
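
To make that concrete, a full-form use with patchup code might look like
this (hypothetical sketch; the callee-saved state pointer in x19 and the
choice of v8-v11 are invented for illustration).  The pre-yield code runs
before the yield, so it still sees x2..x18 intact; the post-yield code
runs after kernel_neon_end/kernel_neon_begin and so relies only on
callee-saved registers:

	if_will_cond_yield_neon
	// pre-yield patchup: only executed when a yield is about to happen.
	// x2..x18 are still intact at this point.
	st1		{v8.16b-v11.16b}, [x19]	// x19: callee-saved state pointer
	do_cond_yield_neon
	// post-yield patchup: the yield may have clobbered x0..x18 and all
	// NEON registers, so reload via the callee-saved pointer only.
	ld1		{v8.16b-v11.16b}, [x19]
	endif_yield_neon			// no label: fall through to resume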

> +
> +	.macro		cond_yield_neon, lbl
> +	if_will_cond_yield_neon
> +	do_cond_yield_neon
> +	endif_yield_neon	\lbl
> +	.endm
> +
> +	.macro		if_will_cond_yield_neon
> +#ifdef CONFIG_PREEMPT
> +	get_thread_info	x0
> +	ldr		w1, [x0, #TSK_TI_PREEMPT]
> +	ldr		x0, [x0, #TSK_TI_FLAGS]
> +	cmp		w1, #PREEMPT_DISABLE_OFFSET
> +	csel		x0, x0, xzr, eq
> +	tbnz		x0, #TIF_NEED_RESCHED, .Lyield_\@	// needs rescheduling?
> +#endif
> +	/* fall through to endif_yield_neon */
> +	.subsection	1

Can we junk the code in this case rather than including it in the
kernel, like

	.section .discard.cond_yield_neon

(this seems to conform to some notion of a standard discarded section
name; see <asm-generic/vmlinux.lds.h>).  This additionally discards
the do_cond_yield_neon invocation (which I guess is what we'd expect
for a non-preemptible kernel?)

If doing that discard, a note could be added in the comment block
to warn people not to assume that the patchup code and any labels
defined in it will definitely end up in the kernel image.

Since the patchup sequences aren't likely to be many or large, it's
not the end of the world if we don't do this discarding though.
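
Concretely, that could look something like the following (untested
sketch), keeping the out-of-line .subsection for the preemptible case and
routing the whole yield path to a discarded section otherwise:

	.macro		if_will_cond_yield_neon
#ifdef CONFIG_PREEMPT
	get_thread_info	x0
	ldr		w1, [x0, #TSK_TI_PREEMPT]
	ldr		x0, [x0, #TSK_TI_FLAGS]
	cmp		w1, #PREEMPT_DISABLE_OFFSET
	csel		x0, x0, xzr, eq
	tbnz		x0, #TIF_NEED_RESCHED, .Lyield_\@	// needs rescheduling?
	.subsection	1
#else
	.section	.discard.cond_yield_neon, "ax"
#endif
.Lyield_\@ :
	.endm

The .previous in endif_yield_neon should return to the original section
in either case.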

> +.Lyield_\@ :
> +	.endm
> +
> +	.macro		do_cond_yield_neon
> +	bl		kernel_neon_end
> +	bl		kernel_neon_begin
> +	.endm
> +
> +	.macro		endif_yield_neon, lbl
> +	.ifnb		\lbl
> +	b		\lbl
> +	.else
> +	b		.Lyield_out_\@
> +	.endif

Should you include

	.purgem do_cond_yield_neon
	.purgem endif_yield_neon

here?

Otherwise, I think you would get macro redefinition errors if
if_will_cond_yield_neon is used more than once in a given file.

You could maybe protect against nested and misordered macro uses by the
following, though it feels a bit like overkill.  Alternatively you
could use a magic symbol to record the current state, similarly to
frame_{push,pop}.

	.macro __if_will_cond_yield_neon
	.purgem if_will_cond_yield_neon
	//...

	.macro do_cond_yield_neon
	.purgem do_cond_yield_neon
	//...

	.macro endif_yield_neon
	.purgem endif_yield_neon
	//...

	.macro if_will_cond_yield_neon
	__if_will_cond_yield_neon
	.endm // if_will_cond_yield_neon
	.endm // endif_yield_neon
	.endm // do_cond_yield_neon
	.endm // __if_will_cond_yield_neon

	.macro if_will_cond_yield_neon
	__if_will_cond_yield_neon
	.endm

> +	.previous
> +.Lyield_out_\@ :
> +	.endm
> +
>  #endif	/* __ASM_ASSEMBLER_H */
> diff --git a/arch/arm64/kernel/asm-offsets.c b/arch/arm64/kernel/asm-offsets.c
> index 1303e04110cd..1e2ea2e51acb 100644
> --- a/arch/arm64/kernel/asm-offsets.c
> +++ b/arch/arm64/kernel/asm-offsets.c
> @@ -93,6 +93,8 @@ int main(void)
>    DEFINE(DMA_TO_DEVICE,		DMA_TO_DEVICE);
>    DEFINE(DMA_FROM_DEVICE,	DMA_FROM_DEVICE);
>    BLANK();

#include <linux/preempt.h> here, since that's where
PREEMPT_DISABLE_OFFSET is defined?

> +  DEFINE(PREEMPT_DISABLE_OFFSET, PREEMPT_DISABLE_OFFSET);
> +  BLANK();

[...]

Cheers
---Dave


