Re: [PATCH v2] We add a new configuration CONFIG_SBI_SCRATCH_ALLOC_ALIGNMENT as an int
Raj Vishwanathan
rvishwanathan at mips.com
Mon Jan 20 10:56:06 PST 2025
Xiang W.
Sounds good. Will update the text in Kconfig. The reason for 64 bytes was to align with the cache line size.
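(Editorial illustration, not part of the patch: on a platform with 64-byte
lines, two atomics can also be forced onto separate lines explicitly; the
names below are hypothetical.)

    #include <stdalign.h>

    #define CACHE_LINE_SIZE 64 /* assumed line size; platform dependent */

    /* Each variable gets its own cache line, so an LR/SC retry loop on
     * one cannot keep invalidating the line that holds the other. */
    struct demo_locks {
        alignas(CACHE_LINE_SIZE) unsigned long lock_a;
        alignas(CACHE_LINE_SIZE) unsigned long lock_b;
    };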
Raj
-----Original Message-----
From: Xiang W <wxjstz at 126.com>
Sent: Sunday, January 19, 2025 7:58 PM
To: Raj Vishwanathan <raj.vishwanathan at gmail.com>; opensbi at lists.infradead.org
Cc: Raj Vishwanathan <rvishwanathan at mips.com>
Subject: Re: [PATCH v2] We add a new configuration CONFIG_SBI_SCRATCH_ALLOC_ALIGNMENT as an int
On 2025-01-19 at 12:04 -0800, Raj Vishwanathan wrote:
> If it is set to 0 or not defined, we will continue with the previous
> definition of allocating pointer-size chunks. Otherwise we will use the
> cache line size of 64.
> We have the option of increasing the scratch allocation alignment to
> 64 bytes, the cache line size, to keep two atomic variables from
> sharing a cache line, which can cause livelock on some platforms.
> ---
> lib/sbi/Kconfig | 8 ++++++++
> lib/sbi/sbi_scratch.c | 18 ++++++++++++++++--
> 2 files changed, 24 insertions(+), 2 deletions(-)
>
> diff --git a/lib/sbi/Kconfig b/lib/sbi/Kconfig
> index c6cc04b..1ffa170 100644
> --- a/lib/sbi/Kconfig
> +++ b/lib/sbi/Kconfig
> @@ -69,4 +69,12 @@ config SBI_ECALL_SSE
> config SBI_ECALL_MPXY
> bool "MPXY extension"
> default y
> +config SBI_SCRATCH_ALLOC_ALIGNMENT
> + int "Scratch allocation alignment"
> + default 0
> + help
> + We provide the option for allocation alignment to 64 bytes to avoid
> + livelock on certain platforms due to atomic variables from the same
> + cache line. Leave it 0 for default behaviour.
Don't stress 64 bytes specifically. Something like:
"Customise the alignment of allocations made from the extra space in sbi_scratch. Leave it 0 for the default behaviour."
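For illustration, the Kconfig entry might then read (a sketch of the
suggested wording, not final text):

    config SBI_SCRATCH_ALLOC_ALIGNMENT
            int "Scratch allocation alignment"
            default 0
            help
              Customise the alignment of allocations made from the extra
              space in sbi_scratch. Leave it 0 for the default behaviour
              (pointer-size alignment).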
Regards,
Xiang W
> +
> endmenu
> diff --git a/lib/sbi/sbi_scratch.c b/lib/sbi/sbi_scratch.c
> index ccbbc68..88ea3c7 100644
> --- a/lib/sbi/sbi_scratch.c
> +++ b/lib/sbi/sbi_scratch.c
> @@ -14,6 +14,13 @@
> #include <sbi/sbi_scratch.h>
> #include <sbi/sbi_string.h>
>
> +#if !defined(CONFIG_SBI_SCRATCH_ALLOC_ALIGNMENT) || \
> +    (CONFIG_SBI_SCRATCH_ALLOC_ALIGNMENT == 0)
> +#define SCRATCH_ALLOC_ALIGNMENT __SIZEOF_POINTER__
> +#else
> +#define SCRATCH_ALLOC_ALIGNMENT CONFIG_SBI_SCRATCH_ALLOC_ALIGNMENT
> +#endif
> +
> +
> u32 last_hartindex_having_scratch = 0;
> u32 hartindex_to_hartid_table[SBI_HARTMASK_MAX_BITS + 1] = { -1U };
> struct sbi_scratch *hartindex_to_scratch_table[SBI_HARTMASK_MAX_BITS + 1] = { 0 };
> @@ -70,8 +77,15 @@ unsigned long sbi_scratch_alloc_offset(unsigned long size)
> if (!size)
> return 0;
>
> - size += __SIZEOF_POINTER__ - 1;
> - size &= ~((unsigned long)__SIZEOF_POINTER__ - 1);
> + /*
> + * Align the allocation to SCRATCH_ALLOC_ALIGNMENT so that atomic
> + * variables from different allocations do not share a cache line.
> + * This keeps LR/SC variables in separate cache lines, avoiding
> + * livelock on certain platforms.
> + */
> +
> + size += SCRATCH_ALLOC_ALIGNMENT - 1;
> + size &= ~((unsigned long)SCRATCH_ALLOC_ALIGNMENT - 1);
>
> spin_lock(&extra_lock);
>
> --
> 2.43.0
>
>
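(Editorial note: the rounding in sbi_scratch_alloc_offset() is the standard
power-of-two align-up idiom. A worked instance, assuming
SCRATCH_ALLOC_ALIGNMENT is 64:

    /* Round size up to the next multiple of a power-of-two alignment. */
    size += SCRATCH_ALLOC_ALIGNMENT - 1;                    /* 20 -> 83,  64 -> 127 */
    size &= ~((unsigned long)SCRATCH_ALLOC_ALIGNMENT - 1);  /* 83 -> 64, 127 -> 64  */

so a 20-byte request consumes 64 bytes, and a request that is already an
exact multiple is left unchanged. This only works when the alignment is a
power of two.)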