[PATCH v2 4/8] lib: sbi_hart: return error when insufficient PMP entries available

Anup Patel anup at brainfault.org
Sun Nov 2 02:55:34 PST 2025


On Wed, Oct 8, 2025 at 2:14 PM Yu-Chien Peter Lin <peter.lin at sifive.com> wrote:
>
> Previously, when the memory regions exceeded the available PMP
> entries, some regions were silently ignored. If the last entry,
> which covers the full 64-bit address space, is not added to a
> domain, the next-stage S-mode software won't have permission to
> access or fetch instructions from its memory. So return early
> with an error message to catch this situation.
>
> Signed-off-by: Yu-Chien Peter Lin <peter.lin at sifive.com>
> ---
>  lib/sbi/sbi_hart.c | 22 ++++++++++++++++------
>  1 file changed, 16 insertions(+), 6 deletions(-)
>
> diff --git a/lib/sbi/sbi_hart.c b/lib/sbi/sbi_hart.c
> index d018619b..032f7dc1 100644
> --- a/lib/sbi/sbi_hart.c
> +++ b/lib/sbi/sbi_hart.c
> @@ -324,6 +324,16 @@ static void sbi_hart_smepmp_set(struct sbi_scratch *scratch,
>         }
>  }
>
> +static bool is_valid_pmp_idx(unsigned int pmp_count, unsigned int pmp_idx)
> +{
> +       if (pmp_count > pmp_idx)
> +               return true;
> +
> +       sbi_printf("ERR: insufficient PMP entries\n");
> +

Redundant newline here. Also, "error:" is better than "ERR:".
I will take care of this at the time of merging this patch.

Reviewed-by: Anup Patel <anup at brainfault.org>

Thanks,
Anup

> +       return false;
> +}
> +
>  static int sbi_hart_smepmp_configure(struct sbi_scratch *scratch,
>                                      unsigned int pmp_count,
>                                      unsigned int pmp_log2gran,
> @@ -348,8 +358,8 @@ static int sbi_hart_smepmp_configure(struct sbi_scratch *scratch,
>                 /* Skip reserved entry */
>                 if (pmp_idx == SBI_SMEPMP_RESV_ENTRY)
>                         pmp_idx++;
> -               if (pmp_count <= pmp_idx)
> -                       break;
> +               if (!is_valid_pmp_idx(pmp_count, pmp_idx))
> +                       return SBI_EFAIL;
>
>                 /* Skip shared and SU-only regions */
>                 if (!SBI_DOMAIN_MEMREGION_M_ONLY_ACCESS(reg->flags)) {
> @@ -372,8 +382,8 @@ static int sbi_hart_smepmp_configure(struct sbi_scratch *scratch,
>                 /* Skip reserved entry */
>                 if (pmp_idx == SBI_SMEPMP_RESV_ENTRY)
>                         pmp_idx++;
> -               if (pmp_count <= pmp_idx)
> -                       break;
> +               if (!is_valid_pmp_idx(pmp_count, pmp_idx))
> +                       return SBI_EFAIL;
>
>                 /* Skip M-only regions */
>                 if (SBI_DOMAIN_MEMREGION_M_ONLY_ACCESS(reg->flags)) {
> @@ -407,8 +417,8 @@ static int sbi_hart_oldpmp_configure(struct sbi_scratch *scratch,
>         unsigned long pmp_addr;
>
>         sbi_domain_for_each_memregion(dom, reg) {
> -               if (pmp_count <= pmp_idx)
> -                       break;
> +               if (!is_valid_pmp_idx(pmp_count, pmp_idx))
> +                       return SBI_EFAIL;
>
>                 pmp_flags = 0;
>
> --
> 2.48.0
>
