[PATCH] lib: sbi: Flush cache entries after writing PMP CSRs

Anup Patel apatel at ventanamicro.com
Thu Oct 30 20:49:43 PDT 2025


On Thu, Oct 30, 2025 at 4:06 PM Samuel Holland
<samuel.holland at sifive.com> wrote:
>
> Hi Anup,
>
> On 2025-10-30 12:50 AM, Anup Patel wrote:
> > On Thu, Oct 30, 2025 at 4:02 AM Samuel Holland
> > <samuel.holland at sifive.com> wrote:
> >> On 2025-10-19 11:36 PM, Anup Patel wrote:
> >>> On Thu, Sep 4, 2025 at 4:26 PM <cp0613 at linux.alibaba.com> wrote:
> >>>>
> >>>> From: Chen Pei <cp0613 at linux.alibaba.com>
> >>>>
> >>>> As the privileged specification states, after writing to the PMP CSRs,
> >>>> an SFENCE.VMA or HFENCE.GVMA instruction should be executed with rs1=x0
> >>>> and rs2=x0 to flush all address translation cache entries.
> >>>>
> >>>> The original implementation does not cover all possible cases. For
> >>>> example, the sbi_hart_map_saddr function calls pmp_set but does not
> >>>> execute the SFENCE.VMA instruction. This patch covers sbi_hart_map_saddr
> >>>> and sbi_hart_unmap_saddr, ensuring that dbtr, sse and other modules can
> >>>> safely update pmpcfg.
> >>>
> >>> The temporary mapping created using sbi_hart_map/unmap_saddr() is
> >>> only accessed by M-mode and does not impact the S-mode TLB
> >>> entries in any way.
> >>
> >> That doesn't match my interpretation of this paragraph from the privileged ISA:
> >>
> >> Implementations with virtual memory are permitted to perform address
> >> translations speculatively and earlier than required by an explicit memory
> >> access, and are permitted to cache them in address translation cache
> >> structures—including possibly caching the identity mappings from effective
> >> address to physical address used in Bare translation modes and M-mode. The PMP
> >> settings for the resulting physical address may be checked (and possibly cached)
> >> at any point between the address translation and the explicit memory access.
> >> Hence, when the PMP settings are modified, M-mode software must synchronize the
> >> PMP settings with the virtual memory system and any PMP or address-translation
> >> caches. This is accomplished by executing an SFENCE.VMA instruction with rs1=x0
> >> and rs2=x0, after the PMP CSRs are written. See Section 19.5.3 for additional
> >> synchronization requirements when the hypervisor extension is implemented.
> >>
> >> Specifically, note "including possibly caching the identity mappings from
> >> effective address to physical address used in Bare translation modes and M-mode".
> >>
> >> So hardware is allowed to (speculatively) fill the TLB with entries that apply
> >> to M-mode, and tag those M-mode TLB entries with the permissions from the PMPs.
> >> The sfence.vma is required to clear those entries, or else accesses _even from
> >> M-mode_ may erroneously use the previous PMP permissions.
> >
> > If hardware is filling TLB entries for M-mode, then it must tag these
> > TLB entries with the privilege level so that they don't match in
> > S-mode; otherwise, a simple M-mode to S-mode switch at boot time would
> > allow bootloaders to access M-mode memory with M-mode permissions.
>
> Yes, agreed, the TLB entries must be tagged with the privilege mode, but that's
> not the problem here. The problem has nothing at all to do with privilege mode
> transitions, and nothing to do with executing code in S-mode, either. The bug
> can be observed purely within the scope of the function calling
> sbi_hart_map_saddr().

The reserved PMP entry under consideration is disabled whenever the
hart is executing in S-mode; it is only enabled between
sbi_hart_map_saddr() and sbi_hart_unmap_saddr(), both of which run
in M-mode, so there is no way the reserved PMP entry impacts S-mode.
In other words, we are not changing any PMP entry which covers
address ranges accessed by S-mode.
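
To make the flow concrete, here is roughly how a typical caller uses
the reserved entry (hypothetical handler for illustration; only
sbi_hart_map_saddr(), sbi_hart_unmap_saddr(), sbi_memcpy() and SBI_OK
are real OpenSBI names):

#include <sbi/sbi_error.h>
#include <sbi/sbi_hart.h>
#include <sbi/sbi_string.h>

/* Hypothetical M-mode SBI call handler reading S-mode shared memory.
 * The reserved PMP entry is live only between map and unmap, and the
 * hart never drops to S-mode in between. */
static int read_shmem(unsigned long shmem_pa, void *buf, unsigned long size)
{
	int rc;

	/* Enables SBI_SMEPMP_RESV_ENTRY for [shmem_pa, shmem_pa + size) */
	rc = sbi_hart_map_saddr(shmem_pa, size);
	if (rc)
		return rc;

	/* M-mode access to memory owned by S-mode */
	sbi_memcpy(buf, (void *)shmem_pa, size);

	/* Disables the reserved entry before returning towards S-mode */
	sbi_hart_unmap_saddr();

	return SBI_OK;
}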

>
> sbi_hart_map_saddr() rewrites a PMP range to allow M-mode access to some address
> range that it previously did not have permission to access. The problem is that
> this PMP change may not be observed until software executes a sfence.vma
> covering at least this address range. So without this patch, it is
> architecturally valid to receive an access fault when M-mode attempts to access
> this address range (i.e. on the very next line of code after calling
> sbi_hart_map_saddr()).

This is a fair point, but we don't need to nuke the entire TLB. A
sfence.vma targeting the affected address range is sufficient, and
the hfence is not required at all. I think this patch needs a
complete re-write, with clear comments explaining the sfence.vma
after the PMP entry change in sbi_hart_map_saddr().
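
Something along these lines (untested sketch, not the final code; the
helper name pmp_flush_saddr() and the 4 KiB flush granularity are my
assumptions):

/* Untested sketch: flush only the pages covered by the reserved PMP
 * entry instead of the whole TLB. Helper name and 4 KiB granularity
 * are assumptions for illustration. */
static void pmp_flush_saddr(unsigned long addr, unsigned long size)
{
	const unsigned long gran = 0x1000; /* assume 4 KiB pages */
	unsigned long va = addr & ~(gran - 1);

	/* sfence.vma with rs1 != x0 and rs2 = x0 flushes cached
	 * translations for a single virtual page; M-mode uses the
	 * identity mapping, so the effective address equals the
	 * physical address being remapped. */
	for (; va < addr + size; va += gran)
		__asm__ __volatile__("sfence.vma %0" : : "r" (va) : "memory");
}

sbi_hart_map_saddr() could call this with the same base and size it
passes to pmp_set(), and sbi_hart_unmap_saddr() would need the range
remembered from map time.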

Regards,
Anup

>
> Regards,
> Samuel
>
> >>> Further, flushing TLB entries in sbi_hart_map/unmap_saddr() also
> >>> slows down many SBI calls which use shared memory.
> >>>
> >>> NACK to this patch.
> >>>
> >>> Regards,
> >>> Anup
> >>>
> >>>>
> >>>> Signed-off-by: Chen Pei <cp0613 at linux.alibaba.com>
> >>>> ---
> >>>>  lib/sbi/sbi_hart.c | 54 ++++++++++++++++++++++++++++------------------
> >>>>  1 file changed, 33 insertions(+), 21 deletions(-)
> >>>>
> >>>> diff --git a/lib/sbi/sbi_hart.c b/lib/sbi/sbi_hart.c
> >>>> index 6a2d7d6..33bba38 100644
> >>>> --- a/lib/sbi/sbi_hart.c
> >>>> +++ b/lib/sbi/sbi_hart.c
> >>>> @@ -364,6 +364,30 @@ static unsigned int sbi_hart_get_smepmp_flags(struct sbi_scratch *scratch,
> >>>>         return pmp_flags;
> >>>>  }
> >>>>
> >>>> +static void pmp_flush(void)
> >>>> +{
> >>>> +       /*
> >>>> +        * As per section 3.7.2 of privileged specification v1.12,
> >>>> +        * virtual address translations can be speculatively performed
> >>>> +        * (even before actual access). These, along with PMP translations,
> >>>> +        * can be cached. This can pose a problem with CPU hotplug
> >>>> +        * and non-retentive suspend scenarios because PMP states are
> >>>> +        * not preserved.
> >>>> +        * It is advisable to flush the caching structures under such
> >>>> +        * conditions.
> >>>> +        */
> >>>> +       if (misa_extension('S')) {
> >>>> +               __asm__ __volatile__("sfence.vma");
> >>>> +
> >>>> +               /*
> >>>> +                * If hypervisor mode is supported, flush caching
> >>>> +                * structures in guest mode too.
> >>>> +                */
> >>>> +               if (misa_extension('H'))
> >>>> +                       __sbi_hfence_gvma_all();
> >>>> +       }
> >>>> +}
> >>>> +
> >>>>  static void sbi_hart_smepmp_set(struct sbi_scratch *scratch,
> >>>>                                 struct sbi_domain *dom,
> >>>>                                 struct sbi_domain_memregion *reg,
> >>>> @@ -543,18 +567,25 @@ int sbi_hart_map_saddr(unsigned long addr, unsigned long size)
> >>>>                              pmp_flags, base, order);
> >>>>         pmp_set(SBI_SMEPMP_RESV_ENTRY, pmp_flags, base, order);
> >>>>
> >>>> +       pmp_flush();
> >>>> +
> >>>>         return SBI_OK;
> >>>>  }
> >>>>
> >>>>  int sbi_hart_unmap_saddr(void)
> >>>>  {
> >>>> +       int rc;
> >>>>         struct sbi_scratch *scratch = sbi_scratch_thishart_ptr();
> >>>>
> >>>>         if (!sbi_hart_has_extension(scratch, SBI_HART_EXT_SMEPMP))
> >>>>                 return SBI_OK;
> >>>>
> >>>>         sbi_platform_pmp_disable(sbi_platform_ptr(scratch), SBI_SMEPMP_RESV_ENTRY);
> >>>> -       return pmp_disable(SBI_SMEPMP_RESV_ENTRY);
> >>>> +       rc = pmp_disable(SBI_SMEPMP_RESV_ENTRY);
> >>>> +
> >>>> +       pmp_flush();
> >>>> +
> >>>> +       return rc;
> >>>>  }
> >>>>
> >>>>  int sbi_hart_pmp_configure(struct sbi_scratch *scratch)
> >>>> @@ -578,26 +609,7 @@ int sbi_hart_pmp_configure(struct sbi_scratch *scratch)
> >>>>                 rc = sbi_hart_oldpmp_configure(scratch, pmp_count,
> >>>>                                                 pmp_log2gran, pmp_addr_max);
> >>>>
> >>>> -       /*
> >>>> -        * As per section 3.7.2 of privileged specification v1.12,
> >>>> -        * virtual address translations can be speculatively performed
> >>>> -        * (even before actual access). These, along with PMP traslations,
> >>>> -        * can be cached. This can pose a problem with CPU hotplug
> >>>> -        * and non-retentive suspend scenario because PMP states are
> >>>> -        * not preserved.
> >>>> -        * It is advisable to flush the caching structures under such
> >>>> -        * conditions.
> >>>> -        */
> >>>> -       if (misa_extension('S')) {
> >>>> -               __asm__ __volatile__("sfence.vma");
> >>>> -
> >>>> -               /*
> >>>> -                * If hypervisor mode is supported, flush caching
> >>>> -                * structures in guest mode too.
> >>>> -                */
> >>>> -               if (misa_extension('H'))
> >>>> -                       __sbi_hfence_gvma_all();
> >>>> -       }
> >>>> +       pmp_flush();
> >>>>
> >>>>         return rc;
> >>>>  }
> >>>> --
> >>>> 2.49.0
> >>>>
> >>>>
> >>>
> >>
>
>
