[PATCH] lib: sbi: pmu: Rework SSE callbacks

Atish Patra atish.patra at linux.dev
Fri May 16 17:02:50 PDT 2025


On 5/16/25 8:21 AM, Clément Léger wrote:
> The S-mode PMU driver might need to enable/disable the SSE event
> regularly in order to mask SSE events. This commit reworks the PMU SSE
> callbacks: SSE register/unregister now only modify the MIDELEG register,
> and enable/disable now set/clear MIE. MIP clearing is also removed from
> pmu_sse_enable() since it could lead to losing pending interrupts.
> 

The changes look good. However, can you please split this into two commits?
One for the register/unregister part and another for the MIP change?

The MIP clearing part is a critical change for any future SSE event
similar to LCOFIP. Thus, I would prefer that the history explicitly
preserves why the MIP clearing was removed.
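
To illustrate why: if an overflow is already pending when S-mode
(re-)enables the event, the old pmu_sse_enable() silently drops it by
clearing MIP, whereas only setting MIE keeps it deliverable. A minimal
host-side sketch (MIP/MIE modelled as plain bitmasks, LCOFIP assumed at
bit 13 per Sscofpmf; not OpenSBI code):

#include <stdio.h>

#define LCOFIP (1UL << 13)  /* local counter overflow interrupt, Sscofpmf */

static unsigned long mip, mie;

/* the overflow becomes an SSE event only while both pending and enabled */
static int deliverable(void) { return (mip & mie & LCOFIP) != 0; }

int main(void)
{
	/* a counter overflowed while the SSE event was masked */
	mip |= LCOFIP;

	/* old pmu_sse_enable(): clearing MIP drops the pending overflow */
	mip &= ~LCOFIP;
	mie |= LCOFIP;
	printf("old enable: deliverable=%d\n", deliverable()); /* 0, lost */

	/* new pmu_sse_enable(): only unmask MIE, the pending overflow survives */
	mip |= LCOFIP;  /* same pre-condition as above */
	mie |= LCOFIP;
	printf("new enable: deliverable=%d\n", deliverable()); /* 1 */
	return 0;
}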


> Signed-off-by: Clément Léger <cleger at rivosinc.com>
> ---
>   lib/sbi/sbi_pmu.c | 29 +++++++++++++++++++----------
>   1 file changed, 19 insertions(+), 10 deletions(-)
> 
> diff --git a/lib/sbi/sbi_pmu.c b/lib/sbi/sbi_pmu.c
> index 5983a784..46c0e0fb 100644
> --- a/lib/sbi/sbi_pmu.c
> +++ b/lib/sbi/sbi_pmu.c
> @@ -1097,24 +1097,15 @@ void sbi_pmu_exit(struct sbi_scratch *scratch)
>   
>   static void pmu_sse_enable(uint32_t event_id)
>   {
> -	struct sbi_pmu_hart_state *phs = pmu_thishart_state_ptr();
> -	unsigned long irq_mask = sbi_pmu_irq_mask();
> -
> -	phs->sse_enabled = true;
> -	csr_clear(CSR_MIDELEG, irq_mask);
> -	csr_clear(CSR_MIP, irq_mask);
> -	csr_set(CSR_MIE, irq_mask);
> +	csr_set(CSR_MIE, sbi_pmu_irq_mask());
>   }
>   
>   static void pmu_sse_disable(uint32_t event_id)
>   {
> -	struct sbi_pmu_hart_state *phs = pmu_thishart_state_ptr();
>   	unsigned long irq_mask = sbi_pmu_irq_mask();
>   
>   	csr_clear(CSR_MIE, irq_mask);
>   	csr_clear(CSR_MIP, irq_mask);
> -	csr_set(CSR_MIDELEG, irq_mask);
> -	phs->sse_enabled = false;
>   }
>   
>   static void pmu_sse_complete(uint32_t event_id)
> @@ -1122,7 +1113,25 @@ static void pmu_sse_complete(uint32_t event_id)
>   	csr_set(CSR_MIE, sbi_pmu_irq_mask());
>   }
>   
> +static void pmu_sse_register(uint32_t event_id)
> +{
> +	struct sbi_pmu_hart_state *phs = pmu_thishart_state_ptr();
> +
> +	phs->sse_enabled = true;
> +	csr_clear(CSR_MIDELEG, sbi_pmu_irq_mask());
> +}
> +
> +static void pmu_sse_unregister(uint32_t event_id)
> +{
> +	struct sbi_pmu_hart_state *phs = pmu_thishart_state_ptr();
> +
> +	phs->sse_enabled = false;
> +	csr_set(CSR_MIDELEG, sbi_pmu_irq_mask());
> +}
> +
>   static const struct sbi_sse_cb_ops pmu_sse_cb_ops = {
> +	.register_cb = pmu_sse_register,
> +	.unregister_cb = pmu_sse_unregister,
>   	.enable_cb = pmu_sse_enable,
>   	.disable_cb = pmu_sse_disable,
>   	.complete_cb = pmu_sse_complete,
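
For reference, the LCOFIP handling through the reworked callbacks could be
modelled roughly as below (CSRs as plain bitmasks, LCOFIP assumed at bit 13;
the step where the M-mode overflow handler masks MIE before injecting the
event is my assumption, inferred from complete_cb re-setting it):

#include <stdio.h>

#define LCOFIP (1UL << 13)

static unsigned long mideleg = LCOFIP, mie, mip; /* LCOFIP starts delegated to S-mode */

/* stand-ins for the reworked callbacks, acting on plain variables */
static void sse_register(void)   { mideleg &= ~LCOFIP; } /* take the IRQ in M-mode */
static void sse_enable(void)     { mie |= LCOFIP; }
static void sse_complete(void)   { mie |= LCOFIP; }      /* re-arm after S-mode completes */
static void sse_disable(void)    { mie &= ~LCOFIP; mip &= ~LCOFIP; }
static void sse_unregister(void) { mideleg |= LCOFIP; }  /* hand LCOFIP back to S-mode */

static void show(const char *when)
{
	printf("%-10s mideleg=%d mie=%d mip=%d\n", when,
	       !!(mideleg & LCOFIP), !!(mie & LCOFIP), !!(mip & LCOFIP));
}

int main(void)
{
	sse_register();   show("register");
	sse_enable();     show("enable");
	mip |= LCOFIP;                   /* a counter overflows */
	mie &= ~LCOFIP; mip &= ~LCOFIP;  /* assumed: handler masks MIE, injects the event */
	show("irq");
	sse_complete();   show("complete");
	sse_disable();    show("disable");
	sse_unregister(); show("unregister");
	return 0;
}

The nice property of the rework is that MIDELEG is only touched at
register/unregister time, so frequent enable/disable from the S-mode driver
only toggles MIE.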



