[RFC PATCH v6 1/6] riscv: mm: dma-noncoherent: Switch using function pointers for cache management

Arnd Bergmann arnd at arndb.de
Fri Jan 6 14:31:33 PST 2023


On Fri, Jan 6, 2023, at 19:55, Prabhakar wrote:
> From: Lad Prabhakar <prabhakar.mahadev-lad.rj at bp.renesas.com>
>
> The current implementation of CMO is handled using the ALTERNATIVE_X()
> macro; this was manageable while only a limited number of platforms
> used it. Now that more and more platforms with CMO support are coming
> through, the use of the ALTERNATIVE_X() macro becomes unmanageable.
>
> To avoid such issues, this patch switches to using function pointers
> instead of the ALTERNATIVE_X() macro for cache management (the only
> drawback being performance compared to the previous approach).
>
> void (*clean_range)(unsigned long addr, unsigned long size);
> void (*inv_range)(unsigned long addr, unsigned long size);
> void (*flush_range)(unsigned long addr, unsigned long size);
>
> The above function pointers are provided to be overridden by platforms
> using the standard approach; for platforms that want to handle the
> operation themselves based on the operation type, the function pointer
> below is provided:
>
> void (*riscv_dma_noncoherent_cmo_ops)(void *vaddr, size_t size,
>                                       enum dma_data_direction dir,
>                                       enum dma_noncoherent_ops ops);
>
> In the current patch, the ZICBOM and T-Head CMO handling is moved over
> to function pointers.
>
> Signed-off-by: Lad Prabhakar <prabhakar.mahadev-lad.rj at bp.renesas.com>

This looks like a nice improvement! I have a few suggestions,
but no objections here.

> diff --git a/arch/riscv/errata/thead/errata.c b/arch/riscv/errata/thead/errata.c
> index fac5742d1c1e..826b2ba3e61e 100644
> --- a/arch/riscv/errata/thead/errata.c
> +++ b/arch/riscv/errata/thead/errata.c
...
> @@ -44,6 +106,15 @@ static bool errata_probe_cmo(unsigned int stage,
> 
>  	riscv_cbom_block_size = L1_CACHE_BYTES;
>  	riscv_noncoherent_supported();
> +
> +	memset(&thead_cmo_ops, 0x0, sizeof(thead_cmo_ops));
> +	if (IS_ENABLED(CONFIG_ERRATA_THEAD_CMO)) {
> +		thead_cmo_ops.clean_range = &thead_cmo_clean_range;
> +		thead_cmo_ops.inv_range = &thead_cmo_inval_range;
> +		thead_cmo_ops.flush_range = &thead_cmo_flush_range;
> +		riscv_noncoherent_register_cache_ops(&thead_cmo_ops);
> +	}

The implementation here looks reasonable; I just wonder whether
the classification as an 'errata' makes sense. I would probably
consider this a 'driver' at this point, but that's just
a question of personal preference.

For the operations structure, I think a 'static const struct
riscv_cache_ops' is more intuitive than assigning the
members individually.
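
Something like this is what I had in mind, as a rough (untested)
sketch, assuming riscv_noncoherent_register_cache_ops() is changed
to take a const pointer:

static const struct riscv_cache_ops thead_cmo_ops = {
	.clean_range	= thead_cmo_clean_range,
	.inv_range	= thead_cmo_inval_range,
	.flush_range	= thead_cmo_flush_range,
};

	...
	if (IS_ENABLED(CONFIG_ERRATA_THEAD_CMO))
		riscv_noncoherent_register_cache_ops(&thead_cmo_ops);

That also avoids the memset() and lets the table live in .rodata.
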
> +
> +enum dma_noncoherent_ops {
> +	NON_COHERENT_SYNC_DMA_FOR_DEVICE = 0,
> +	NON_COHERENT_SYNC_DMA_FOR_CPU,
> +	NON_COHERENT_DMA_PREP,
> +	NON_COHERENT_DMA_PMEM,
> +};
> +
> +/*
> + * struct riscv_cache_ops - Structure for CMO function pointers
> + * @clean_range: Function pointer for clean cache
> + * @inv_range: Function pointer for invalidate cache
> + * @flush_range: Function pointer for flushing the cache
> + * @riscv_dma_noncoherent_cmo_ops: Function pointer for platforms that want
> + *  to handle CMO themselves. If this function pointer is set, the rest of
> + *  the function pointers will be NULL.
> + */
> +struct riscv_cache_ops {
> +	void (*clean_range)(unsigned long addr, unsigned long size);
> +	void (*inv_range)(unsigned long addr, unsigned long size);
> +	void (*flush_range)(unsigned long addr, unsigned long size);
> +	void (*riscv_dma_noncoherent_cmo_ops)(void *vaddr, size_t size,
> +					      enum dma_data_direction dir,
> +					      enum dma_noncoherent_ops ops);
> +};

I don't quite see how the fourth operation is used here.
Are there cache controllers that need something beyond
clean/inv/flush?

> +
> +#else
> +
> +static void riscv_noncoherent_register_cache_ops(struct riscv_cache_ops *ops) {}
> +
> +static inline void riscv_dma_noncoherent_clean(void *vaddr, size_t size) {}
> +
> +static inline void riscv_dma_noncoherent_flush(void *vaddr, size_t size) {}
> +
> +static inline void riscv_dma_noncoherent_inval(void *vaddr, size_t size) {}

I think you can drop the #else path here: if there is no
noncoherent DMA, then nothing should ever call these
functions, right?
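
What I mean is something like the rough sketch below, with only the
declarations kept under whatever symbol already guards the
implementation (I'm using CONFIG_RISCV_DMA_NONCOHERENT here as an
example, and the const again assumes the prototype change suggested
above):

#ifdef CONFIG_RISCV_DMA_NONCOHERENT
void riscv_noncoherent_register_cache_ops(const struct riscv_cache_ops *ops);
void riscv_dma_noncoherent_clean(void *vaddr, size_t size);
void riscv_dma_noncoherent_flush(void *vaddr, size_t size);
void riscv_dma_noncoherent_inval(void *vaddr, size_t size);
#endif /* CONFIG_RISCV_DMA_NONCOHERENT */

Any caller should already be compiled out when noncoherent DMA is
disabled, so the empty stubs never get used.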

> +
> +#ifdef CONFIG_RISCV_ISA_ZICBOM
...
> +struct riscv_cache_ops zicbom_cmo_ops = {
> +	.clean_range = &zicbom_cmo_clean_range,
> +	.inv_range = &zicbom_cmo_inval_range,
> +	.flush_range = &zicbom_cmo_flush_range,
> +};
> +#else
> +struct riscv_cache_ops zicbom_cmo_ops = {
> +	.clean_range = NULL,
> +	.inv_range = NULL,
> +	.flush_range = NULL,
> +	.riscv_dma_noncoherent_cmo_ops = NULL,
> +};
> +#endif
> +EXPORT_SYMBOL(zicbom_cmo_ops);

Same here: if the ZICBOM ISA extension is disabled, nothing should
reference zicbom_cmo_ops. Also, since ZICBOM is a standard
extension, I think it makes sense to always have it enabled, at
least whenever noncoherent DMA is supported, so that it can be the
default that gets used in the absence of any nonstandard cache
controller.
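
As a rough sketch of what I mean (the static pointer name is made up
here, and I'm assuming zicbom_cmo_ops is visible in the same file),
the Zicbom implementation could simply be the initial value that a
registered nonstandard implementation overrides:

static const struct riscv_cache_ops *noncoherent_cache_ops = &zicbom_cmo_ops;

void riscv_noncoherent_register_cache_ops(const struct riscv_cache_ops *ops)
{
	if (ops)
		noncoherent_cache_ops = ops;
}

That would probably also let you drop the EXPORT_SYMBOL() and the
empty #else definition of zicbom_cmo_ops.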

   Arnd


