[PATCH v1 06/14] nvmem: core: introduce NVMEM layouts
Srinivas Kandagatla
srinivas.kandagatla at linaro.org
Tue Aug 30 08:23:22 PDT 2022
On 30/08/2022 16:02, Michael Walle wrote:
> On 2022-08-30 16:43, Srinivas Kandagatla wrote:
>
>>>>> diff --git a/drivers/nvmem/layouts/Makefile b/drivers/nvmem/layouts/Makefile
>>>>> new file mode 100644
>>>>> index 000000000000..6fdb3c60a4fa
>>>>> --- /dev/null
>>>>> +++ b/drivers/nvmem/layouts/Makefile
>>>>> @@ -0,0 +1,4 @@
>>>>> +# SPDX-License-Identifier: GPL-2.0
>>>>> +#
>>>>> +# Makefile for nvmem layouts.
>>>>> +#
>>>>> diff --git a/include/linux/nvmem-provider.h b/include/linux/nvmem-provider.h
>>>>> index e710404959e7..323685841e9f 100644
>>>>> --- a/include/linux/nvmem-provider.h
>>>>> +++ b/include/linux/nvmem-provider.h
>>>>> @@ -127,6 +127,28 @@ struct nvmem_cell_table {
>>>>> struct list_head node;
>>>>> };
>>>>> +/**
>>>>> + * struct nvmem_layout - NVMEM layout definitions
>>>>> + *
>>>>> + * @name: Layout name.
>>>>> + * @of_match_table: Open firmware match table.
>>>>> + * @add_cells: Will be called if a nvmem device is found which
>>>>> + * has this layout. The function will add layout
>>>>> + * specific cells with nvmem_add_one_cell().
>>>>> + * @node: List node.
>>>>> + *
>>>>> + * A nvmem device can hold a well defined structure which can just be
>>>>> + * evaluated during runtime. For example a TLV list, or a list of
>>>>> + * "name=val" pairs. A nvmem layout can parse the nvmem device and add
>>>>> + * appropriate cells.
>>>>> + */
>>>>> +struct nvmem_layout {
>>>>> + const char *name;
>>>>> + const struct of_device_id *of_match_table;
>>>>
>>>> Looking at this, I think it's doable to convert the existing
>>>> cell_post_process callback to layouts by adding a layout specific
>>>> callback here.
>>>
>>> can you elaborate on that?
>>
>> If we relax add_cells + add nvmem_unregister_layout() and update
>> struct nvmem_layout to include a post_process callback, like
>>
>> struct nvmem_layout {
>> 	const char *name;
>> 	const struct of_device_id *of_match_table;
>> 	int (*add_cells)(struct nvmem_device *nvmem,
>> 			 struct nvmem_layout *layout);
>> 	struct list_head node;
>> 	/* default callback for every cell */
>> 	nvmem_cell_post_process_t post_process;
>> };
>>
>> then we can move imx-ocotp to this layout style without the add_cells
>> callback, and finally get rid of the cell_post_process callback from both
>> nvmem_config and nvmem_device.
>>
>> If a layout specific post_process callback is available and the cell does
>> not have a callback set, then we can either update the cell's post_process
>> callback with this one or invoke the layout specific callback directly.
>>
>> does that make sense?
>
> Yes I get what you mean. BUT I'm not so sure; it mixes different
> things together. Layouts will add cells, analogous to
> nvmem_add_cells_from_of() or nvmem_add_cells_from_table(). With
> the hook above, the layout mechanism is abused to add post
> processing to cells added by other means.
We are still defining what a layout exactly means w.r.t. nvmem :-)
>
There are two aspects to this as far as the nvmem core is concerned:
1> parse and add cells based on some provider specific algorithm/structure.
2> post-process cell data before the user can see it.
In some cases we need both 1 and 2, while in other cases we only need 1 or
2. Having a unified interface would help with maintenance and remove
duplication; see the sketch below.
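
To make that concrete, a single layout covering both aspects could look
roughly like the sketch below. This is only an illustration built on the
struct proposed above; the foo_* names and the compatible string are made
up, and the exact nvmem_cell_post_process_t signature may differ from what
ends up in the series:

#include <linux/mod_devicetable.h>
#include <linux/nvmem-provider.h>

static const struct of_device_id foo_of_match_table[] = {
	{ .compatible = "vendor,foo-layout" },	/* hypothetical binding */
	{ /* sentinel */ },
};

/* 1> parse the device content and register the cells it describes */
static int foo_add_cells(struct nvmem_device *nvmem,
			 struct nvmem_layout *layout)
{
	/* walk the TLV list / "name=val" pairs and call
	 * nvmem_add_one_cell() for every entry found
	 */
	return 0;
}

/* 2> massage raw cell data before a consumer sees it */
static int foo_post_process(void *priv, const char *id,
			    unsigned int offset, void *buf, size_t bytes)
{
	/* e.g. byte swap or strip padding in buf */
	return 0;
}

static struct nvmem_layout foo_layout = {
	.name		= "foo",
	.of_match_table	= foo_of_match_table,
	.add_cells	= foo_add_cells,
	.post_process	= foo_post_process,
};

A layout that only needs post processing (the imx-ocotp case) would leave
.add_cells unset, and one that only parses cells would leave .post_process
unset.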
> What is then the difference to the driver having that "global"
> post process hook?
W.r.t. post processing there should be no difference.
A cell can have no post-processing, a default post-processing, or a
specific one, depending on its configuration; see the sketch below.
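
As an illustration, the core could pick the callback for a given cell with
a small helper along these lines. The nvmem->layout pointer and the
cell->read_post_process field are assumptions made for this sketch, not
existing fields:

/* hypothetical core helper: which post processing applies to this cell? */
static nvmem_cell_post_process_t
nvmem_cell_pick_post_process(struct nvmem_device *nvmem,
			     struct nvmem_cell_entry *cell)
{
	/* a callback set on the cell itself takes precedence */
	if (cell->read_post_process)
		return cell->read_post_process;

	/* otherwise fall back to the layout-wide default, if any */
	if (nvmem->layout && nvmem->layout->post_process)
		return nvmem->layout->post_process;

	/* no post processing for this cell */
	return NULL;
}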
>
> The correct place to add the per-cell hook in this case would be
> nvmem_add_cells_from_of().
Yes, that is the place where it should go. We have to work on the details,
but if a provider is associated with a layout then this should be doable;
see the sketch below.
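
Assuming the nvmem device keeps a pointer to the layout it matched and the
cell info gains a read_post_process field (both are assumptions for this
sketch, as is np for the parent node), the per-child loop in
nvmem_add_cells_from_of() could become something like:

	for_each_child_of_node(np, child) {
		struct nvmem_cell_info info = {};

		/* ... fill info.name, info.offset and info.bytes from the
		 * child node, as the function already does today ...
		 */

		/* hypothetical: inherit the layout's default callback so
		 * every cell parsed from DT gets the same post processing
		 */
		if (nvmem->layout && nvmem->layout->post_process)
			info.read_post_process = nvmem->layout->post_process;

		ret = nvmem_add_one_cell(nvmem, &info);
		if (ret) {
			of_node_put(child);
			return ret;
		}
	}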
--srini
>
> -michael