[PATCH v5 00/21] nvmem: core: introduce NVMEM layouts

Miquel Raynal miquel.raynal at bootlin.com
Thu Jan 5 03:35:34 PST 2023


Hello,

alexander.stein at ew.tq-group.com wrote on Thu, 05 Jan 2023 12:04:52 +0100:

> On Tuesday, 3 January 2023 at 16:51:31 CET, Srinivas Kandagatla wrote:
> > Hi Miquel,
> > 
> > On 03/01/2023 15:39, Miquel Raynal wrote:  
> > > Hi Srinivas,
> > > 
> > > michael at walle.cc wrote on Tue,  6 Dec 2022 21:07:19 +0100:  
> > >> This is now the third attempt to fetch the MAC addresses from the VPD
> > >> for the Kontron sl28 boards. Previous discussions can be found here:
> > >> https://lore.kernel.org/lkml/20211228142549.1275412-1-michael@walle.cc/
> > >> 
> > >> 
> > >> NVMEM cells are typically added by board code or by the devicetree. But
> > >> as the cells get more complex, there is (valid) push back from the
> > >> devicetree maintainers to not put that handling in the devicetree.
> > >> 
> > >> Therefore, introduce NVMEM layouts. They operate on the NVMEM device and
> > >> can add cells at runtime. That way it is possible to add more complex
> > >> cells than is currently possible with the offset/length/bits
> > >> description in the device tree. For example, you can have post processing
> > >> for individual cells (think of endian swapping, or ethernet offset
> > >> handling).
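
For reference, a layout driver under the API proposed in this series is a
small piece of code that registers itself with the NVMEM core and gets
called back to parse the device and add cells, optionally with a per-cell
read post-processing hook. The sketch below loosely follows the structures
and helpers introduced by the series; callback signatures may still change,
and all the "example_*" names, the "vendor,example-layout" compatible and
the 0x10 offset are made up for illustration:

#include <linux/etherdevice.h>
#include <linux/nvmem-provider.h>
#include <linux/of.h>

/* Per-cell post processing: derive the index-th MAC address from the
 * base address stored in the cell. */
static int example_mac_read(void *priv, const char *id, int index,
                            unsigned int offset, void *buf, size_t bytes)
{
        if (bytes != ETH_ALEN)
                return -EINVAL;

        eth_addr_add(buf, index);

        return 0;
}

/* Called by the core for each NVMEM device that matches the layout. */
static int example_add_cells(struct device *dev, struct nvmem_device *nvmem,
                             struct nvmem_layout *layout)
{
        struct nvmem_cell_info info = {
                .name = "mac-address",
                .offset = 0x10,                 /* made-up location */
                .bytes = ETH_ALEN,
                .read_post_process = example_mac_read,
        };

        return nvmem_add_one_cell(nvmem, &info);
}

static const struct of_device_id example_of_match_table[] = {
        { .compatible = "vendor,example-layout" },      /* made-up binding */
        {},
};

static struct nvmem_layout example_layout = {
        .name = "example-layout",
        .of_match_table = example_of_match_table,
        .add_cells = example_add_cells,
};

static int __init example_layout_init(void)
{
        return nvmem_layout_register(&example_layout);
}
subsys_initcall(example_layout_init);
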
> > >> 
> > >> The imx-ocotp driver is the only user of the global post processing hook,
> > >> so convert it to NVMEM layouts and drop the global post processing hook.
> > >> 
> > >> For now, the layouts are selected by the device tree. But the idea is
> > >> that board files or other drivers could also set a layout, although no
> > >> code for that exists yet.
> > >> 
> > >> Thanks to Miquel, the device tree bindings are already approved and
> > >> merged.
> > >> 
> > >> NVMEM layouts as modules?
> > >> While possible in principle, it doesn't make sense as long as the NVMEM
> > >> core can't be compiled as a module. The layouts need to be available at
> > >> probe time. (That is also the reason why they get registered with
> > >> subsys_initcall().) If the NVMEM core could be built as a module, the
> > >> layouts could be modules, too.  
> > > 
> > > I believe this series still applies even though -rc1 (and -rc2) are out
> > > now. May we know whether you plan to merge it anytime soon, or whether
> > > there are still points in the implementation you would like to discuss?
> > > Otherwise I would really like to see it sitting in -next for a few
> > > weeks before it is sent out to Linus, just in case.  
> > 
> > Thanks for the work!
> > 
> > Let's get some testing in -next.  
> 
> This causes the following errors on existing boards (imx8mq-tqma8mq-mba8mx.dtb):
> root@tqma8-common:~# uname -r
> 6.2.0-rc2-next-20230105
> 
> > OF: /soc@0: could not get #nvmem-cell-cells for /soc@0/bus@30000000/efuse@30350000/soc-uid@4
> > OF: /soc@0/bus@30800000/ethernet@30be0000: could not get #nvmem-cell-cells for /soc@0/bus@30000000/efuse@30350000/mac-address@90
> 
> These are caused because '#nvmem-cell-cells = <0>;' is not explicitly set in DT.
> 
> > TI DP83867 30be0000.ethernet-1:0e: error -EINVAL: failed to get nvmem cell io_impedance_ctrl
> > TI DP83867: probe of 30be0000.ethernet-1:0e failed with error -22
> 
> These are caused because of_nvmem_cell_get() now returns -EINVAL instead of -ENODEV if the requested nvmem cell is not available.
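
The reason this bites existing consumers is that drivers treating a cell as
optional key off the exact error code. Roughly (this is not the actual
dp83867 code, just the usual pattern, with a made-up function name):

#include <linux/device.h>
#include <linux/err.h>
#include <linux/nvmem-consumer.h>

/* The cell is optional: only "not described in DT" may be ignored, any
 * other error has to be propagated. */
static int example_apply_io_impedance(struct device *dev)
{
        struct nvmem_cell *cell;

        cell = nvmem_cell_get(dev, "io_impedance_ctrl");
        if (IS_ERR(cell)) {
                if (PTR_ERR(cell) == -ENODEV)
                        return 0;       /* no such cell: keep driver defaults */
                return dev_err_probe(dev, PTR_ERR(cell),
                                     "failed to get nvmem cell io_impedance_ctrl\n");
        }

        /* ... read the cell and program the hardware here ... */

        nvmem_cell_put(cell);
        return 0;
}

If the core starts returning -EINVAL for the very same "cell not described"
situation, the early return is never taken and probe fails, which is what
the log above shows.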

Should we just assume #nvmem-cell-cells = <0> by default? I guess it's
a safe assumption.
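
If so, one way to get that behaviour on the core side (a sketch only, not
necessarily what the final fix will look like) is to resolve the consumer
phandle with the "optional args" parser, which treats an absent
#nvmem-cell-cells in the provider node as zero cells instead of failing:

#include <linux/of.h>

/* Made-up helper: resolve the index-th "nvmem-cells" phandle of a consumer
 * node, defaulting #nvmem-cell-cells to <0> when the provider does not
 * declare it. */
static int example_parse_cell_ref(const struct device_node *np, int index,
                                  struct of_phandle_args *cell_spec)
{
        return of_parse_phandle_with_optional_args(np, "nvmem-cells",
                                                   "#nvmem-cell-cells",
                                                   index, cell_spec);
}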

Thanks,
Miquèl


