[RFC PATCH 6/6] block: implement NVMEM provider
Greg Kroah-Hartman
gregkh at linuxfoundation.org
Fri Jul 21 04:11:40 PDT 2023
On Fri, Jul 21, 2023 at 11:40:51AM +0100, Daniel Golle wrote:
> On Thu, Jul 20, 2023 at 11:31:06PM -0700, Christoph Hellwig wrote:
> > On Thu, Jul 20, 2023 at 05:02:32PM +0100, Daniel Golle wrote:
> > > On Thu, Jul 20, 2023 at 12:04:43AM -0700, Christoph Hellwig wrote:
> > > > The layering here is exactly the wrong way around. This block device
> > > > as nvmem provider has no business sitting in the block layer and being
> > > > keyed off the gendisk registration. Instead you should create a new
> > > > nvmem backend that opens the block device as needed if it fits your
> > > > OF description, without any changes to the core block layer.
> > > >
> > >
> > > Ok. I will use a class_interface instead.
> >
> > I'm not sure a class_interface makes much sense here. Why does the
> > block layer even need to know about you using a device as an nvmem provider?
>
> It doesn't. But it has to notify the nvmem-providing driver about the
> addition of new block devices. This is what I'm using class_interface
> for, simply to hook into .add_dev of the block_class.
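For context, a minimal sketch of the class_interface hook-up Daniel describes might look roughly like the following. All names and the callback body are hypothetical, block_class is not exposed to ordinary drivers by a public header (which is part of the objection below), and both the add_dev prototype and the constness of block_class have changed across kernel releases, so adjust for the target kernel:

/*
 * Hypothetical sketch of the approach described above: an nvmem driver
 * registering a class_interface on block_class so it is told about every
 * block device that appears.  Not the actual patch; OF matching and
 * error handling are omitted.
 */
#include <linux/blkdev.h>
#include <linux/device.h>
#include <linux/module.h>

extern struct class block_class;	/* private to block core; const on newer kernels */

/* Older kernels pass a second "struct class_interface *" argument here. */
static int blk_nvmem_add_dev(struct device *dev)
{
	struct gendisk *disk = dev_to_disk(dev);

	/* here one would match the disk against the OF description and,
	 * on a hit, register an nvmem device backed by it */
	pr_info("blk_nvmem: saw disk %s\n", disk->disk_name);
	return 0;
}

static struct class_interface blk_nvmem_interface = {
	.class   = &block_class,
	.add_dev = blk_nvmem_add_dev,
};

static int __init blk_nvmem_init(void)
{
	return class_interface_register(&blk_nvmem_interface);
}

static void __exit blk_nvmem_exit(void)
{
	class_interface_unregister(&blk_nvmem_interface);
}

module_init(blk_nvmem_init);
module_exit(blk_nvmem_exit);
MODULE_LICENSE("GPL");

The appeal of class_interface here is that the driver core calls .add_dev for every device already in the class at registration time and for every device added afterwards, so no block-core changes are needed beyond access to block_class itself.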
Why is this single type of block device special enough to require this, yet all
others are not? Encoding this into the block layer feels like a huge
layering violation to me; why not do it the way all other block drivers do
it instead?
> > As far as I can tell your provider should layer entirely above the
> > block layer and not have to be integrated with it.
>
> My approach using class_interface doesn't require any changes to be
> made to existing block code. However, it does use block_class. If
> you see any other good option to implement matching and usage of
> block devices by in-kernel users, please let me know.
Do not use block_class, again, that should only be for the block core to
touch. Individual block drivers should never be poking around in it.
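For contrast, the layering Christoph suggests could look roughly like the sketch below: an ordinary nvmem provider that sits entirely above the block layer and opens the backing device only when a read comes in. The "/dev/mtdblock0" path, the use of filp_open()/kernel_read() and all names are illustrative assumptions, not anything from the series; a real driver would resolve the device from its OF description and use the proper block-device open helpers.

/*
 * Rough sketch (not from the series): an nvmem backend layered above the
 * block layer that opens the backing block device on demand.  The path,
 * the nvmem size and all names are made up for illustration.
 */
#include <linux/fs.h>
#include <linux/module.h>
#include <linux/nvmem-provider.h>
#include <linux/platform_device.h>
#include <linux/sizes.h>

static int blk_backed_reg_read(void *priv, unsigned int offset,
			       void *val, size_t bytes)
{
	struct file *f;
	loff_t pos = offset;
	ssize_t ret;

	f = filp_open("/dev/mtdblock0", O_RDONLY, 0);	/* hypothetical path */
	if (IS_ERR(f))
		return PTR_ERR(f);

	ret = kernel_read(f, val, bytes, &pos);
	filp_close(f, NULL);

	if (ret < 0)
		return ret;
	return ret == bytes ? 0 : -EIO;
}

static int blk_backed_nvmem_probe(struct platform_device *pdev)
{
	struct nvmem_config config = {
		.dev       = &pdev->dev,
		.name      = "blk-backed-nvmem",
		.read_only = true,
		.reg_read  = blk_backed_reg_read,
		.size      = SZ_4K,	/* size of the exposed area, made up */
		.word_size = 1,
		.stride    = 1,
	};

	return PTR_ERR_OR_ZERO(devm_nvmem_register(&pdev->dev, &config));
}

static struct platform_driver blk_backed_nvmem_driver = {
	.probe  = blk_backed_nvmem_probe,
	.driver = { .name = "blk-backed-nvmem" },
};
module_platform_driver(blk_backed_nvmem_driver);
MODULE_LICENSE("GPL");

A driver shaped like this needs nothing from the block core, which is the point being made here; what it still has to solve is how to find out which block device matches its OF description, which is the question Daniel raises above.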
thanks,
greg k-h