[PATCH v4 2/3] spi: dt-bindings: Describe stacked/parallel memories modes
Miquel Raynal
miquel.raynal at bootlin.com
Fri Dec 17 04:58:00 PST 2021
Hello Pratyush,
p.yadav at ti.com wrote on Fri, 17 Dec 2021 18:09:10 +0530:
> On 16/12/21 05:25PM, Miquel Raynal wrote:
> > Hello Pratyush,
> >
> > p.yadav at ti.com wrote on Wed, 15 Dec 2021 01:14:33 +0530:
> >
> > > Hi Miquel,
> > >
> > > On 10/12/21 09:10PM, Miquel Raynal wrote:
> > > > Describe two new memory modes:
> > > > - A stacked mode where the bus is shared but the address space is
> > > > extended with an additional wire.
> > > > - A parallel mode with parallel buses accessing parallel flashes across
> > > > which the data is spread.
> > > >
> > > > Signed-off-by: Miquel Raynal <miquel.raynal at bootlin.com>
> > > > ---
> > > > .../bindings/spi/spi-peripheral-props.yaml | 29 +++++++++++++++++++
> > > > 1 file changed, 29 insertions(+)
> > > >
> > > > diff --git a/Documentation/devicetree/bindings/spi/spi-peripheral-props.yaml b/Documentation/devicetree/bindings/spi/spi-peripheral-props.yaml
> > > > index 5dd209206e88..4194fee8f556 100644
> > > > --- a/Documentation/devicetree/bindings/spi/spi-peripheral-props.yaml
> > > > +++ b/Documentation/devicetree/bindings/spi/spi-peripheral-props.yaml
> > > > @@ -82,6 +82,35 @@ properties:
> > > > description:
> > > > Delay, in microseconds, after a write transfer.
> > > >
> > > > + stacked-memories:
> > > > + $ref: /schemas/types.yaml#/definitions/uint64-matrix
> > >
> > > Why matrix? Can't you use array here? Sorry, I am not much familiar with
> > > JSON schema.
> >
> > Yes, Rob also opened that discussion and I answered there on this
> > point. I agree this should be defined as an array, but the current
> > tooling refused to accept what I wanted from a dt-writing point of
> > view. More on that in the dedicated thread.
> >
> > > > + description: Several SPI memories can be wired in stacked mode.
> > > > + This basically means that either a device features several chip
> > > > + selects, or that different devices must be seen as a single
> > > > + bigger chip. This doubles (or more) the total address space
> > > > + with only a single additional wire, while still needing to
> > > > + repeat the commands when crossing a chip boundary. The size of
> > > > + each chip should be provided as members of the array.
> > > > + minItems: 2
> > > > + maxItems: 2
> > > > + items:
> > > > + maxItems: 1
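For illustration only, here is how a flash node using this property
might look in practice (the node name, chip-select layout and sizes
below are made up, and the final property type is still being
discussed):

	flash@0 {
		compatible = "jedec,spi-nor";
		reg = <0>, <1>;
		/* Two stacked 64 MiB dies seen as one 128 MiB device */
		stacked-memories = /bits/ 64 <0x4000000 0x4000000>;
	};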
> > >
> > > Thanks. This looks better to me.
> > >
> > > But before we go ahead, I think there has been some confusion around
> > > what exactly your patches intend to support. Let's clear them up first.
> > > What type of setup do you want to support?
> >
> > Before I try to clarify your questions below, the setup that I have is:
> >
> > Xilinx ZynqMP QSPI controller + 2 flashes.
> >
> > What I want to describe is the specific handling that the Xilinx QSPI
> > controller is capable of. This controller has two modes when wired to
> > more than one flash:
> > - stacked
> > - parallel
> >
> > I have not gone through the documentation nor the code in detail yet,
> > but I believe that handling two flashes in the stacked configuration,
> > besides a couple of optimizations allowed by the hardware, is close
> > to what any controller would do. There is maybe one difference
> > though: in "stacked" mode this controller treats the two flashes as a
> > single one. That is why, if we want to support this "advanced" mode,
> > we *need* a way to know that it should be used, because the
> > controller expects a wider address range.
>
> Ok.
>
> >
> > About the parallel configuration, there is absolutely no doubt that
> > we have no choice but to describe how the data is spread across the
> > two devices. We don't really care about the manner, but the
> > controller needs to know how to assert the two CS lines internally,
> > so this is definitely something that needs to be described.
>
> Yes. And with device sizes as part of the property we should be able to
> do this.
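Again purely as an illustration, assuming a parallel-memories property
symmetrical to stacked-memories (as proposed in this series), a pair of
16 MiB flashes sharing each transfer might be described as follows
(hypothetical values):

	flash@0 {
		compatible = "jedec,spi-nor";
		reg = <0>, <1>;
		/* Data spread across two 16 MiB devices */
		parallel-memories = /bits/ 64 <0x1000000 0x1000000>;
	};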
>
> >
> > Now the question you might ask is: why do we define these properties
> > as flash properties then? And this is a really good question. I think
> > both actually work as long as we consider that we can only have
> > either a spi-memory or any other type of device on a single bus, but
> > not both at the same time. In v1 I proposed a controller property.
> > Mark proposed to switch to a device property, which I did from v2
> > onward.
>
> I also think making it a device property makes more sense. You are
> describing how the flashes are laid out, which is independent of the
> controller. With the same SoC, I could have one board with a single
> flash and another with a dual-die flash.
>
> >
> > > 1. One single flash but with multiple dies, with each die sitting on a
> > > different CS.
> >
> > Yes.
> >
> > > 2. Two (or more) identical but independent flash memories to be
> > > treated as one.
> >
> > Yes.
> >
> > > 3. Two (or more) different and independent flash memories to be
> > > treated as one.
> >
> > I don't know. My first proposal excluded these, but I believe we can
> > handle them with the change you proposed (the device sizes as part of
> > the property).
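Purely for illustration, two different chips of, say, 16 MiB and
32 MiB, would then simply be described with two different entries
(hypothetical values again):

	stacked-memories = /bits/ 64 <0x1000000 0x2000000>;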
> >
> > > In our earlier exchanges you said you want to support 2. And when I
> > > wanted you to account for 3 as well you said we should use mtdconcat for
> > > that.
> >
> > Not that we should, but that we could, because from a core
> > perspective it's way more challenging than supporting identical
> > devices. But the conversation drifted toward the software backend
> > that we should use rather than the actual description, because
> > mtdconcat is not a feature which benefits from any kind of
> > description yet, so even if we use mtdconcat as backend, we will need
> > some kind of description here as well. So, as I said previously, I am
> > fine considering these setups in my proposal; that's why I updated it
> > to include the device sizes, even though that is way out of scope
> > compared to my initial target.
>
> Right, and I appreciate that :-)
>
> >
> > But here we are talking about driver code, which has nothing to do with
> > the bindings. If we focus on the bindings, I believe the solution with
> > the sizes covers all cases.
>
> I think it is important to discuss _how_ we would implement this in the
> driver because this could help identify flaws or shortcomings in the
> bindings. In this case, I agree with you that the solution with sizes
> should cover all 3 cases, and we now have a lot more flexibility in how
> we design the driver.
>
> >
> > > So my question is, why can't we use mtdconcat for 2 as well, since
> > > it is just a special case of 3? And if we are using mtdconcat, then
> > > why do we need this at all? Shouldn't you then choose the chip at
> > > the MTD layer and use the respective SPI device to get the CS value,
> > > which would make this property useless?
> >
> > Reason 1:
> > As depicted above, while the stacked mode might more or less fit the
> > mtdconcat idea, that is absolutely not the case for the parallel
> > mode.
>
> Right. My question was mostly about the stacked mode.
>
> >
> > Reason 2:
> > mtdconcat is a software backend. Here we are discussing bindings.
> > No matter what backend we finally pick, there will be the need for a
> > proper description, and so far there has been no binding for
> > mtdconcat (even though I tried to push in favor of one a while ago,
> > without success). So no matter what software solution we adopt, we
> > *will* need an upstream binding description at some point. But
> > mtdconcat really means "there are two devices we want to consider as
> > a single one", while here the point is: we have two devices that get
> > abstracted by the controller, and we still must describe that.
> >
> > > I can see this making sense for case 1. For that case you said you don't
> > > have an existing datasheet or device to propose. And if there is no real
> > > device doing it I see little point in figuring out a binding for it.
> >
> > Yes, because we somewhat focused the debate on the devices, while I
> > was initially talking about a controller abstraction. There is (at
> > least) one controller doing such an abstraction, the Xilinx ZynqMP
> > generic QSPI controller; the spec is public (chapter 24):
> > https://www.xilinx.com/support/documentation/user_guides/ug1085-zynq-ultrascale-trm.pdf
>
> I think this _is_ a device problem first. The controller part should be
> easy to do once we solve the device problem. Once we clearly describe
> how the devices are laid out it should be simple for the controller side
> to figure out what to do.
>
> >
> > I hope all this clarifies the situation!
>
> Thanks. I am quite happy with this patch. I'll drop my R-by once you
> figure out what type to use with these properties in v5.
Great! Thank you very much for this discussion, it forced me to clarify
things in my head as well :-)
Thanks,
Miquèl