[RFC PATCH 1/2] spi: Add multiple CS support for a single SPI device

Mahapatra, Amit Kumar amit.kumar-mahapatra at amd.com
Thu Jun 23 04:39:19 PDT 2022


Hello Mark,

> -----Original Message-----
> From: linux-arm-kernel <linux-arm-kernel-bounces at lists.infradead.org> On
> Behalf Of Mark Brown
> Sent: Thursday, June 9, 2022 5:24 PM
> To: Amit Kumar Mahapatra <amit.kumar-mahapatra at xilinx.com>
> Cc: p.yadav at ti.com; miquel.raynal at bootlin.com; richard at nod.at;
> vigneshr at ti.com; git at xilinx.com; michal.simek at xilinx.com; linux-
> spi at vger.kernel.org; linux-arm-kernel at lists.infradead.org; linux-
> kernel at vger.kernel.org; michael at walle.cc; linux-mtd at lists.infradead.org
> Subject: Re: [RFC PATCH 1/2] spi: Add multiple CS support for a single SPI
> device
> 
> On Mon, Jun 06, 2022 at 04:56:06PM +0530, Amit Kumar Mahapatra wrote:
> 
> > ---
> >  drivers/spi/spi-zynqmp-gqspi.c | 30 ++++++++++++++++++++++++++----
> >  drivers/spi/spi.c              | 10 +++++++---
> >  include/linux/spi/spi.h        | 10 +++++++++-
> >  3 files changed, 42 insertions(+), 8 deletions(-)
> 
> Please split the core and driver support into separate patches, they are
> separate things.

Ok, I will split the patches.
> 
> > --- a/drivers/spi/spi.c
> > +++ b/drivers/spi/spi.c
> > @@ -2082,6 +2082,8 @@ static int of_spi_parse_dt(struct spi_controller
> > *ctlr, struct spi_device *spi,  {
> >  	u32 value;
> >  	int rc;
> > +	u32 cs[SPI_CS_CNT_MAX];
> > +	u8 idx;
> >
> >  	/* Mode (clock phase/polarity/etc.) */
> >  	if (of_property_read_bool(nc, "spi-cpha"))
> 
> This is changing the DT binding but doesn't have any updates to the binding
> document.  The binding code also doesn't validate that we don't have too
> many chip selects.

The following updates have been made to the binding documents to add
multiple CS support:
In jedec,spi-nor.yaml, "maxItems" of the "reg" DT property has been
updated to accommodate two CS per SPI device:
https://github.com/torvalds/linux/blob/de5c208d533a46a074eb46ea17f672cc005a7269/Documentation/devicetree/bindings/mtd/jedec%2Cspi-nor.yaml#L49

An example of a flash node with two CS has been added to spi-controller.yaml:
https://github.com/torvalds/linux/blob/de5c208d533a46a074eb46ea17f672cc005a7269/Documentation/devicetree/bindings/spi/spi-controller.yaml#L141
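
For the validation concern: the next version can let the core
bound-check the "reg" entries while parsing them. A rough sketch of
that parsing (SPI_CS_CNT_MAX and the chip_select array come from the
RFC; the helper name of_spi_parse_cs and the exact call sequence are
only assumptions, not the final implementation):

#include <linux/of.h>
#include <linux/spi/spi.h>

static int of_spi_parse_cs(struct spi_controller *ctlr,
			   struct spi_device *spi,
			   struct device_node *nc)
{
	u32 cs[SPI_CS_CNT_MAX];
	int rc, idx;

	/*
	 * of_property_read_variable_u32_array() returns the number of
	 * elements read, or -EOVERFLOW if "reg" holds more entries
	 * than SPI_CS_CNT_MAX, so nodes that declare too many chip
	 * selects are rejected here.
	 */
	rc = of_property_read_variable_u32_array(nc, "reg", cs, 1,
						 SPI_CS_CNT_MAX);
	if (rc < 0) {
		dev_err(&ctlr->dev, "%pOF has no valid 'reg' property (%d)\n",
			nc, rc);
		return rc;
	}

	for (idx = 0; idx < rc; idx++)
		spi->chip_select[idx] = cs[idx];

	return 0;
}
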
> 
> > +	/* Bit mask of the chipselect(s) that the driver
> > +	 * need to use form the chipselect array.
> > +	 */
> > +	u8			cs_index_mask : 2;
> 
> Why make this a bitfield?

https://github.com/torvalds/linux/blob/de5c208d533a46a074eb46ea17f672cc005a7269/Documentation/devicetree/bindings/mtd/jedec%2Cspi-nor.yaml#L49

As per the DT bindings we support at most two chip selects per SPI
device, which is why I made it a two-bit bitfield. But if the number of
chip selects per device ever grows beyond two, the bitfield would have
to be widened again to match, so I think it is better to drop the
bitfield for now and use cs_index_mask as a plain u8.
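
Concretely, the follow-up change in include/linux/spi/spi.h would look
roughly like this (a sketch against the RFC, not the final patch):

-	/* Bit mask of the chipselect(s) that the driver
-	 * need to use form the chipselect array.
-	 */
-	u8			cs_index_mask : 2;
+	/* Bit mask of the chipselect(s) that the driver
+	 * needs to use from the chipselect array.
+	 */
+	u8			cs_index_mask;
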
> 
> I'm also not seeing anything here that checks that the driver supports
> multiple chip selects - it seems like something that's going to cause issues
> and we should probably have something to handle that situation.

In my approach the chip select member (chip_select) of the spi_device
structure is changed to an array (chip_select[2]), which stores the CS
values coming from the "reg" DT property.
With multiple chip selects, spi->chip_select[0] holds the CS0 value and
spi->chip_select[1] holds the CS1 value.
With a single chip select, spi->chip_select[0] holds that chip select
value.

Currently all drivers fetch their chip select value from
spi->chip_select; with this change they need to be modified to fetch it
from spi->chip_select[0] instead. With this approach we do not need to
check whether a driver supports single or multiple CS.
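
To illustrate the mechanical conversion, a driver chip-select hook
would change along these lines (everything named "example_*" and the
register offset are made up for illustration; only the
spi->chip_select[0] access reflects the proposed API change):

#include <linux/bits.h>
#include <linux/io.h>
#include <linux/spi/spi.h>

#define EXAMPLE_CS_REG	0x10	/* made-up register offset */

struct example_ctlr {
	void __iomem *regs;
};

static void example_set_cs(struct spi_device *spi, bool enable)
{
	struct example_ctlr *xctlr =
		spi_controller_get_devdata(spi->controller);
	/*
	 * Was: BIT(spi->chip_select). chip_select is now an array, so
	 * a driver that only supports one CS per device simply reads
	 * the first entry.
	 */
	u32 val = enable ? BIT(spi->chip_select[0]) : 0;

	writel(val, xctlr->regs + EXAMPLE_CS_REG);
}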

I hope I have addressed all your concerns; please let us know what you
think.

Regards,
Amit


