[PATCH v2 3/3] mtd: rawnand: Support for sequential cache reads

Miquel Raynal miquel.raynal at bootlin.com
Wed Jul 19 04:44:45 PDT 2023


Hi Måns,

mans at mansr.com wrote on Wed, 19 Jul 2023 10:26:09 +0100:

> Miquel Raynal <miquel.raynal at bootlin.com> writes:
> 
> > Hi Måns,
> >
> > mans at mansr.com wrote on Tue, 18 Jul 2023 15:03:14 +0100:
> >  
> >> Miquel Raynal <miquel.raynal at bootlin.com> writes:
> >>   
> >> > Hi Måns,
> >> >
> >> > mans at mansr.com wrote on Mon, 17 Jul 2023 14:11:31 +0100:
> >> >    
> >> >> Miquel Raynal <miquel.raynal at bootlin.com> writes:
> >> >>     
> >> >> > So, I should have done that earlier but, could you please slow the
> >> >> > whole operation down, just to see if there is something wrong with the
> >> >> > timings or if we should look in another direction.
> >> >> >
> >> >> > Maybe you could add a boolean to flag if the last CMD was a
> >> >> > READCACHESEQ, READCACHESTART or READCACHEEND, and if the flag is
> >> >> > true, please get the jiffies before and after each waitrdy and
> >> >> > delay_ns. Finally, please print the expected delay and the actual one
> >> >> > and compare to see if something was too fast compared to what we
> >> >> > expected.      
> >> >> 
> >> >> Between which points exactly should the delay be measured?  Also, there
> >> >> is no command called READCACHESTART.  Did you mean READSTART or
> >> >> something else?    
> >> >
> >> > Yeah, whatever command is specific to sequential cache reads:
> >> > https://elixir.bootlin.com/linux/latest/source/drivers/mtd/nand/raw/nand_base.c#L1218
> >> > https://elixir.bootlin.com/linux/latest/source/drivers/mtd/nand/raw/nand_base.c#L1228    
> >> 
> >> I'm still not sure what exactly you want me to measure.  The waitrdy and
> >> ndelay combined, each separately, or something else?
> >>   
> >
> > I would like to know how much time we spend waiting in both cases.
> 
> Which "both" cases?

ndelay and, more importantly, waitrdy:

--- a/drivers/mtd/nand/raw/omap2.c
+++ b/drivers/mtd/nand/raw/omap2.c
@@ -2111,6 +2111,7 @@ static int omap_nand_exec_instr(struct nand_chip *chip,
 
        switch (instr->type) {
        case NAND_OP_CMD_INSTR:
+               /* trace the opcode: flag sequential cache read cmds */
                iowrite8(instr->ctx.cmd.opcode,
                         info->reg.gpmc_nand_command);
                break;
@@ -2135,16 +2136,21 @@ static int omap_nand_exec_instr(struct nand_chip *chip,
                break;
 
        case NAND_OP_WAITRDY_INSTR:
+               /* grab a timestamp before waiting for R/B# */
                ret = info->ready_gpiod ?
                        nand_gpio_waitrdy(chip, info->ready_gpiod, instr->ctx.waitrdy.timeout_ms) :
                        nand_soft_waitrdy(chip, instr->ctx.waitrdy.timeout_ms);
+               /* grab a timestamp after the wait, print the delta */
                if (ret)
                        return ret;
                break;
        }
 
-       if (instr->delay_ns)
+       if (instr->delay_ns) {
+               /* timestamp before the delay */
                ndelay(instr->delay_ns);
+               /* timestamp after the delay, compare with delay_ns */
+       }
 
        return 0;
 }
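To make the markers above concrete, here is an untested sketch of what the
instrumentation could look like. The timed_* helpers and the
last_cmd_was_seq flag are invented for illustration (they are not part of
the driver) and would be hooked in at the commented spots; ktime is used
rather than jiffies because the delays of interest can be well below a
jiffy:

/*
 * Untested debug sketch only: helpers to be called from the spots
 * marked in the diff above. The names below are made up for this
 * discussion, not existing driver code.
 */
#include <linux/delay.h>
#include <linux/ktime.h>
#include <linux/mtd/rawnand.h>
#include <linux/printk.h>

static bool last_cmd_was_seq;

/* NAND_OP_CMD_INSTR: remember whether the last opcode was specific to
 * sequential cache reads.
 */
static void trace_nand_cmd(u8 opcode)
{
	last_cmd_was_seq = (opcode == NAND_CMD_READCACHESEQ ||
			    opcode == NAND_CMD_READCACHEEND);
}

/* NAND_OP_WAITRDY_INSTR: report how long we actually waited for R/B# */
static int timed_soft_waitrdy(struct nand_chip *chip, unsigned long timeout_ms)
{
	ktime_t start = ktime_get();
	int ret = nand_soft_waitrdy(chip, timeout_ms);

	if (last_cmd_was_seq)
		pr_info("waitrdy: %lld ns (timeout %lu ms, ret %d)\n",
			ktime_to_ns(ktime_sub(ktime_get(), start)),
			timeout_ms, ret);
	return ret;
}

/* Trailing delay: compare the expected delay with the time really spent */
static void timed_ndelay(unsigned int delay_ns)
{
	ktime_t start = ktime_get();

	ndelay(delay_ns);
	if (last_cmd_was_seq)
		pr_info("delay: expected %u ns, measured %lld ns\n",
			delay_ns,
			ktime_to_ns(ktime_sub(ktime_get(), start)));
}

If your board wires R/B# to a GPIO, the same wrapper idea applies to
nand_gpio_waitrdy() instead of nand_soft_waitrdy().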

> > Is there something wrong with the "wait ready"? Since we cannot
> > observe the timings with a scope (we are using a "soft" controller
> > implementation), we can easily measure how much time we spend in
> > each operation by taking timestamps before and after.
> >
> > This information is only useful when we are doing operations related
> > to sequential reads.
> 
> I have hooked up some spare GPIOs to a scope, which should be more
> accurate (nanosecond) than software timestamps.  All I need to know is
> what to measure and what to look for in those measurements.

Great. The only issue with the scope is that we might end up looking at
an operation which is not a faulty sequential read op, unless you hack
the core to perform these reads in a loop (with a brutal "while (1)",
as sketched below). But I don't think we need that much precision here:
as a first step, looking at software timestamps as hinted above is
enough to identify the different delays and compare them against
nand_timings.c.
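
For the scope route, the brutal loop could look something like this.
Purely hypothetical debug code, never meant for submission: the function
name, the fixed offset 0 and the caller-provided buffer/length are
placeholders.

/*
 * Hypothetical debug hack: loop a multi-page read forever so the
 * sequential cache read sequence repeats and the scope can trigger
 * on it. len must span several pages for the core to emit
 * READCACHESEQ at all.
 */
#include <linux/mtd/mtd.h>

static void __maybe_unused loop_seq_reads(struct mtd_info *mtd, void *buf,
					  size_t len)
{
	size_t retlen;

	while (1)
		mtd_read(mtd, 0, len, &retlen, buf);
}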

Please use whatever method is easier for you.

Thanks,
Miquèl
