[PATCH] mtd: rawnand: nandsim: Artificially prevent sequential page reads
Miquel Raynal
miquel.raynal at bootlin.com
Fri Mar 10 00:54:52 PST 2023
The recently added continuous read support makes nandsim unhappy.
Indeed, every supported command must be re-encoded into nandsim's
internal commands, and there is currently no support for the commands
and patterns needed for continuous reads to work.
I tried to add support for them, but nandsim (which is more a tool to
develop/debug upper layers than the raw NAND core itself) suffers from
a big limitation: its internal parser needs to know which exact
operation is ongoing when the address cycles are performed. The lookup
is sequential, from the start of the sequence up to the address
cycles, and does not check what comes next even though that
information is available. This limitation is inherited from the old
API used by the core, which kind of forced controllers to guess rather
early which operation was being performed. Today the core uses a more
transparent API called ->exec_op() which no longer requires controller
drivers to do any guessing, but despite having been converted to
->exec_op(), nandsim is still constrained in this regard and thus
cannot handle sequential page reads: their start sequence is identical
to a regular page read.
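To illustrate the ambiguity, here is a rough sketch (not part of the
patch) of what ns_exec_op() may receive for a regular page read versus
the start of a sequential page read, written with the instruction
helpers from <linux/mtd/rawnand.h>; the variable names, buffer size
and timeout values are made up, only the opcodes matter:

u8 addrs[5] = { };
u8 buf[4096];

struct nand_op_instr regular_read[] = {
	NAND_OP_CMD(NAND_CMD_READ0, 0),
	NAND_OP_ADDR(5, addrs, 0),
	NAND_OP_CMD(NAND_CMD_READSTART, 0),
	NAND_OP_WAIT_RDY(100, 0),
	NAND_OP_DATA_IN(sizeof(buf), buf, 0),
};

struct nand_op_instr seq_read_start[] = {
	NAND_OP_CMD(NAND_CMD_READ0, 0),
	NAND_OP_ADDR(5, addrs, 0),
	NAND_OP_CMD(NAND_CMD_READSTART, 0),
	NAND_OP_WAIT_RDY(100, 0),
	/* Only the instructions below differ */
	NAND_OP_CMD(NAND_CMD_READCACHESEQ, 0),
	NAND_OP_WAIT_RDY(100, 0),
	NAND_OP_DATA_IN(sizeof(buf), buf, 0),
};

Both sequences are identical up to and including the address cycles
(and even the READSTART/tR phase), which is precisely where nandsim's
parser commits to an operation.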
If the internal algorithm is updated some day, it should be possible to
make it support sequential page reads by adding something like:
/* Large page devices continuous read page start */
{OPT_LARGEPAGE, {STATE_CMD_READ0, STATE_ADDR_PAGE, STATE_CMD_READSTART,
STATE_CMD_READCACHESEQ | ACTION_CPY, STATE_DATAOUT,
STATE_READY}},
/* Large page devices continuous read page continue */
{OPT_LARGEPAGE, {STATE_CMD_READCACHESEQ | ACTION_CPY_NEXT, STATE_DATAOUT,
STATE_READY}},
/* Large page devices continuous read page end */
{OPT_LARGEPAGE, {STATE_CMD_READCACHEEND | ACTION_CPY_NEXT, STATE_DATAOUT,
STATE_READY}},
For now, we just return -EOPNOTSUPP when the core asks controller
drivers if they support the feature in order to prevent any further use
of these opcodes.
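For reference, the core-side probing this relies on looks roughly like
the sketch below (not part of the patch, and the exact helper the core
uses may differ): the core builds the operation it intends to issue
and submits it with check_only set, which for ->exec_op() controllers
goes through nand_check_op(). The cont_read_supported() name and the
operation layout are only illustrative:

static bool cont_read_supported(struct nand_chip *chip)
{
	u8 addrs[5] = { };
	struct nand_op_instr start_instrs[] = {
		NAND_OP_CMD(NAND_CMD_READ0, 0),
		NAND_OP_ADDR(5, addrs, 0),
		NAND_OP_CMD(NAND_CMD_READSTART, 0),
		NAND_OP_CMD(NAND_CMD_READCACHESEQ, 0),
	};
	struct nand_operation start_op = NAND_OPERATION(0, start_instrs);

	/* nand_check_op() calls ->exec_op() with check_only == true */
	return !nand_check_op(chip, &start_op);
}

With the hunk below in place, that probe fails with -EOPNOTSUPP on
nandsim, so the core never issues the 31h/3Fh opcodes to it.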
Note: This is a hack; ->exec_op() implementations are not supposed to
check command opcodes unless _really_ needed.
Fixes: 003fe4b9545b ("mtd: rawnand: Support for sequential cache reads")
Reported-by: Zhihao Cheng <chengzhihao1 at huawei.com>
Link: https://lore.kernel.org/linux-mtd/fd34fe55-7f4a-030d-8653-9bb9cf08410d@huawei.com/
Signed-off-by: Miquel Raynal <miquel.raynal at bootlin.com>
---
drivers/mtd/nand/raw/nandsim.c | 17 ++++++++++++++++-
1 file changed, 16 insertions(+), 1 deletion(-)
diff --git a/drivers/mtd/nand/raw/nandsim.c b/drivers/mtd/nand/raw/nandsim.c
index c21abf748948..179b28459b4b 100644
--- a/drivers/mtd/nand/raw/nandsim.c
+++ b/drivers/mtd/nand/raw/nandsim.c
@@ -2160,8 +2160,23 @@ static int ns_exec_op(struct nand_chip *chip, const struct nand_operation *op,
const struct nand_op_instr *instr = NULL;
struct nandsim *ns = nand_get_controller_data(chip);
- if (check_only)
+ if (check_only) {
+ /* The current implementation of nandsim needs to know the
+ * ongoing operation when performing the address cycles. This
+ * means it cannot distinguish between a regular read and a
+ * continuous read. Hence this hack, which manually refuses to
+ * support sequential cached operations.
+ */
+ for (op_id = 0; op_id < op->ninstrs; op_id++) {
+ instr = &op->instrs[op_id];
+ if (instr->type == NAND_OP_CMD_INSTR &&
+ (instr->ctx.cmd.opcode == NAND_CMD_READCACHEEND ||
+ instr->ctx.cmd.opcode == NAND_CMD_READCACHESEQ))
+ return -EOPNOTSUPP;
+ }
+
return 0;
+ }
ns->lines.ce = 1;
--
2.34.1