Bad Blocks On JFFS2/NAND

Michael Moedt xemc at yahoo.com
Tue Oct 19 17:45:18 EDT 2004


--- Thomas Gleixner <tglx at linutronix.de> wrote:
> On Tue, 2004-10-19 at 13:22, Simon Haynes wrote:
> > I have experienced a problem in which a JFFS2 filesystem on NAND
> > became full. 
> > This is a root file system and constant writes to a logfile
> > filled the filesystem. On investigation it was found that the
> > NAND device now had hundreds of bad blocks.
> > 
> > I started to investigate this and found that JFFS2 was announcing
> > 
> > Newly-erased block contained word 0x1985e002 at offset 0x020f7e00
> > 
> > Messages which result in my mtd/jffs2 code marking the block bad.
> > What I find strange is that a subsequent scan lists the new block
> > at a different 16k offset when the device erasesize is 16k, in
> > this case 0x020f0000.
> > Is that because my device is 128Mb and JFFS2 is using this
> > 'virtual erase size' of 32k ?
> 
> Yes. The bad block code scans/marks physical blocks, and JFFS2
> operates on virtual ones if the device size is big enough.
> 
> > I have observed this now on several different NAND devices and it
> > seems to be more prominent while performing small writes.
> >
> > I am currently trying to work out if the erase is not completing,
> > or this is the wrong block or something else. 
> 
> Hmm, are you using the Ready/Busy pin or the timeout?
> 
> tglx
> 

Hi guys.  This topic has given me a little bit of concern.  Could you
try to answer a few questions for me?

1. Do you know what usually causes the "Newly-erased block contained
word ..." error?  Is it caused by an interrupted (or otherwise failed)
erase?  Would a power failure cause this?

2. Would this cause good blocks to be incorrectly [and permanently]
marked as bad?


I think I may have seen something similar on my system.  I'm
considering writing a test to see if this is a problem for me, but
I'd like to learn more about this also.

Thanks,
Mike
