jffs2_scan_eraseblock() errors - truly harmless?

David Woodhouse dwmw2 at infradead.org
Thu Sep 12 17:28:25 EDT 2002

fgiasson at mediatrix.com said:
> But since the error message does apparently not disappear easily, what
> about the space used by the node with the bad CRC?  I looked at the
> code and at mount time it is added to the dirty space, but it is not
> marked as obsolete on the medium...  thus the gc will not collect it.

The GC _will_ collect it. Marking stuff obsolete on the medium is just an
optimisation to speed up the scan; what matters is whether we consider it
obsolete in our in-memory records. Which we do -- so it gets cleaned up when
we happen to GC the block containing the node in question.

>  You also mentioned that in old JFFS2 code those nodes with bad CRC
> are marked as obsolete when discovered at mount time, but where is
> this happening?  I looked at old code (as per 2.4.19-pre7) and the
> jffs2_scan_eraseblock() function does not seem to obsolete the nodes
> itself. Where is this obsoleting done?

Er, I can't see it. I could have sworn that we used to mark the offending
nodes obsolete on the medium when we saw the CRC fail, so we didn't check
the same CRC again and hence didn't whinge about it again. Then when the
accounting changed with the rotate_lists stuff I stopped it doing that
briefly, then changed it back again when I realised it was complaining about
the CRCs repeatedly. Maybe ask me again in the morning :)

> I am using CVS code as per July 3rd.  I know that more recent code
> should not complain with such messages at mount time, but unless
> absolutely necessary I'd rather not upgrade right now.  If I am right
> when I say that the GC will not collect the bad CRC node, how could I
> patch the code (without side effects!) to obsolete the bad crc nodes? 

It will GC them when it gets round to it. Leave it.
