dangerous NAND_BBT_SCANBYTE1AND6
Brian Norris
computersforpeace at gmail.com
Fri Apr 22 00:50:36 EDT 2011
Hi Ivan,
(FYI, please use my @gmail.com, not my @broadcom address.)
I can't say I know everything about the intentions and history of all
statements in various NAND flash data sheets, but I have read many of
them and will try to explain my view. Of course, I may be wrong.
On 4/21/2011 10:10 AM, Ivan Djelic wrote:
> Old small page NAND devices used to have their bad block marker in the 6th byte of
> the spare area of the first page.
Correct
> ST datasheet says that factory bad blocks will have _both_ bytes cleared
> (1st and 6th); I guess this was done to allow choosing which marker to check
> (but I may be wrong). Maybe to be compatible with large page marker location
> scheme (again, just guessing).
The actual statement is one of these two (pulled from various ST and
Numonyx sheets):
"Any block, where the 1st and 6th bytes or the 1st word in the spare
area of the 1st page, does not contain FFh, is a bad block."
"Any block, where the 1st and 6th bytes, in the spare area of the first
page, does not contain FFh is a bad block."
Strictly speaking, neither of these "sentences" uses correct grammar, as
the commas are placed arbitrarily. Most importantly, though, I don't
think they make clear the following:
1) Does the manufacturer guarantee that BOTH bytes are non-FFh?
2) Does the manufacturer only guarantee that the two bytes ("1st and
6th"), taken together, contain at least one non-FFh byte?
I understood it as the latter, and so decided the scan needed to check
both bytes (perhaps one byte was written successfully but not the
other). However, your argument for choice (1) ("this was done to allow
choosing which marker to check") makes just as much sense (or more) to me.
In trying to work out why I concluded choice (2) rather than (1), I
recall that some Hynix and Samsung parts explicitly declare that the
first OR second page may be used, in case the first page is bad. I may
have subconsciously applied this 1st/2nd page concept to the 1st/6th
bytes logic.
> My understanding of bad block markers is (please correct me if I am wrong):
> small page => check 6th byte of spare area of first page
> large page, non-ONFI => check first word of spare area of first page
> ONFI => see ONFI spec
Unfortunately, small page, large page, and ONFI are three
classifications that oversimplify bad block marker placement.
Some people (especially Samsung and Hynix, but even some Micron) got
creative. Some of their chips use:
1st or 2nd page
the last page
the 1st or last page
the last or (last - 2)th page
And of course, there's the controversial 1st/6th byte usage, which I'm
still not clear on. Some of these scanning patterns are rare, but they
do exist.
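To illustrate why those three classes aren't enough, here is a
hypothetical sketch of describing the per-vendor patterns as data
instead of hard-coding them. This is not the kernel's nand_bbt
interface; bbm_pattern, read_oob() and block_is_bad() are made up for
this mail, and read_oob() just stands in for whatever a driver uses to
read one page's spare area.

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

struct bbm_pattern {
	int pages[2];    /* page offsets to scan; negative counts from the block's end */
	int npages;
	int oob_offs[2]; /* OOB byte offsets to test in each scanned page */
	int noffs;
};

/* The patterns listed above, expressed as data (offsets illustrative): */
const struct bbm_pattern first_or_second = { { 0, 1 },   2, { 0 },    1 };
const struct bbm_pattern last_page_only  = { { -1 },     1, { 0 },    1 };
const struct bbm_pattern first_or_last   = { { 0, -1 },  2, { 0 },    1 };
const struct bbm_pattern last_or_last_m2 = { { -1, -3 }, 2, { 0 },    1 };
const struct bbm_pattern byte1_and_6     = { { 0 },      1, { 0, 5 }, 2 };

/* Assumed helper: reads the spare area of one page of a block. */
extern int read_oob(unsigned int block, int page_in_block,
		    uint8_t *oob, size_t len);

bool block_is_bad(unsigned int block, unsigned int pages_per_block,
		  const struct bbm_pattern *p)
{
	uint8_t oob[64];	/* big enough for the parts considered here */
	int i, j;

	for (i = 0; i < p->npages; i++) {
		int page = p->pages[i] >= 0 ?
			   p->pages[i] : (int)pages_per_block + p->pages[i];

		if (read_oob(block, page, oob, sizeof(oob)))
			return true;	/* unreadable OOB: treat as bad */

		for (j = 0; j < p->noffs; j++)
			if (oob[p->oob_offs[j]] != 0xff)
				return true;
	}
	return false;
}

The point is just that both the page(s) to scan and the byte offset(s)
to test vary per part, so that information really has to be carried per
chip rather than derived from page size alone.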
Sorry for any confusion, but I guess it's better late than never for
this sort of discussion...
Brian