NAND error retry does not work
ahgu
ahgu at ahgu.homeunix.com
Fri Aug 26 21:21:47 EDT 2005
In the case of a page write failure (write.c), it calls jffs2_reserve_space(c,
sizeof(*ri) + datalen, &flash_ofs, &dummy, alloc_mode);
to get a new block (a new flash_ofs), but it always returns the same address.
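To be concrete about what "always returns the same address" means: the retry
path below maps an offset to its erase block with flash_ofs / c->sector_size,
so the comparison I am effectively making is the one below (a sketch only,
same_eraseblock() is not a real JFFS2 helper):

    #include <stdint.h>

    /* Sketch: two flash offsets belong to the same erase block when they
     * map to the same block index, i.e. the same (offset / sector_size),
     * which is exactly how write.c looks up the struct jffs2_eraseblock
     * for a given flash_ofs. */
    static int same_eraseblock(uint32_t old_ofs, uint32_t new_ofs,
                               uint32_t sector_size)
    {
            return (old_ofs / sector_size) == (new_ofs / sector_size);
    }

With the offset that just failed and the offset handed back by the retry
reservation, this keeps coming out true here, i.e. the retry is pointed back
into the block that produced the error.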
My understanding of the logic is as follows: on a write failure, it should put
the whole block on the unused queue and get a new block. Not just a new page
within the same block? And copy the contents of the bad block to the new one?
Roughly what I expect is sketched below.
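None of the helper names in this sketch are real JFFS2 functions;
mark_block_unusable() is just a placeholder for however the erase block would
actually be refiled. Only jffs2_reserve_space() is real, with the signature
quoted above.

    /* Rough sketch of the recovery I expect on a NAND write error; this is
     * not real JFFS2 code.  mark_block_unusable() is a placeholder: the
     * real filesystem would presumably refile the erase block and let GC
     * migrate whatever valid nodes are still in it. */
    static int retry_in_new_block(struct jffs2_sb_info *c,
                                  struct jffs2_eraseblock *bad_jeb,
                                  uint32_t len, uint32_t *new_ofs)
    {
            uint32_t dummy;
            int ret;

            /* 1. Stop allocating from the block that produced the error. */
            mark_block_unusable(c, bad_jeb);        /* placeholder */

            /* 2. Reserve space again; it should now come from a different
             *    erase block than bad_jeb. */
            ret = jffs2_reserve_space(c, len, new_ofs, &dummy, ALLOC_NORMAL);
            if (ret)
                    return ret;

            /* 3. The caller rewrites the failed node at *new_ofs; the valid
             *    nodes still sitting in bad_jeb get copied out later by GC. */
            return 0;
    }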
This is with the latest jffs2 version. Is this a bug?
-Andrew
>         if (!retried && alloc_mode != ALLOC_NORETRY && (raw = jffs2_alloc_raw_node_ref())) {
>                 /* Try to reallocate space and retry */
>                 uint32_t dummy;
>                 struct jffs2_eraseblock *jeb = &c->blocks[flash_ofs / c->sector_size];
>
>                 retried = 1;
>
>                 D1(printk(KERN_DEBUG "Retrying failed write.\n"));
>
>                 jffs2_dbg_acct_sanity_check(c,jeb);
>                 jffs2_dbg_acct_paranoia_check(c, jeb);
>
>                 if (alloc_mode == ALLOC_GC) {
>                         ret = jffs2_reserve_space_gc(c, sizeof(*ri) + datalen, &flash_ofs, &dummy);
>                 } else {
>                         /* Locking pain */
>                         up(&f->sem);
>                         jffs2_complete_reservation(c);
>
>                         ret = jffs2_reserve_space(c, sizeof(*ri) + datalen, &flash_ofs, &dummy, alloc_mode);
>                         down(&f->sem);
>                 }
>
>                 if (!ret) {
>                         D1(printk(KERN_DEBUG "Allocated space at 0x%08x to retry failed write.\n", flash_ofs));
>
>                         jffs2_dbg_acct_sanity_check(c,jeb);
>                         jffs2_dbg_acct_paranoia_check(c, jeb);
>
>                         goto retry;
>                 }
>                 D1(printk(KERN_DEBUG "Failed to allocate space to retry failed write: %d!\n", ret));
>                 jffs2_free_raw_node_ref(raw);
>         }
>         /* Release the full_dnode which is now useless, and return */
>         jffs2_free_full_dnode(fn);
>         return ERR_PTR(ret?ret:-EIO);
> }
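If it helps, the simplest way I can think of to confirm the symptom is an
(untested) debug print right after the reservation in the hunk above, using
jeb and flash_ofs exactly as they already appear there:

    /* Untested debug aid for the retry path quoted above: jeb was computed
     * from the old flash_ofs before the new reservation, so comparing its
     * block index with the freshly reserved flash_ofs shows whether the
     * same erase block was handed back. */
    if (!ret) {
            uint32_t old_blk = jeb->offset / c->sector_size;
            uint32_t new_blk = flash_ofs / c->sector_size;

            printk(KERN_DEBUG "retry reserved 0x%08x (block %u), failed write was in block %u\n",
                   flash_ofs, new_blk, old_blk);
    }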