[PATCH v3 0/6] UBI: add max_beb_per1024 parameter / ioctl
Richard Genoud
richard.genoud at gmail.com
Fri Aug 31 10:46:27 EDT 2012
2012/8/23 Richard Genoud <richard.genoud at gmail.com>:
> I could do some more tests with the final version.
I did some tests:
- new ubiattach/detach with an old kernel
- old ubiattach/detach/mkvol/format/ubifs mount with a new kernel
- new ubi* tools with a new kernel
- command line ubi.mtd=... (example invocations below)
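For reference, here is roughly how the new knob is passed; the option name and
the third ubi.mtd field are as proposed in this series and the matching
mtd-utils patches, so treat the exact spelling as an assumption:
ubiattach -m 2 --max-beb-per1024 20
ubi.mtd=2,0,20
The value means "expect at most 20 bad eraseblocks per 1024 PEBs of this MTD
device" (so the number of PEBs actually reserved scales with the device size),
and the 0 in the second ubi.mtd field keeps the default VID header offset.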
I ran into the case where I asked for 20 PEBs to be reserved for bad PEB handling:
UBI warning: print_rsvd_warning: cannot reserve enough PEBs for bad
PEB handling, reserved 10, need 20
and I did a ubirmvol on a small volume:
# ubirmvol /dev/ubi0 -N toto2
UBI: reserved more 10 PEBs for bad PEB handling
=> that's great, no need to detach and then re-attach
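A quick way to watch that reserve (the sysfs attribute predates this series,
but take the exact path as an assumption on my side):
# cat /sys/class/ubi/ubi0/reserved_for_bad
In the case above it should go from 10 to 20 once the volume's PEBs are freed.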
I also tested some boundary values, and they all seem fine.
BUT I ran into a bug. I don't know whether it comes from my kernel (I've got
quite a lot of patches on top of 3.6 to get my board running), but it's
kind of nasty.
It's not related to this patch series: I could trigger it with a plain
3.6-rc1 kernel (plus my board patches).
I get memory corruption when I do:
flash_erase /dev/mtd2 0 160 (it's a whole mtd partition)
ubiattach -m 2
After that, I sometimes get an oops right away, but sometimes not; and if I
then run something (top, for example), I get an oops.
So memory is being corrupted somewhere, but I couldn't find out
exactly where.
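(If anyone wants to chase it, booting with slub_debug=FZP (sanity checks,
red zoning, poisoning) might make the corruption trip closer to the culprit;
just a suggestion, untried here.)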
But I found that if I comment out these lines:
diff --git a/drivers/mtd/ubi/vtbl.c b/drivers/mtd/ubi/vtbl.c
--- a/drivers/mtd/ubi/vtbl.c
+++ b/drivers/mtd/ubi/vtbl.c
@@ -346,8 +346,8 @@ retry:
 	 */
 	err = ubi_scan_add_used(ubi, si, new_seb->pnum, new_seb->ec,
 				vid_hdr, 0);
-	kfree(new_seb);
-	ubi_free_vid_hdr(ubi, vid_hdr);
+//	kfree(new_seb);
+//	ubi_free_vid_hdr(ubi, vid_hdr);
 	return err;
 
 write_error:
I can't trigger an oops any more.
As I can't test with an older kernel, I reverted some commits that
touched drivers/mtd/ubi: all commits from 3.6-rc1 back to
4415626732defb5a4567a0a757c7c5baae7ca846 ("UBI: amend commentaries WRT
dtype"), and I can still trigger the oops.
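For anyone who wants to redo that revert exercise, something along these
lines should do it (untested as written):
git log --oneline 4415626732defb5a4567a0a757c7c5baae7ca846..v3.6-rc1 -- drivers/mtd/ubi
git revert --no-edit $(git rev-list 4415626732defb5a4567a0a757c7c5baae7ca846..v3.6-rc1 -- drivers/mtd/ubi)
git rev-list gives the commits newest-first, which is the order git revert
needs here.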
Can someone reproduce this?
Best regards,
Richard