Can't find correct configuration

Zhihao Cheng chengzhihao1 at huawei.com
Tue Sep 5 19:49:47 PDT 2023


On 2023/9/6 10:42, Zhihao Cheng wrote:
> On 2023/9/5 22:45, Leon Pollak wrote:
> Hi
>> Hello, all.
>> I would be very thankful for help finding the correct
>> configuration for my NAND.
>> I have gone through a lot of mails in the lists, Google, and ChatGPT with
>> no success.
>> I have 256MB NAND and 248MB partition for UBIFS. I run:
>> mkfs.ubifs -v -r rootfs -F -m 2048 -e 126976 -c 1981 -o ./ubifs.img
>> ubinize -v -o rootfs.img -m 2048 -p 128KiB -s 512 -O 2048 ubinize.cfg
>> -----
>> and ubinize.cfg looks like:
>> [ubifs]
>> mode=ubi
>> image=./ubifs.img
>> vol_id=0
>> vol_size=240MiB
>> vol_type=dynamic
>> vol_name=ubi_rootfs
>> vol_alignment=1
>> vol_flags=autoresize
>> ------
>> The kernel was configured with CONFIG_MTD_UBI_BEB_LIMIT=20,
>> and here is the kernel log:
>> [    2.695659] ubi0: attaching mtd4
>> [    3.372343] ubi0: scanning is finished
>> [    3.383066] ubi0 warning: ubi_eba_init: cannot reserve enough PEBs
>> for bad PEB handling, reserved 4, need 40
>> [    3.404320] ubi0: volume 0 ("ubi_rootfs") re-sized from 1982 to 
>> 1982 LEBs
>> [    3.411667] ubi0: attached mtd4 (name "FS", size 248 MiB)
>> [    3.417095] ubi0: PEB size: 131072 bytes (128 KiB), LEB size: 
>> 126976 bytes
>> [    3.424006] ubi0: min./max. I/O unit sizes: 2048/2048, sub-page 
>> size 512
>> [    3.430740] ubi0: VID header offset: 2048 (aligned 2048), data 
>> offset: 4096
>> [    3.437734] ubi0: good PEBs: 1990, bad PEBs: 0, corrupted PEBs: 0
>> [    3.443860] ubi0: user volume: 1, internal volumes: 1, max. volumes
>> count: 128
>> [    3.451117] ubi0: max/mean erase counter: 1/0, WL threshold: 4096,
>> image sequence number: 1590570073
>> [    3.460295] ubi0: available PEBs: 0, total reserved PEBs: 1990,
>> PEBs reserved for bad PEB handling: 4
>> [    3.469575] ubi0: background thread "ubi_bgt0d" started, PID 131
>> [    4.648718] UBIFS (ubi0:0): background thread "ubifs_bgt0_0" 
>> started, PID 136
>> [    4.676640] UBIFS (ubi0:0): start fixing up free space
>> [   10.250099] UBIFS (ubi0:0): free space fixup complete
>> [   10.269046] UBIFS (ubi0:0): UBIFS: mounted UBI device 0, volume 0,
>> name "ubi_rootfs"
>> [   10.276830] UBIFS (ubi0:0): LEB size: 126976 bytes (124 KiB),
>> min./max. I/O unit sizes: 2048 bytes/2048 bytes
>> [   10.286794] UBIFS (ubi0:0): FS size: 250142720 bytes (238 MiB, 1970
>> LEBs), journal size 9023488 bytes (8 MiB, 72 LEBs)
>> [   10.297540] UBIFS (ubi0:0): reserved for root: 0 bytes (0 KiB)
>> [   10.303406] UBIFS (ubi0:0): media format: w4/r0 (latest is w5/r0),
>> UUID A14D586A-32E2-45D7-A07E-BDC472FDF31D, small LPT model
>>
>> My question is: why does this warning about reserved 4, needed 40
>> appear? How can I solve this?
>>
> 
> Solution: modify ubinize.cfg, changing vol_size=240MiB -> vol_size=235MiB.
> 
> Why does this warning about reserved 4, needed 40 appear?
> The count of PEBs reserved by UBI for bad PEB handling is:
> total_pebs(whole NAND chip) * CONFIG_MTD_UBI_BEB_LIMIT / 1024 = 2048 * 20 / 1024 = 40
> When fastmap is not enabled, the count of internal PEBs reserved by UBI 
> is: UBI_LAYOUT_VOLUME_EBS (2) + WL_RESERVED_PEBS (1) + 
> EBA_RESERVED_PEBS (1) = 4
> 
> The total count of PEBs for UBI is 1990 ("good PEBs: 1990" in the log), so 
> the max count of user volume PEBs is 1990 - 40 - 4 = 1946
> 
> In ubinize.cfg, vol_size=240MiB, so the count of user volume PEBs is (see 
> ubinize.c in mtd-utils): (vi_size + leb_size - 1) / leb_size
> vi_size = 240MiB = 240 * 1024 * 1024
> leb_size = 128K - 2K(ec) - 2K(vid) = 124 * 1024
> result = 1982
> 
> So there are only 1990 - 1982 - 4 = 4 PEBs left for bad PEB handling, 
> which is less than the 40 needed.
> 
> When we set vol_size=235MiB, the count of user volume PEBs is: (235 * 
> 1024 * 1024 + 124 * 1024 - 1) / (124 * 1024) = 1941
> Since 1990 - 4 - 40 = 1946 and the volume has vol_flags=autoresize, the 
> kernel should then print a message like 'volume 0 ("ubi_rootfs") 
> re-sized from 1941 to 1946 LEBs'.

Actually, 'vol_size' in ubinize.cfg means the size of the data on the 
volume (excluding the VID/EC headers), not the raw size the volume 
occupies on flash (including the VID/EC headers).
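
Put differently, since vol_size counts only data bytes (124 KiB per
128 KiB PEB), the largest vol_size that still leaves the required 40
PEBs for bad PEB handling can be estimated like this (again just a
sketch based on the numbers above):

-----
# Illustrative: largest vol_size (data bytes) that keeps the bad PEB reserve.
good_pebs        = 1990
internal_reserve = 4        # layout volume (2) + WL (1) + EBA (1)
beb_reserve      = 40       # 2048 * 20 / 1024
leb_size         = 126976   # 128 KiB PEB minus 2 KiB EC + 2 KiB VID headers

max_user_pebs = good_pebs - internal_reserve - beb_reserve   # 1946
max_vol_size  = max_user_pebs * leb_size                     # headers excluded

print(max_user_pebs, max_vol_size, max_vol_size / (1024 * 1024))
# 1946 247095296 235.6484375  -> so vol_size=235MiB fits, 240MiB does not
-----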



