[RESEND]: Kernel 4.14: UBIFS+SQUASHFS: Device fails to boot after flashing rootfs volume
Phillip Lougher
phillip at squashfs.org.uk
Mon May 24 00:07:26 PDT 2021
> On 24/05/2021 07:12 Pintu Agarwal <pintu.ping at gmail.com> wrote:
>
>
> On Sun, 23 May 2021 at 23:01, Sean Nyekjaer <sean at geanix.com> wrote:
> >
>
> > > I have also tried that and it seems the checksums match exactly.
> > > $ md5sum system.squash
> > > d301016207cc5782d1634259a5c597f9 ./system.squash
> > >
> > > On the device:
> > > /data/pintu # dd if=/dev/ubi0_0 of=squash_rootfs.img bs=1K count=48476
> > > 48476+0 records in
> > > 48476+0 records out
> > > 49639424 bytes (47.3MB) copied, 26.406276 seconds, 1.8MB/s
> > > [12001.375255] dd (2392) used greatest stack depth: 4208 bytes left
> > >
> > > /data/pintu # md5sum squash_rootfs.img
> > > d301016207cc5782d1634259a5c597f9 squash_rootfs.img
> > >
> > > So, it seems there is no problem with either the original image
> > > (unsquashfs) or the checksum.
> > >
> > > Then what else could be the issue?
> > > If you have any further inputs, please share your thoughts.
> > >
> > > This is the kernel command line we are using:
> > > [ 0.000000] Kernel command line: ro rootwait
> > > console=ttyMSM0,115200,n8 androidboot.hardware=qcom
> > > msm_rtb.filter=0x237 androidboot.console=ttyMSM0
> > > lpm_levels.sleep_disabled=1 firmware_class.path=/lib/firmware/updates
> > > service_locator.enable=1 net.ifnames=0 rootfstype=squashfs
> > > root=/dev/ubiblock0_0 ubi.mtd=30 ubi.block=0,0
> > >
> > > These are a few more points to note:
> > > a) With squashfs we are getting the below error:
> > > [ 4.603156] squashfs: SQUASHFS error: unable to read xattr id index table
> > > [...]
> > > [ 4.980519] Kernel panic - not syncing: VFS: Unable to mount root
> > > fs on unknown-block(254,0)
> > >
> > > b) With ubifs (without squashfs) we are getting the below error:
> > > [ 4.712458] UBIFS (ubi0:0): UBIFS: mounted UBI device 0, volume 0,
> > > name "rootfs", R/O mode
> > > [...]
> > > UBIFS error (ubi0:0 pid 1): ubifs_read_node: bad node type (255 but expected 9)
> > > UBIFS error (ubi0:0 pid 1): ubifs_read_node: bad node at LEB
> > > 336:250560, LEB mapping status 1
> > > Not a node, first 24 bytes:
> > > 00000000: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff
> > >
> > > c) While flashing the "usrfs" volume (ubi0_1) there is no issue and
> > > the device boots successfully.
> > >
> > > d) This issue happens only after flashing the rootfs volume (ubi0_0)
> > > and rebooting the device.
> > >
> > > e) We are using the "uefi" and fastboot mechanisms to flash the volumes.
> > Are you writing the squashfs into the ubi block device with uefi/fastboot?
> > >
> > > f) Next I wanted to check the read-only UBI volume flashing mechanism
> > > within the kernel itself.
> > > Is there a way to try a read-only "rootfs" (squashfs type) ubi volume
> > > flashing mechanism from the Linux command prompt?
> > > Or, what are the other ways to verify UBI volume flashing in Linux?
> > >
> > > g) I wanted to root-cause whether there is any problem in our UBI
> > > flashing logic, something missing on the Linux/kernel side (squashfs
> > > or ubifs), or something in the way we configure the system.
>
> >
> > Have you had it working? Or is this a new project?
> > If you had it working, I would start bisecting...
> >
>
> No, this is still experimental.
> Currently we are only able to write to the ubi volumes, but after that
> the device does not boot (after a rootfs volume update).
> However, with "userdata" it is working fine.
>
> I have a few more questions to clarify.
>
> a) Is there a way in the kernel to update a ubi volume while the
> device is running?
> I tried "ubiupdatevol" but it does not seem to work.
> I guess it is only for updating an empty volume?
> Or maybe I don't know how to use it to update the live "rootfs" volume.
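> For example, I expected something like the below to work (this assumes
> the mtd-utils "ubiupdatevol" tool, and I understand it may refuse a
> volume that is currently mounted or otherwise in use):
>
> # ubiupdatevol /dev/ubi0_0 system.squash
>
> But I am not sure whether it can be used on the live "rootfs" volume at all.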
>
> b) How can we verify the volume checksum as soon as we finish writing
> the content, given that the device is not booting?
> Is there a way to verify the rootfs checksum at the bootloader or
> kernel level before mounting?
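> For example, would it be reasonable to read back exactly the image
> size and compare, like the dd/md5sum check above, but against the
> block device the kernel actually mounts (assuming /dev/ubiblock0_0
> exists at that point)?
>
> # dd if=/dev/ubiblock0_0 of=readback.img bs=1024 count=48476
> # md5sum readback.img system.squash
>
> (48476 KiB = 49639424 bytes, the exact size of the squashfs image.)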
>
> c) We are configuring the ubi volumes in this way. Is this fine?
> [rootfs_volume]
> mode=ubi
> image=.<path>/system.squash
> vol_id=0
> vol_type=dynamic
> vol_name=rootfs
> vol_size=62980096 ==> 60.0625 MiB
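> (For reference: 62980096 / 253952 = 248, so the volume size is an exact
> multiple of the LEB size.)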
>
> A few more pieces of info:
> ----------------------
> Our actual squashfs image size:
> $ ls -l ./system.squash
> -rw-r--r-- 1 pintu users 49639424 ./system.squash
>
> after erase_volume: page-size: 4096, block-size-bytes: 262144,
> vtbl-count: 2, used-blk: 38, leb-size: 253952, leb-blk-size: 62
> Thus:
> 49639424 / 253952 ≈ 195.47 blocks
>
> This then rounds up to 196 blocks, which does not match exactly.
> Is there any issue with this?
>
> If you have any suggestions for debugging this further, please help us...
>
>
> Thanks,
> Pintu
Three perhaps obvious questions here:
1. As an experimental system, are you using a vanilla (unmodified)
Linux kernel, or have you made modifications? If so, how is it
modified?
2. What is the difference between "rootfs" and "userdata"?
Have you written exactly the same Squashfs image to "rootfs"
and "userdata", and has it worked with "userdata" but not
with "rootfs"?
So far it is unclear whether "userdata" has only worked because
you've written different images/data to it.
In other words, tell us exactly what you're writing to "userdata"
and what you're writing to "rootfs". The difference or non-difference
may be significant.
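For example (re-using the dd read-back approach from earlier in this
thread, and assuming both volumes are large enough to hold it), you
could write the identical image to both volumes and compare what
comes back:

# dd if=/dev/ubi0_0 bs=1024 count=48476 | md5sum
# dd if=/dev/ubi0_1 bs=1024 count=48476 | md5sum

If the two checksums differ, the flashing path rather than Squashfs
is the prime suspect.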
3. The rounding up to a whole 196 blocks should not be a problem.
The problem is, obviously, if it is rounding down to 195 blocks,
where the tail end of the Squashfs image will be lost.
Remember, this is exactly what the Squashfs error is saying: the image
has been truncated.
You could try adding a lot of padding to the end of the Squashfs image
(Squashfs won't care), so it is more than the effective block size,
and then writing that, to prevent any rounding down or truncation.
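Something like this (untested; assuming GNU coreutils truncate and the
253952-byte LEB size from your log) would pad the image up to the next
LEB boundary before flashing:

$ truncate -s %253952 system.squash
$ ls -l system.squash    # now 49774592 bytes = 196 * 253952

Squashfs ignores anything past the end of the filesystem, so the zero
padding is harmless.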
Phillip