UBIRENAME failure modes

Richard Weinberger richard.weinberger at gmail.com
Thu Oct 29 06:23:01 EDT 2020


Maurice,

On Tue, Oct 27, 2020 at 10:59 PM Maurice Smulders
<maurice.smulders at genevatech.net> wrote:
>
> I know that UBI is built for resilience, but I have a simple question.
> This is inside an unserviceable device once deployed, so we have to be
> 99.99999% sure that it is recoverable in a SHTF case.
>
> volume layout:
> - rootfs_0 and rootfs_1: UBIFS
> - linux_0 and linux_1 (raw U-Boot-loadable Linux kernel, no filesystem)
>
> When loading a firmware update, the script creates rootfs_new and
> linux_new volumes, and once the update is complete and the code is
> verified to be correct, a ubirename is executed. The rootfs
> installation is done in the alternate volume, so the currently
> running code is not affected.
>
> + time ubirename /dev/ubi0 linux_1 linux_old_1 rootfs_1 rootfs_old_1
> linux_new linux_1 rootfs_new rootfs_1
> real    0m 2.32s
> user    0m 0.00s
> sys     0m 0.69s
>
> How big is the critical window of this rename? 0.69s or (way) less? If
> power is lost during this very short 0.69s, what could happen? Is
> there any chance of an undefined state, or would either the old or new
> config be active?

Volume rename is an atomic operation: either all volumes are renamed, or
none are. UBI uses its atomic LEB change feature for this: it writes an
updated volume table and atomically exchanges it with the old version.
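For reference, ubirename hands the kernel all renames in a single
UBI_IOCRNVOL ioctl, which is why the operation is all-or-nothing. Below
is a minimal user-space sketch of building that request; the struct
layout and ioctl encoding mirror <mtd/ubi-user.h> to the best of my
knowledge, and the vol_ids in the example are hypothetical -- in a real
script you would resolve volume names to ids first (e.g. via
/sys/class/ubi/ubi0_N/name):

```python
# Sketch of the UBI_IOCRNVOL request that ubirename issues.
# Struct layout assumed to match struct ubi_rnvol_req in
# <mtd/ubi-user.h>; verify against your kernel headers before use.
import os
import struct

UBI_MAX_RNVOL = 32           # max renames in one atomic request
UBI_MAX_VOLUME_NAME = 127
_ENT_FMT = "<ih2x128s"       # vol_id, name_len, padding2[2], name[128]
_HDR_FMT = "<i12x"           # count, padding1[12]

def _IOW(magic, nr, size):
    # Linux _IOW() encoding: direction | size | magic | number
    return (1 << 30) | ((size & 0x3FFF) << 16) | (ord(magic) << 8) | nr

UBI_IOCRNVOL = _IOW('o', 3,
                    struct.calcsize(_HDR_FMT)
                    + UBI_MAX_RNVOL * struct.calcsize(_ENT_FMT))

def pack_rnvol_req(renames):
    """Pack a struct ubi_rnvol_req from (vol_id, new_name) pairs."""
    if not 0 < len(renames) <= UBI_MAX_RNVOL:
        raise ValueError("1..%d renames per request" % UBI_MAX_RNVOL)
    buf = struct.pack(_HDR_FMT, len(renames))
    for vol_id, new_name in renames:
        name = new_name.encode()
        if len(name) > UBI_MAX_VOLUME_NAME:
            raise ValueError("volume name too long")
        buf += struct.pack(_ENT_FMT, vol_id, len(name), name)
    # Unused entries still occupy space in the fixed-size struct.
    buf += bytes(struct.calcsize(_ENT_FMT)) * (UBI_MAX_RNVOL - len(renames))
    return buf

if __name__ == "__main__" and os.path.exists("/dev/ubi0"):
    import fcntl
    # vol_ids 1,3,4,5 are hypothetical placeholders for the four
    # volumes being swapped in the two-pair rename from the question.
    req = pack_rnvol_req([(1, "linux_old_1"), (3, "rootfs_old_1"),
                          (4, "linux_1"), (5, "rootfs_1")])
    with open("/dev/ubi0", "wb") as dev:
        fcntl.ioctl(dev, UBI_IOCRNVOL, req)
```

Because all four renames travel in one request, the kernel commits them
with a single volume-table update; there is no intermediate state a
power cut could expose.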

So there should be no critical window; if you find one, you've found a bug. :-)

-- 
Thanks,
//richard



More information about the linux-mtd mailing list