JFFS2 vs. UBIFS compression

Richard Weinberger richard.weinberger at gmail.com
Sun Aug 23 11:03:31 PDT 2015


On Fri, Aug 21, 2015 at 11:00 AM, Ricard Wanderlof
<ricard.wanderlof at axis.com> wrote:
>
> I came across something odd that I wasn't really expecting the other day.
>
> On a JFFS2 file system, we have a file that is 12.25 MB in size. When
> written to an 8 MB partition, df reports that it occupies 5.9 MB. Writing
> a second copy of the file fails because the file system is full. Fair
> enough.
>
> On a similar UBIFS system (however in this case with a volume size of 32
> MB), the same file is reported by df to have occupied 7.9 MB. Writing
> multiple copies of the same file confirms that we can fit slightly more
> than 4 copies of the same file on the file system (32 MB / 7.9 MB yields
> 4.05), so 7.9 MB seems about right.
>
> Now I fully understand that getting df to report valid figures for
> compressed file systems is guesswork at best, but don't JFFS2 and UBIFS
> utilize the same compression algorithms? Consequently, the space used by
> especially large files (where the overhead is small) should be essentially
> the same for both file systems? If anything, one would expect that UBIFS,
> being newer, would be better at compression than JFFS2.
>
> So what are we seeing here, is UBIFS more conservative in reporting disk
> usage, or is JFFS2 really better than UBIFS at file compression?

Both support zlib and lzo. Did you set up UBIFS and JFFS2 with the same
compression method?
Also keep in mind to run "sync" before using "df" on UBIFS.
Otherwise the data may not have been written to flash yet and df will
report the uncompressed size.
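For instance, something along these lines reproduces what df does after a
sync (untested sketch, the mount point "/mnt/ubifs" is just a placeholder
for wherever your volume is mounted):

/* Flush dirty data, then query filesystem usage the same way df does. */
#include <stdio.h>
#include <unistd.h>       /* sync() */
#include <sys/statvfs.h>  /* statvfs() */

int main(void)
{
    struct statvfs sv;

    sync();  /* write back dirty data so UBIFS reports the on-flash (compressed) size */

    if (statvfs("/mnt/ubifs", &sv) != 0) {
        perror("statvfs");
        return 1;
    }

    unsigned long long total = (unsigned long long)sv.f_blocks * sv.f_frsize;
    unsigned long long avail = (unsigned long long)sv.f_bavail * sv.f_frsize;

    printf("total: %llu bytes, avail: %llu bytes, used: %llu bytes\n",
           total, avail, total - avail);
    return 0;
}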

HTH

-- 
Thanks,
//richard
