Understanding UBIFS flash overhead
Artem Bityutskiy
dedekind at infradead.org
Mon Oct 13 02:48:47 EDT 2008
On Fri, 2008-10-10 at 14:00 -0700, Deepak Saxena wrote:
> I'm working on getting UBIFS going on the OLPC XO and am trying to
> understand what I am seeing in regards to file system size.
Nice. I think UBIFS is quite suitable for OLPC and should improve its
boot time and, presumably, performance.
> I have partitioned the 1GiB mtd device into a 32MiB JFFS boot partition
> for OFW, a 128KiB partition to hold the Redboot partition table,
> and the remainder (991.625 MiB) for use by UBI.
OK.
> The NAND device has 128KiB EBs with 2KiB pages, and we are running w/o
> sub-page writes.
Right. Last time I looked at this I found out that your NAND supports
sub-pages, but your Marvell controller does not, unfortunately.
> Plugging this into the overhead calculation, we get:
>
> SP  PEB size        128 KiB
> SL  LEB size        128 KiB - 2 * 2 KiB = 124 KiB
> P   Total PEBs      991.625 MiB / 128 KiB = 7933
> B   Reserved PEBs   79 (1%)
> O   Overhead        SP - SL = 4 KiB
>
>
> UBI overhead = (B + 4) * SP + O * (P - B - 4)
>              = (79 + 4) * 128 KiB + 4 KiB * (7933 - 79 - 4)
>              = 42024 KiB
>              = 328.3125 PEBs (round up to 329)
>
> This leaves us with 7604 PEBs or 973312KiB available for user data.
OK.
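Just for reference, here is the same calculation as a small C program,
so it is easy to re-run for a different geometry. This is only a sketch
with the constants from your mail hard-coded, not values read from the
device:

#include <stdio.h>

int main(void)
{
        /* Geometry from the mail above; all sizes in KiB */
        const long sp = 128;             /* SP: PEB size */
        const long sl = 128 - 2 * 2;     /* SL: LEB size (2 pages for UBI headers) */
        const long p  = 7933;            /* P: total PEBs */
        const long b  = 79;              /* B: PEBs reserved for bad PEB handling (1%) */
        const long o  = sp - sl;         /* O: per-PEB overhead */

        /*
         * (B + 4) * SP: PEBs lost entirely - the bad PEB reserve, the
         * 2 volume table PEBs, and 2 PEBs UBI keeps for wear-levelling
         * and atomic LEB change.
         * O * (P - B - 4): 4 KiB of UBI headers in every remaining PEB.
         */
        long kib  = (b + 4) * sp + o * (p - b - 4);
        long pebs = (kib + sp - 1) / sp; /* round up to whole PEBs */

        printf("UBI overhead: %ld KiB (%ld PEBs)\n", kib, pebs);
        printf("available: %ld PEBs (%ld KiB)\n", p - pebs, (p - pebs) * sp);
        return 0;
}

This prints 42024 KiB (329 PEBs) of overhead and 7604 PEBs (973312 KiB)
available, which matches your numbers.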
> At boot up, I see:
>
> UBIFS: file system size: 995237888 bytes (971912 KiB, 949 MiB,
> UBIFS: journal size: 9023488 bytes (8812 KiB, 8 MiB, 72 LEBs)
Note, this FS size (949 MiB) is the size which UBIFS will use for its
"main area". The main area holds all the FS data, the index, and the
journal. So this does not mean you'll have 949 MiB for your file data;
you'll have somewhat less.
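To put rough numbers on that (using the 124 KiB LEB size from above):

        971912 KiB / 124 KiB per LEB = 7838 LEBs in the main area

The index and the journal consume part of those LEBs, and UBIFS also
keeps a handful of LEBs outside the main area for its own metadata
(superblock, master area, log, LPT), which is why the space left for
actual data is less than 949 MiB.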
> 'df' returns:
>
> Filesystem Size Used Avail Use% Mounted on
> mtd0 822M 242M 581M 30% /
>
> I expect some overhead, but I'm really wondering where over 100MiB of
> space went!
>
> This is 2.6.25 with UBI 2.6.25 tree merged in. Does that tree have
> any bugfixes from 2.6.26+ backported?
I have several comments.
1. We've improved df reporting, but we had not updated the back-port
trees for a long time, so the improvements were not there. I've just
updated all the back-ports and they contain all the recent patches we
consider stable; they are basically identical to ubifs-2.6.git now.
Please update and things will become better. However, do not expect df
to give you precise information anyway. See below.
2. Let me first describe the JFFS2 experience we have had. In statfs()
(and thus, df) JFFS2 reports physical flash space. There are some
challenges in calculating this space, and JFFS2 often lies in df. For
example, it may say it has 20MiB free, but if you start writing to it,
you may only be able to write a 16MiB file. Our user-space developers
complained about this several times. So in UBIFS we decided to report
the _worst-case_ flash space, because we thought it is better to report
less and be honest than to promise space which may not be there.
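Incidentally, if you want to look at the raw numbers df derives its
output from, a trivial statfs() call prints them. Just a sketch; pass
it your mount point:

#include <stdio.h>
#include <sys/vfs.h>    /* statfs(2) */

int main(int argc, char **argv)
{
        struct statfs s;
        const char *path = argc > 1 ? argv[1] : "/";

        if (statfs(path, &s) != 0) {
                perror("statfs");
                return 1;
        }

        /* These are the fields df bases its Size/Avail columns on */
        printf("size:  %llu KiB\n",
               (unsigned long long)s.f_blocks * s.f_bsize / 1024);
        printf("free:  %llu KiB\n",
               (unsigned long long)s.f_bfree * s.f_bsize / 1024);
        printf("avail: %llu KiB\n",
               (unsigned long long)s.f_bavail * s.f_bsize / 1024);
        return 0;
}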
Please read this for more information about the UBIFS flash space
prediction challenges:
http://www.linux-mtd.infradead.org/doc/ubifs.html#L_spaceacc
It should shed some light.
In short, let me conclude with a few items:

* In most cases UBIFS reports _less_ space than it really has. This is
  because it reports worst-case figures, and the worst-case scenario
  happens very rarely. Just try to write a file and see (there is a
  sketch of such a test after this list).
* It is very difficult to report precise flash space. This was already
  an issue in JFFS2, and it is even more of an issue in UBIFS because
  of write-back.
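To illustrate the "just try to write a file" point, here is a sketch
which writes until the file system returns ENOSPC and reports how much
actually fitted. The /mnt/ubifs path is only an example:

#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
        static char buf[1024 * 1024];   /* 1 MiB of zeroes */
        long long total = 0;
        ssize_t n;
        int fd = open("/mnt/ubifs/fill", O_WRONLY | O_CREAT | O_TRUNC, 0600);

        if (fd < 0) {
                perror("open");
                return 1;
        }
        /* Keep writing until the file system refuses */
        while ((n = write(fd, buf, sizeof(buf))) > 0)
                total += n;
        if (n < 0 && errno != ENOSPC)
                perror("write");
        fsync(fd);
        close(fd);
        unlink("/mnt/ubifs/fill");      /* clean up the test file */
        printf("actually wrote %lld KiB\n", total / 1024);
        return 0;
}

Compare what it prints with what df promised; on UBIFS you will usually
be able to write more than df said was available.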
--
Best regards,
Artem Bityutskiy (Битюцкий Артём)