[UBIFS] Filesystem capacity

Artem Bityutskiy dedekind at infradead.org
Tue Feb 17 04:31:23 EST 2009


On Tue, 2009-02-17 at 09:16 +0000, Jamie Lokier wrote:
> Artem Bityutskiy wrote:
> > OK, then indeed Adrian wrote exactly the right thing. You have huge
> > 4144-byte uncompressible nodes. You fit 3 of them to each eraseblock,
> > and you waste 3440 bytes in each eraseblock. JFFS2 would jam in a
> > little more data, because it can split big blocks into parts.
> > 
> > In real life you will likely have compressible data, and many small
> > files, so you will have small data nodes and many inode nodes, which
> > are 160 bytes in size, so you will fit more.
> 
> Also, in real life, JFFS2 will behave badly when the filesystem is
> completely full.  Its garbage collector can use 100% CPU for minutes
> even on a small NOR flash, if the filesystem fills up too much.  It
> happens because it's difficult to reorganise the data when there's no
> spare room.
> 
> I wonder if the "lost" space in UBIFS is helpful to prevent long
> garbage collection cycles?

In JFFS2, GC picks eraseblocks semi-randomly, which means it may:
1. Pick a full EB to GC; this is how JFFS2 does wear-levelling.
2. Pick a non-optimal EB to GC, which means that instead of picking the
EB with the largest amount of dirty space, JFFS2 may pick an EB with
much less dirty space.
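The occasional wear-levelling pick can be sketched in a few lines. This is only a toy model, not the real jffs2_find_gc_block() logic; the 1-in-100 ratio and the function name are illustrative assumptions:

```c
#include <assert.h>

/* Toy model of JFFS2's GC block selection: most of the time GC takes
 * a block from a dirty list, but occasionally (here, assumed 1 time
 * in 100) it deliberately takes a full, clean block so that static
 * data also gets moved around for wear-levelling. */
enum gc_choice { PICK_DIRTY, PICK_CLEAN };

static enum gc_choice jffs2_style_pick(unsigned rnd)
{
    /* rnd stands in for a pseudo-random source such as jiffies */
    return (rnd % 100 == 0) ? PICK_CLEAN : PICK_DIRTY;
}
```

Because the pick is driven by a random source rather than by per-block dirtiness, the block chosen is not necessarily the best GC candidate.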

In UBIFS, we always try to pick the most optimal LEB, i.e., the one
with the most dirty space. We maintain hash tables that keep track of
such LEBs. This makes UBIFS GC work faster.

And yes, in a way the wasted space helps. Also, UBIFS GC will not even
try to GC an LEB if the amount of dirty+free space in it is less than
the NAND page size.
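Both points together can be sketched as a selection loop over per-LEB accounting. The structure and names below are illustrative, loosely modelled on UBIFS LEB properties, not the real kernel code:

```c
#include <assert.h>
#include <stddef.h>

#define NAND_PAGE_SIZE 2048  /* assumed page size for the sketch */

/* Hypothetical per-LEB accounting (illustrative, not the real
 * UBIFS lprops structure). */
struct leb_props {
    int lnum;  /* logical eraseblock number */
    int dirty; /* bytes of obsolete data */
    int avail; /* free bytes, never written */
};

/* Pick the LEB with the most dirty space, skipping any LEB whose
 * reclaimable (dirty + free) space is below one NAND page, since
 * GC-ing it could not recover even a page worth of room.
 * Returns the LEB number, or -1 if nothing is worth collecting. */
static int pick_gc_leb(const struct leb_props *lp, size_t n)
{
    int best = -1, best_dirty = -1;

    for (size_t i = 0; i < n; i++) {
        if (lp[i].dirty + lp[i].avail < NAND_PAGE_SIZE)
            continue; /* not worth garbage-collecting */
        if (lp[i].dirty > best_dirty) {
            best_dirty = lp[i].dirty;
            best = lp[i].lnum;
        }
    }
    return best;
}
```

In UBIFS the candidates are found via indexed lookups rather than a linear scan, which is what makes its GC cheaper than JFFS2's.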

However, UBIFS could be optimized further: one could teach GC to pack
small nodes into the wasted areas at the ends of eraseblocks. We simply
have not done this yet. This is one more way to improve UBIFS.

-- 
Best regards,
Artem Bityutskiy (Битюцкий Артём)



