Jffs2 and big file = very slow jffs2_garbage_collect_pass

Jamie Lokier jamie at shareable.org
Fri Jan 18 19:23:02 EST 2008


Jörn Engel wrote:
> If you want to make GC go berserk, here's a simple recipe:
> 1. Fill filesystem 100%.
> 2. Randomly replace single blocks.
> 
> There are two ways to solve this problem:
> 1. Reserve some amount of free space for GC performance.
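
(That recipe is easy enough to script, by the way.  A rough, untested
sketch of the sort of thing I mean; the mount point, file name and
block size are made up:)

#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

#define BLK 4096

int main(void)
{
    static char buf[BLK];
    off_t nblocks = 0;
    int fd;

    fd = open("/mnt/jffs2/fill", O_RDWR | O_CREAT, 0644);
    if (fd < 0) {
        perror("open");
        return 1;
    }
    memset(buf, 0xA5, sizeof(buf));

    /* 1. Fill the filesystem 100%: write until it won't take more. */
    while (write(fd, buf, BLK) == BLK)
        nblocks++;
    if (nblocks == 0)
        return 1;

    /* 2. Randomly replace single blocks, forever. */
    for (;;) {
        off_t off = (off_t)(rand() % nblocks) * BLK;
        pwrite(fd, buf, BLK, off);
        fsync(fd);   /* push each replacement out to flash */
    }
}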

The real difficulty is that it's not clear how much to reserve for
_reliable_ performance.  We're left guessing from experience, which
gives only limited confidence.  The 5 blocks suggested in the JFFS2
docs seemed promising, but didn't work out in practice.  Perhaps 5
blocks is enough, but only if you count all the potential metadata
and misalignment overhead when working out how much free "file" data
that reservation translates to.  Really, some of us just want JFFS2
to return -ENOSPC at _some_ sensible, deterministic point before the
GC can start behaving peculiarly, rather than trying to squeeze as
much as possible onto the partition.
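
For what it's worth, that sort of cut-off can at least be approximated
from userspace.  A sketch, where the erase-block size and the 5-block
reserve are just the numbers from this thread, not anything JFFS2
promises:

#include <errno.h>
#include <sys/statvfs.h>

#define ERASE_BLOCK    (64 * 1024)  /* assumed: 64KiB erase blocks */
#define RESERVE_BLOCKS 5            /* the figure from the docs */

/*
 * Returns 1 if there is still room for file data above the reserve,
 * 0 if we should pretend the filesystem is already full, <0 on error.
 */
static int room_above_reserve(const char *mntpoint)
{
    struct statvfs st;
    unsigned long long free_bytes;

    if (statvfs(mntpoint, &st) != 0)
        return -errno;

    /* f_bavail is in units of the fragment size f_frsize. */
    free_bytes = (unsigned long long)st.f_frsize * st.f_bavail;

    return free_bytes > (unsigned long long)ERASE_BLOCK * RESERVE_BLOCKS;
}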

> 2. Write in some non-random fashion.
> 
> Solution 2 works even better if the filesystem actually sorts data
> very roughly by life expectancy.  That requires writing to several
> blocks in parallel, i.e. one for long-lived data, one for short-lived
> data.  Made an impressive difference in logfs when I implemented that.

Ah, a bit like generational GC :-)
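
A very rough sketch of how I picture those per-lifetime write heads
(the names, the block size and the lifetime heuristic are all invented
here; this is not logfs code):

#include <stddef.h>

#define BLOCK_SIZE (64 * 1024)   /* assumed erase-block size */

enum lifetime { SHORT_LIVED, LONG_LIVED, NR_LIFETIMES };

struct write_head {
    unsigned int block;   /* erase block currently being filled */
    size_t used;          /* bytes already written into it */
};

static struct write_head heads[NR_LIFETIMES];

/* Stand-in for the real free-block allocator. */
static unsigned int get_free_block(void)
{
    static unsigned int next;
    return next++;
}

/* Crude guess: data that has been overwritten a few times already
 * will probably be overwritten again soon. */
static enum lifetime classify(unsigned int overwrite_count)
{
    return overwrite_count > 2 ? SHORT_LIVED : LONG_LIVED;
}

/*
 * Reserve 'len' bytes in the erase block matching the data's expected
 * lifetime, moving that head to a fresh block when the current one is
 * full.  Short- and long-lived data never share a block, so whole
 * blocks of short-lived data tend to die together and GC can reclaim
 * them almost for free, much like a young generation.
 */
static unsigned int alloc_space(unsigned int overwrite_count, size_t len)
{
    struct write_head *h = &heads[classify(overwrite_count)];

    if (h->used == 0 || h->used + len > BLOCK_SIZE) {
        h->block = get_free_block();
        h->used = 0;
    }
    h->used += len;
    return h->block;
}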

-- Jamie


