Jffs2 and big file = very slow jffs2_garbage_collect_pass
Jörn Engel
joern at logfs.org
Fri Jan 18 16:00:19 EST 2008
On Fri, 18 January 2008 18:39:01 +0000, Jamie Lokier wrote:
>
> Yes! I have exactly the same problem, except I'm using 2.4.26-uc0,
> and it's a 1MB partition (16 blocks of 64kbytes).
>
> I am tempted to modify the JFFS2 code to implement a hard limit of 50%
> full at the kernel level.
>
> The JFFS2 docs suggest 5 free blocks are enough to ensure GC is
> working. In my experience that does often work, but occasionally
> there's a catastrophically long and CPU intensive GC.
If you want to make GC go berserk, here's a simple recipe:
1. Fill filesystem 100%.
2. Randomly replace single blocks.
There are two ways to solve this problem:
1. Reserve some amount of free space for GC performance.
2. Write in some non-random fashion.
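To put rough numbers on the first solution, here is a toy flash simulator (purely illustrative — the block geometry, the greedy victim choice, and the GC trigger are my assumptions, not JFFS2's actual code). It does uniform random overwrites and measures write amplification, i.e. how many flash page writes each user write costs once GC copying is included. Shrinking the reserved free space makes the GC copy cost climb steeply:

```python
import random

def simulate_gc(num_blocks=64, pages_per_block=32, spare_blocks=4,
                writes=20000, seed=0):
    """Toy log-structured flash simulator with greedy GC.

    Returns write amplification: total flash page writes divided by
    user page writes. All parameters are illustrative assumptions.
    """
    rng = random.Random(seed)
    user_pages = (num_blocks - spare_blocks) * pages_per_block
    blocks = [set() for _ in range(num_blocks)]  # block -> live pages
    where = {}                                   # page -> block holding it
    free = list(range(num_blocks))
    current = free.pop()                         # block being written
    flash_writes = 0

    def write_page(page):
        nonlocal current, flash_writes
        if len(blocks[current]) == pages_per_block:
            current = free.pop()                 # open a fresh block
        old = where.get(page)
        if old is not None:
            blocks[old].discard(page)            # invalidate old copy
        blocks[current].add(page)
        where[page] = current
        flash_writes += 1

    for p in range(user_pages):                  # fill filesystem 100%
        write_page(p)

    for _ in range(writes):
        # GC when free blocks run low: pick the block with the fewest
        # live pages and copy its survivors forward (greedy policy).
        while len(free) < 2:
            victim = min((b for b in range(num_blocks)
                          if b != current and b not in free),
                         key=lambda b: len(blocks[b]))
            for p in list(blocks[victim]):
                write_page(p)                    # GC copy costs writes
            blocks[victim].clear()
            free.append(victim)
        write_page(rng.randrange(user_pages))    # random single-block replace

    return flash_writes / (writes + user_pages)
```

Running it with a tight reserve versus a generous one (e.g. `spare_blocks=4` versus `spare_blocks=16`) shows the former paying considerably more GC copying per user write — the same effect, in miniature, as the catastrophic GC passes described above.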
Solution 2 works even better if the filesystem actually sorts data
very roughly by life expectancy. That requires writing to several
blocks in parallel, i.e. one for long-lived data, one for short-lived
data. It made an impressive difference in logfs when I implemented it.
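The parallel write fronts can be sketched as a simple stream selector (a hypothetical heuristic, not logfs's actual implementation): pages that are overwritten often get routed to a "hot" block stream, so hot blocks tend to be invalidated together and become cheap, nearly-empty GC victims, while cold blocks rarely need copying at all.

```python
from collections import Counter

class SegregatedAllocator:
    """Route writes to a 'hot' or 'cold' block stream based on how
    often a page has been overwritten. Heuristic sketch only; real
    predictors can be far more elaborate (hence the papers)."""

    def __init__(self, hot_threshold=3):
        self.write_counts = Counter()      # page -> observed write count
        self.hot_threshold = hot_threshold

    def stream_for(self, page):
        """Return which open block stream this write should go to."""
        self.write_counts[page] += 1
        if self.write_counts[page] >= self.hot_threshold:
            return "hot"                   # short life expectancy
        return "cold"                      # long life expectancy
```

A page starts out "cold" and is promoted to the "hot" stream once it has been rewritten `hot_threshold` times; the threshold and the count-based predictor are both arbitrary choices made for the sketch.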
And of course academics can write many papers about good heuristics to
predict life expectancy. In fact, they already have.
Jörn
--
"Security vulnerabilities are here to stay."
-- Scott Culp, Manager of the Microsoft Security Response Center, 2001