Jffs2 and big file = very slow jffs2_garbage_collect_pass

Jörn Engel joern at logfs.org
Wed Jan 23 09:06:39 EST 2008


On Wed, 23 January 2008 14:16:12 +0100, Ricard Wanderlof wrote:
> >
> >It doesn't really matter whether the data degrades from a number of
> >reads or from time passing.  With a constantly high write rate, there is
> >less time for degradation than with a low write rate.
> 
> If we have a system that is only used (= powered on) rarely, then any 
> degradation from time passing could become significant.

In that case the write rate wouldn't be _constantly_ high. ;)

> The only input I have got from chip manufacturers regarding this issue is 
> that with increasing bit densities and decreasing bit cell sizes in the 
> future, things like the probability of random bit flips are likely to 
> increase. (Somewhere there is a limit where the amount of error correction 
> needed to handle these things grows too large for the chip to remain 
> practically useful; say 10 error correction bits per stored bit or whatever.)

If error rates increase, device drivers have to apply stronger error
correction.  The data quality after error correction should stay
roughly the same.
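To illustrate the trade-off being discussed (more correction bits buy
tolerance of more raw bit flips), here is a toy single-error-correcting
Hamming(7,4) code in Python.  This is only a sketch of the principle;
it is not the ECC scheme JFFS2 or the MTD NAND layer actually uses, and
the function names are made up for this example.

```python
# Toy Hamming(7,4): 4 data bits are stored as 7 bits (3 parity bits),
# and any single flipped bit in the codeword can be corrected.

def hamming74_encode(d):
    """Encode 4 data bits [d0, d1, d2, d3] into a 7-bit codeword."""
    d0, d1, d2, d3 = d
    p0 = d0 ^ d1 ^ d3          # covers codeword positions 1, 3, 5, 7
    p1 = d0 ^ d2 ^ d3          # covers codeword positions 2, 3, 6, 7
    p2 = d1 ^ d2 ^ d3          # covers codeword positions 4, 5, 6, 7
    # Codeword layout (1-based positions): p0 p1 d0 p2 d1 d2 d3
    return [p0, p1, d0, p2, d1, d2, d3]

def hamming74_decode(c):
    """Correct up to one flipped bit and return the 4 data bits."""
    c = list(c)
    # Recompute the parity checks; the syndrome is the 1-based
    # position of the flipped bit (0 means no error detected).
    s0 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s1 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s2 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s0 | (s1 << 1) | (s2 << 2)
    if syndrome:
        c[syndrome - 1] ^= 1   # flip the bad bit back
    return [c[2], c[4], c[5], c[6]]
```

The overhead here is 3 parity bits per 4 data bits for one correctable
flip per codeword; correcting more flips per word (as denser chips will
demand) requires proportionally more parity, which is the limit Ricard
is pointing at.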

Jörn

-- 
The cost of changing business rules is much more expensive for software
than for a secretary.
-- unknown


