Jffs2 and big file = very slow jffs2_garbage_collect_pass

Ricard Wanderlof ricard.wanderlof at axis.com
Wed Jan 23 08:16:12 EST 2008


On Wed, 23 Jan 2008, Jörn Engel wrote:

> On Wed, 23 January 2008 12:57:09 +0100, Ricard Wanderlof wrote:
>>
>> Perhaps it's possible to devise something that at least accomplishes part
>> of the goal, such as, when writing a new block, also writing some
>> statistical information: the number of read accesses since the previous
>> write (or power-up), the reason for writing (new data, gc because of
>> bitflips, ...), and a write counter. Something of that nature.
>
> I'm still fairly unconvinced about the read accounting.  We could do
> something purely stochastic like accounting _every_ read, but just with
> a probability of, say, 1:100,000.  That would still, within statistical
> jitter, behave the same for everyone.  But once we depend on the average
> mount time of systems, I'm quite unhappy with the solution.

I think you are right. An error counter should provide enough statistics to 
determine whether a block has begun to go bad.
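
For what it's worth, here is a rough userspace sketch of what such 
per-eraseblock accounting could look like -- not actual JFFS2 code, and all 
names and thresholds in it are hypothetical: a corrected-bitflip counter that 
suggests rewriting the block once it passes some limit, together with the 
stochastic read sampling discussed above, counting only about one read in 
100,000.

/*
 * Rough userspace sketch only; not actual JFFS2 code, and all names
 * and thresholds here are hypothetical.
 */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#define READ_SAMPLE_RATE 100000	/* count roughly 1 read in 100,000 */
#define ERR_GC_THRESHOLD 8	/* hypothetical "block going bad" limit */

struct eb_stats {
	uint32_t erase_count;	/* wear info, written out with the block */
	uint32_t err_count;	/* corrected bitflips since the last erase */
	uint32_t sampled_reads;	/* stochastic estimate of read traffic */
};

/* Call on every read; only rarely does it actually touch the counter. */
static void account_read(struct eb_stats *eb)
{
	if (rand() % READ_SAMPLE_RATE == 0)
		eb->sampled_reads++;
}

/* Call whenever ECC reports a corrected bitflip in this block;
 * returns nonzero once the block looks worth rewriting (gc). */
static int account_bitflip(struct eb_stats *eb)
{
	eb->err_count++;
	return eb->err_count >= ERR_GC_THRESHOLD;
}

int main(void)
{
	struct eb_stats eb = { .erase_count = 42 };
	long i;

	for (i = 0; i < 1000000; i++)
		account_read(&eb);
	printf("sampled reads: %u (roughly %u actual)\n",
	       (unsigned)eb.sampled_reads,
	       (unsigned)(eb.sampled_reads * READ_SAMPLE_RATE));

	for (i = 0; i < 10; i++) {
		if (account_bitflip(&eb)) {
			printf("bitflip %ld: block should be rewritten\n", i + 1);
			break;
		}
	}
	return 0;
}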

>> I'd be a bit wary of this with NAND chips, some of which have a 100,000
>> maximum erase/write cycle specification, though. And I think that,
>> especially when nearing or exceeding that maximum, there is some bit decay
>> occurring over time and not just from reading.
>
> It doesn't really matter whether the data degrades from a number of
> reads or from time passing.  With a constantly high write rate, there is
> less time for degradation than with a low write rate.

If we have a system that is only used (= powered on) rarely, then any 
degradation from time passing could become significant.

> Problematic would be to have a high write rate for a while, then a very
> low write rate that allows data to rot for a long time.  And this also
> depends on your numbers being representative of every flash chip. ;)

Yes, and the latter point is very true. Our tests covered only one chip type 
from one manufacturer, and of course other chips might behave differently.

The only input I have got from chip manufacturers regarding this issue is 
that with increasing bit densities and decreasing bit cell sizes in the 
future, things like the probability of random bit flips are likely to 
increase. (Somewhere there is a limit where the amount of error correction 
needed to handle this grows too large to make the chip practically useful; 
say 10 error correction bits per stored bit or whatever.)
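
As a rough back-of-envelope, assuming a binary BCH code over GF(2^m), where 
correcting t bit errors in a codeword of up to 2^m - 1 bits costs at most 
m * t parity bits (the figures below are purely illustrative and not tied to 
any particular chip):

/*
 * Illustrative only: BCH parity overhead for a 512-byte sector as the
 * number of correctable errors t grows.  For a binary BCH code over
 * GF(2^m), correcting t errors costs at most m * t parity bits.
 */
#include <stdio.h>

int main(void)
{
	const int m = 13;		/* codeword length up to 2^13 - 1 = 8191 bits */
	const int data_bits = 512 * 8;	/* one 512-byte sector */
	int t;

	for (t = 1; t <= 16; t *= 2) {
		int parity = m * t;
		printf("t = %2d correctable errors: ~%3d parity bits (%.2f%% overhead)\n",
		       t, parity, 100.0 * parity / data_bits);
	}
	return 0;
}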

/Ricard
--
Ricard Wolf Wanderlöf                           ricardw(at)axis.com
Axis Communications AB, Lund, Sweden            www.axis.com
Phone +46 46 272 2016                           Fax +46 46 13 61 30


