Follow-up to wearing / caching question

Charles Manning manningc2 at actrix.gen.nz
Mon Feb 7 19:22:23 EST 2005


On Tuesday 08 February 2005 08:52, Jörn Engel wrote:
> On Mon, 7 February 2005 13:51:22 -0500, Matthew Cole wrote:
> > The question posed by Martin Neilsen leads me to write in search of an
> > answer that I've been pondering for a few days.  I've been tasked with
> > approximating the lifespan of the flash (JFFS2) filesystem embedded in
> > our products.  Is there a best method for calculating the space required
> > for a fixed-size file over a given lifespan?  If we want our flash
> > filesystem to be available for an approximate lifespan of 20 years, given
> > the wear-leveling duty-cycle of JFFS2, and an average block endurance of
> > 100k write/erase cycles, would I need 150% of the file's size? 200%? 1000%?
> > The worst-case answer should be acceptable, but obviously, the
> > most-realistic case is what we're aiming for.  The actual read/write duty
> > cycle of the application is quite variable, so that adds some complexity
> > to the problem, but a good guess for now would be that it writes the
> > entire file out to flash once a minute.  But as that is an independent
> > variable, maybe someone could help me solve for that over a span of duty
> > cycles?

20 year product life? Really? What stuff are you still using that was first 
plugged in in 1985?

> Assuming non-compressible data and zero jffs2 overhead simplifies the
> calculation.
>
> In your scenario, you write the file every minute, over 20 years,
> which is about 60x24x365x20 or 10M times.  You can only write any
> individual address 100k times, so the flash would have to be 100x
> bigger than your imaginary file.
>
> Another way to look at it is an imaginary 1MiB flash.  You can write
> it 100k times, for a total of 100GiB written to it.  With 600M seconds
> in your expected 20 years, that gives you ~160 Bytes/s average write
> speed.  Not very much.
>
> Is that the calculation you were looking for?
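
For anyone who wants to plug their own numbers into that, here is a rough
sketch of the same arithmetic (Python, same simplifying assumptions as
above: non-compressible data, no JFFS2 overhead, perfect wear leveling;
the file size and write interval are just placeholders):

ERASE_CYCLES   = 100_000                    # rated block endurance
LIFETIME_YEARS = 20
SECONDS        = LIFETIME_YEARS * 365 * 24 * 60 * 60

# View 1: how much bigger than the file must the flash be?
rewrites = LIFETIME_YEARS * 365 * 24 * 60   # one full rewrite per minute, ~10.5M
overprovision = rewrites / ERASE_CYCLES     # ~105x the file size
print(f"file rewrites over lifetime   : {rewrites:,}")
print(f"flash needed                  : ~{overprovision:.0f}x the file size")

# View 2: for a given flash size, what average write rate can it sustain?
flash_bytes  = 1 * 1024 * 1024              # imaginary 1 MiB flash
write_budget = flash_bytes * ERASE_CYCLES   # total bytes it can absorb, ~100 GiB
rate = write_budget / SECONDS
print(f"sustainable average write rate: ~{rate:.0f} bytes/s")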

There are two factors which skew this:

1) Garbage collection. Depending on how the files are rewritten and freed up, 
different amounts of garbage-collection rewriting will be performed. In the 
worst case, GC can drive the number of physical writes up considerably.

2) "Squatter files". Some files live a long time and some are transient. 
Those that live a long time will tend to take up "squatters rights" on an 
area of flash which means that the rest gets written more often.
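
A crude way to account for both is to multiply the logical writes by a GC
write-amplification factor and divide by the fraction of flash that is not
pinned by long-lived files. The two factors below are pure guesses for
illustration; measure your own workload before believing any of it:

ERASE_CYCLES   = 100_000
LIFETIME_YEARS = 20
rewrites = LIFETIME_YEARS * 365 * 24 * 60       # one rewrite per minute

write_amplification = 2.0   # guess: each logical write costs ~2x in GC rewrites
squatter_fraction   = 0.3   # guess: ~30% of the flash pinned by long-lived files

# Only the non-squatter part of the flash absorbs the churn, and every
# logical write costs write_amplification physical writes.
physical_writes = rewrites * write_amplification
overprovision = physical_writes / ERASE_CYCLES / (1 - squatter_fraction)

print(f"naive estimate   : ~{rewrites / ERASE_CYCLES:.0f}x the file size")
print(f"adjusted estimate: ~{overprovision:.0f}x the file size")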


>
> Jörn



