Database on JFFS2?

Jörn Engel joern at wohnheim.fh-wedel.de
Wed Apr 16 07:16:08 EDT 2003


On Tue, 15 April 2003 17:23:59 +0100, Jasmine Strong wrote:
> On Tuesday, Apr 15, 2003, at 17:14 Europe/London, Jörn Engel wrote:
> >On Tue, 15 April 2003 17:11:44 +0100, Jasmine Strong wrote:
> >>Unless it would cause many erases, which would slow things down a 
> >>lot...
> >Erases get triggered by garbage collection, which depends on the
> >amount of data written, not the chunk size.
> 
> yes.  I think my two points were actually the same point taken twice :-)
> If you're only updating a few bytes of data you will end up writing
> a large proportion of log control data.  That'll end up being
> responsible for most of the erase traffic.

Actually, that shouldn't matter too much. For comparison, I did some
benchmarks with jffs2 (without compression) running on a ramdisk.

The benchmark wrote data to jffs2, deleted it, and repeated this
several times to reduce statistical noise. Horrible results.
Then I got a clue and added "sleep 6" after both writing and deleting,
getting roughly twice the performance. Why?

Under normal operation, the system is idle a lot and the garbage
collector (GC) has plenty of time to clean up the mess you made. But
the first benchmark was measuring a system without any idle time, so
every write had to wait for the GC to free some space first. Wrong.
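
Roughly, the fixed benchmark did the equivalent of the C sketch below.
Only the write/sleep/delete/sleep shape is the point; the mount point,
file size and iteration count are made up for illustration.

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <fcntl.h>

#define CHUNK  4096
#define SIZE   (1024 * 1024)	/* data written per round, made up */
#define ROUNDS 10		/* repetitions to reduce noise, made up */

int main(void)
{
	char buf[CHUNK];
	ssize_t left;
	int i, fd;

	memset(buf, 0x5a, sizeof(buf));
	for (i = 0; i < ROUNDS; i++) {
		fd = open("/mnt/jffs2/bench", O_WRONLY | O_CREAT | O_TRUNC, 0644);
		if (fd < 0) {
			perror("open");
			return 1;
		}
		for (left = SIZE; left > 0; left -= CHUNK)
			if (write(fd, buf, CHUNK) != CHUNK) {
				perror("write");
				return 1;
			}
		close(fd);
		sleep(6);	/* idle time for the GC after writing */
		unlink("/mnt/jffs2/bench");
		sleep(6);	/* and again after deleting */
	}
	return 0;
}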

Back to the Database:
Even if you write data in very small chunks, the system should have
enough free time to GC those fragments and reassemble them into larger
chunks with less overhead, so this doesn't matter.

Unless you permanently operate near the limit, with writes coming in
so fast that the GC never gets any idle time. Then it does matter.
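
For reference, the small-chunk write pattern we are talking about is
roughly the sketch below. The file name, record contents and the
fsync-per-update policy are just assumptions to illustrate the point.

#include <stdio.h>
#include <unistd.h>
#include <fcntl.h>

int main(void)
{
	/* A few bytes per update, fsync()ed each time so it survives a
	 * powerfail.  Each flushed update typically ends up as its own
	 * small node on flash, which is the log overhead discussed above. */
	const char record[] = "key=42 value=hello\n";
	const ssize_t len = sizeof(record) - 1;
	int i, fd;

	fd = open("/mnt/jffs2/db.log", O_WRONLY | O_CREAT | O_APPEND, 0644);
	if (fd < 0) {
		perror("open");
		return 1;
	}
	for (i = 0; i < 1000; i++) {
		if (write(fd, record, len) != len) {
			perror("write");
			return 1;
		}
		fsync(fd);
	}
	close(fd);
	return 0;
}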

> Still, if you need to be powerfail-safe, I can't see any way of not
> doing this.

Right.

Jörn

-- 
Sometimes, asking the right question is already the answer.
-- Unknown


