journaling

Bjorn Wesen bjorn.wesen at axis.com
Wed Nov 7 21:12:26 EST 2001


On Wed, 7 Nov 2001, Jörn Engel wrote:
> On a regular hard drive, user data journaling maximizes write
> performance, but it only achieves decent read performance when data
> that was written sequentially is also read sequentially. Very unlikely.

On the other hand, a "normal" filesystem does not optimize read speed
in any other way either. I don't see how you can optimize in any way
other than placing data that belongs together logically close together
physically.. an LFS can actually do this better than traditional
filesystems, because data is moved naturally during the GC process and
can be reorganized along the way.
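To make that concrete, here is a sketch of a GC pass that restores
locality while reclaiming a segment. This is not JFFS code; the
gather phase is omitted and log_append() is a hypothetical helper:

/* Sketch: GC pass that rewrites live data in logical order.
 * log_append() is a hypothetical helper that appends a data node
 * to the head of the log. */
#include <stdint.h>
#include <stddef.h>
#include <stdlib.h>

struct live_node {
    uint32_t ino;       /* owning inode */
    uint32_t offset;    /* logical offset within the file */
    void     *data;
    size_t   len;
};

extern void log_append(uint32_t ino, uint32_t offset,
                       const void *data, size_t len);

static int by_file_and_offset(const void *a, const void *b)
{
    const struct live_node *x = a, *y = b;

    if (x->ino != y->ino)
        return x->ino < y->ino ? -1 : 1;
    return x->offset < y->offset ? -1 : (x->offset > y->offset);
}

/* The log doesn't care in what order live data is written back,
 * so sort it logically first: logically adjacent data ends up
 * physically adjacent again. */
void gc_segment(struct live_node *nodes, size_t n)
{
    size_t i;

    qsort(nodes, n, sizeof(*nodes), by_file_and_offset);
    for (i = 0; i < n; i++)
        log_append(nodes[i].ino, nodes[i].offset,
                   nodes[i].data, nodes[i].len);
}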

The main problem with log-structured filesystems is that it's difficult
to scalably _locate_ the data you want to read. That's why JFFS1 and
JFFS2 have long mount times: the whole medium has to be scanned at
mount to rebuild the in-core index. Checkpointing the in-core
representation would help a bit, but it would still not scale well with
filesystem size.
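To show where the time goes, here is a sketch of such a mount-time
scan. The names are made up (this is not the actual JFFS scan code),
and flash_read()/index_insert() are stand-ins:

/* Hypothetical mount-time scan for a log-structured flash FS.
 * Every node on the medium is visited once to rebuild the in-core
 * index, so mount time grows linearly with the size of the medium,
 * not with the amount of live data. */
#include <stdint.h>
#include <stddef.h>

#define NODE_MAGIC 0x1985

struct node_header {
    uint32_t magic;     /* marker identifying a valid node */
    uint32_t ino;       /* inode this node belongs to */
    uint32_t version;   /* later versions obsolete earlier ones */
    uint32_t totlen;    /* total on-flash length of the node */
};

extern int flash_read(uint32_t ofs, void *buf, size_t len);
extern void index_insert(uint32_t ino, uint32_t version, uint32_t ofs);

void scan_medium(uint32_t flash_size)
{
    struct node_header hdr;
    uint32_t ofs = 0;

    while (ofs + sizeof(hdr) <= flash_size) {
        if (flash_read(ofs, &hdr, sizeof(hdr)) < 0)
            break;
        if (hdr.magic != NODE_MAGIC || hdr.totlen < sizeof(hdr)) {
            ofs += 4;       /* skip garbage, stay word-aligned */
            continue;
        }
        /* remember where the newest version of each node lives */
        index_insert(hdr.ino, hdr.version, ofs);
        ofs += (hdr.totlen + 3) & ~3u;  /* next node, aligned */
    }
}

Checkpointing amounts to writing a serialized copy of that index out
now and then so mount can load it instead of scanning, but the index
itself still grows with the medium, which is why it doesn't scale.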

Some sort of hierarchical checkpointing could perhaps work.. I haven't
read very much about that yet, but it's a natural issue for JFFS3 :)
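Purely to make the idea concrete (this layout is entirely speculative,
nothing like it exists in JFFS today): give each segment a small
on-flash summary of the nodes it contains, and let a master checkpoint
point at the summaries, e.g.:

/* Speculative two-level checkpoint layout, all names made up. */
#include <stdint.h>

struct seg_summary {
    uint32_t nr_nodes;
    struct {
        uint32_t ino;      /* inode the node belongs to */
        uint32_t version;
        uint32_t offset;   /* node's offset within the segment */
    } entry[];             /* one entry per node in the segment */
};

struct master_checkpoint {
    uint32_t magic;
    uint32_t nr_segments;
    uint32_t summary_ofs[]; /* flash offset of each segment summary */
};

Mount cost would then scale with the number of segments rather than
the number of nodes, and only segments written after the last master
checkpoint would need a full scan.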

The second problem with an LFS is the GC process. After some time you
reach a state where the log _is_ full, and every write has to force
part of the GC. There is a lot of tweaking that can be done in that
area. As you note below, it solves fragmentation as a byproduct, but
getting good performance and low latency out of it is difficult.
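The latency problem in a nutshell (a sketch, all helper names
hypothetical): once free segments run low, the write path itself has
to reclaim space before it can append, so an unlucky write pays for a
whole GC pass:

/* Sketch: why writes stall when the log fills up.
 * free_segments(), gc_collect_one_segment() and log_append_node()
 * are hypothetical helpers. */
#include <stddef.h>

#define GC_WATERMARK 2   /* always keep a couple of free segments */

extern int free_segments(void);
extern int gc_collect_one_segment(void);
extern int log_append_node(const void *data, size_t len);

int fs_write_node(const void *data, size_t len)
{
    /* Forced, in-line GC: move live data out of some dirty segment
     * to free it before the new node can be appended.  This is what
     * ruins write latency on a nearly-full log. */
    while (free_segments() < GC_WATERMARK) {
        if (gc_collect_one_segment() < 0)
            return -1;   /* genuinely full, nothing reclaimable */
    }
    return log_append_node(data, len);
}

Most of the tweaking goes into doing that work ahead of time, e.g.
collecting a segment in the background whenever the watermark gets
close, so foreground writes rarely block on it.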

> On the other hand, it is trivial to write a disk defragmentation tool
> for these filesystems. Just read and rewrite everything in the
> preferred order and the filesystem logic takes care of the rest.
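For illustration, that really is about all such a tool would need to
do. A minimal userspace sketch, using nothing beyond plain POSIX
calls (walking the files in the preferred order is left out):

/* Sketch: trivial "defragmenter" for a log-structured FS.
 * Rewriting a file in place makes the filesystem append it to the
 * log again as one contiguous run. */
#include <sys/types.h>
#include <fcntl.h>
#include <unistd.h>

int rewrite_file(const char *path)
{
    char buf[4096];
    ssize_t n;
    off_t pos = 0;
    int fd = open(path, O_RDWR);

    if (fd < 0)
        return -1;
    /* read each block and write it straight back in place */
    while ((n = pread(fd, buf, sizeof(buf), pos)) > 0) {
        if (pwrite(fd, buf, n, pos) != n)
            break;
        pos += n;
    }
    fsync(fd);
    return close(fd);
}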

/BW
