JFFS3 & performance

Josh Boyer jdub at us.ibm.com
Fri Jan 21 18:54:24 EST 2005


On Fri, 2005-01-21 at 14:46 -0800, Jared Hulbert wrote:
> New idea.
> 
> Why should we waste time compressing the uncompressible?  JFFS2
> actually spent a lot of time compressing.
> 
> I can think of a few possible mechanisms:
> a) extension based
>         -Don't compress files with .mpg, .jpg, .avi, .gz, etc.  User
> defined list?
> b) test first data
>          Compress the first X-sized chunk of data to determine if the
> file is compressible.  Make a determination and write data.
> c) test node by node
>           If the last node didn't compress, stop compressing the file.

Neither b) nor c) works.  Binary files with large sections of zeros
between seemingly random data compress quite well.  With those schemes
you lose the compression benefit of all those zeros (or any other
repeating pattern).
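
A rough userspace illustration (plain zlib, not the in-kernel JFFS2
compressor code, and the sizes below are made up): a file whose first
chunk looks incompressible can still shrink to almost nothing once the
zero-filled remainder is included.

/* Sketch: why judging a file by its first chunk (or first node) is
 * misleading.  Userspace zlib only, not the JFFS2 compressor paths. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <zlib.h>

int main(void)
{
    enum { CHUNK = 4096, TOTAL = 64 * 1024 };
    unsigned char *buf = malloc(TOTAL);
    unsigned char *out = malloc(compressBound(TOTAL));
    uLongf outlen;

    /* First 4K looks like random, already-compressed data... */
    srand(42);
    for (int i = 0; i < CHUNK; i++)
        buf[i] = rand() & 0xff;
    /* ...but the rest of the file is zero-filled. */
    memset(buf + CHUNK, 0, TOTAL - CHUNK);

    outlen = compressBound(TOTAL);
    compress2(out, &outlen, buf, CHUNK, Z_DEFAULT_COMPRESSION);
    printf("first chunk: %d -> %lu bytes (looks incompressible)\n",
           CHUNK, (unsigned long)outlen);

    outlen = compressBound(TOTAL);
    compress2(out, &outlen, buf, TOTAL, Z_DEFAULT_COMPRESSION);
    printf("whole file:  %d -> %lu bytes (compresses fine)\n",
           TOTAL, (unsigned long)outlen);

    free(buf);
    free(out);
    return 0;
}

With scheme b) you decide "incompressible" after the first chunk, and
with scheme c) you stop after the first node, so the zeros never get
compressed at all.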

Option a) could work, maybe.  You can argue that file extensions are
more of a Winders mechanism though.  None of those files have to end in
a specific extension on Linux.  E.g. someone can make a foo.gz, copy it
to foo.notgz, and gzip will still grok it.

But if the list were user-definable, as you suggested, then users could
tune it to their specific usage.
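
The check itself is cheap enough.  Something along these lines would
do; the table here is only an example of what a user-supplied list
(from a mount option, say) might end up looking like:

/* Sketch of an extension-based "don't bother compressing" check.
 * The table is an example, not anything JFFS2/JFFS3 defines today. */
#include <string.h>

static const char *nocompress_ext[] = {
    ".mpg", ".jpg", ".avi", ".gz", ".bz2", ".zip", NULL
};

static int should_skip_compression(const char *name)
{
    size_t namelen = strlen(name);

    for (int i = 0; nocompress_ext[i]; i++) {
        size_t extlen = strlen(nocompress_ext[i]);

        if (namelen > extlen &&
            strcmp(name + namelen - extlen, nocompress_ext[i]) == 0)
            return 1;    /* extension matched, skip compression */
    }
    return 0;            /* compress as usual */
}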

Or, as has been suggested before, you could use xattrs to do per-file
compression.  This is probably the most generic option.
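
As a sketch of how that could look from the filesystem's side, it could
check a flag like the hypothetical "user.compress" attribute below (the
name is made up; neither JFFS2 nor JFFS3 defines one today):

/* Sketch: decide whether to compress a file from a per-file xattr.
 * "user.compress" is a hypothetical name used only for illustration. */
#include <stdio.h>
#include <string.h>
#include <sys/types.h>
#include <sys/xattr.h>

static int wants_compression(const char *path)
{
    char value[8];
    ssize_t len = getxattr(path, "user.compress", value, sizeof(value) - 1);

    if (len < 0)
        return 1;                   /* no attribute: compress by default */

    value[len] = '\0';
    return strcmp(value, "0") != 0; /* "0" means don't compress */
}

int main(int argc, char **argv)
{
    if (argc > 1)
        printf("%s: %s\n", argv[1],
               wants_compression(argv[1]) ? "compress" : "don't compress");
    return 0;
}

Userspace would then mark a file with something like
"setfattr -n user.compress -v 0 foo.mpg".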

josh