JFFS3 memory consumption

Artem B. Bityuckiy dedekind at infradead.org
Wed Jan 26 13:33:28 EST 2005


On Wed, 26 Jan 2005, David Woodhouse wrote:

> On Wed, 2005-01-26 at 17:11 +0000, Artem B. Bityuckiy wrote:
> > > If you use larger "chunks" you'll get better overall compression.  E.g. 
> > > a 64KiB node should compress better than a 4KiB node.  This is partially
> > > how squashfs achieves better compression than cramfs.
> 
> > Surely, but how about read degradation?
> 
> See what zisofs does. If you populate adjacent pages when decompressing,
> rather than just throwing away the extra data you had to decompress
> anyway, then it's not so bad.
Ah! I looked at isofs/compress.c:zisofs_readpage().
Looks like a good idea!
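To sketch the idea (a toy Python model, not kernel code -- the names, the
4-page node size, and the dict-as-page-cache are made up for illustration):

```python
import zlib

PAGE_SIZE = 4096
PAGES_PER_NODE = 4  # hypothetical: one compressed node covers 4 pages

# Toy "flash": one compressed node holding several pages of file data.
raw = (b"jffs3 " * 3000)[:PAGE_SIZE * PAGES_PER_NODE]
node = zlib.compress(raw)

page_cache = {}  # page index -> page data


def readpage(index):
    """Return page `index`; on a miss, decompress the whole node and
    populate the adjacent pages too, instead of throwing them away."""
    if index not in page_cache:
        data = zlib.decompress(node)  # had to inflate the whole node anyway
        base = (index // PAGES_PER_NODE) * PAGES_PER_NODE
        for i in range(PAGES_PER_NODE):  # fill the sibling pages for free
            page_cache[base + i] = data[i * PAGE_SIZE:(i + 1) * PAGE_SIZE]
    return page_cache[index]


first = readpage(1)  # one decompression...
```

After the single readpage() call, all four pages of the node sit in the
cache, so sequential reads of the other three pages cost no extra inflate.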

OK, we need to decide how many pages to put in one node. I suppose
4-8 pages is enough. What number do you have in mind?
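The size/ratio tradeoff above is easy to measure. A quick check (plain
Python zlib on synthetic redundant data -- the sample text is arbitrary)
comparing one 64 KiB node against sixteen independent 4 KiB nodes:

```python
import zlib

# 64 KiB of reasonably redundant "file contents".
data = (b"some reasonably redundant file contents\n" * 2048)[:64 * 1024]

# One big node: the whole 64 KiB compressed at once.
whole = len(zlib.compress(data))

# Sixteen small nodes: each 4 KiB chunk compressed independently,
# so each one pays header overhead and re-learns the dictionary.
parts = sum(len(zlib.compress(data[i:i + 4096]))
            for i in range(0, len(data), 4096))

print(whole, parts)  # the single big node compresses better
```

The bigger the node, the better the ratio -- the cost being that a random
read has to decompress more data, which the zisofs trick above amortizes.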
 

--
Best Regards,
Artem B. Bityuckiy,
St.-Petersburg, Russia.
