JFFS3 memory consumption

Artem B. Bityuckiy dedekind at infradead.org
Wed Jan 26 12:11:34 EST 2005


On Wed, 26 Jan 2005, Josh Boyer wrote:

> On Wed, 2005-01-26 at 09:57, Artem B. Bityuckiy wrote:
> > On Wed, 26 Jan 2005, David Woodhouse wrote:
> > 
> > > On Wed, 2005-01-26 at 08:44 +0000, Artem B. Bityuckiy wrote:
> > > > K1 is reduced well by Ferenc Havasi's "summary" patch. It would be
> > > > nice to decrease K2 as well. Any ideas how to do so?
> > > 
> > > For a start, we can increase the maximum node size. Limiting it to
> > > PAGE_SIZE means we have a larger number of nodes than we'd otherwise
> > > have. If we quadruple that, we'll probably end up with about a third as
> > > many nodes. 
> > I thought about having several 4K chunks within one node. That way we
> > keep reads fast but have fewer nodes. Did you mean this?
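[Editor's note: a hypothetical sketch of the multi-chunk idea above, not actual JFFS2/JFFS3 code. One node holds several independently compressed 4 KiB chunks, so the per-node header is amortized over all chunks while a single page read still decompresses only 4 KiB. The `pack_node`/`read_chunk` names and the in-memory offset table are illustrative assumptions.]

```python
# Sketch (assumption, not kernel code): several independently compressed
# 4 KiB chunks packed into one node, with an offset table so a single
# page can be read without decompressing the whole node.
import zlib

CHUNK = 4096

def pack_node(data):
    """Compress each 4 KiB chunk separately and record its offset/length."""
    node = b""
    offsets = []                          # (offset, compressed length) per chunk
    for i in range(0, len(data), CHUNK):
        c = zlib.compress(data[i:i + CHUNK])
        offsets.append((len(node), len(c)))
        node += c
    return node, offsets

def read_chunk(node, offsets, idx):
    """Decompress only the chunk that holds the requested page."""
    off, clen = offsets[idx]
    return zlib.decompress(node[off:off + clen])

data = bytes(range(256)) * 256            # 64 KiB of sample data
node, offsets = pack_node(data)
page = read_chunk(node, offsets, 3)       # read the 4th page only
assert page == data[3 * CHUNK:4 * CHUNK]
```

The trade-off: each chunk is compressed in isolation, so total compression is worse than compressing the whole node at once, but random 4 KiB reads stay cheap.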
> 
> If you use larger "chunks" you'll get better overall compression.  E.g. 
> a 64KiB node should compress better than a 4KiB node.  This is partially
> how squashfs achieves better compression than cramfs.
Sure, but what about read degradation?
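[Editor's note: the effect Josh describes is easy to demonstrate. The sketch below, a hypothetical illustration rather than filesystem code, compresses the same 64 KiB of redundant sample data as sixteen separate 4 KiB chunks and as one 64 KiB chunk; the larger chunk compresses better because the compressor can exploit redundancy across chunk boundaries and pays the stream overhead only once.]

```python
# Compare total compressed size: sixteen 4 KiB chunks vs. one 64 KiB chunk.
# Sample data is illustrative; real file data will show a smaller gap.
import zlib

data = (b"jffs3 node compression sample " * 4096)[:64 * 1024]

small = sum(len(zlib.compress(data[i:i + 4096]))
            for i in range(0, len(data), 4096))
big = len(zlib.compress(data))

print(small, big)       # the single 64 KiB chunk compresses smaller
assert big < small
```

The flip side, as asked above, is read cost: to return one 4 KiB page from the 64 KiB node, the whole node must be decompressed.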

> 
> So the end result is better compression and fewer nodes.
> 
> josh
> 
> 

--
Best Regards,
Artem B. Bityuckiy,
St.-Petersburg, Russia.

More information about the linux-mtd mailing list