Lost space on JFFS2 partition

John Hall john-news1 at cambridgetechgroup.com
Thu Aug 28 06:28:18 EDT 2003


"David Woodhouse" <dwmw2 at infradead.org> wrote in message
news:1062066193.8465.1571.camel at hades.cambridge.redhat.com...

> > 1. The files in question are log files and so there are lots of
> > small writes happening. How does JFFS2 compress files? Is it on a
> > block basis or per write? If it is the latter then I could imagine
> > that compression is actually having an adverse effect when a file is
> > created from a large number of small writes.

> What do you mean by 'on a block basis'?  JFFS2 does compression within
> each log entry, which in the case of small writes is basically
> per-write. It doesn't hurt though -- if the node payload would grow on
> compression, we write it out uncompressed.

I wasn't sure how JFFS2 handles its writes, i.e. whether it commits
each write to the flash immediately, or whether it builds up a page's
or block's worth before writing to the flash. Now I see that each
write is done immediately.
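
If I've understood correctly, the per-node decision is roughly the
following (a userspace sketch only -- zlib's compress2() standing in
for the kernel compressor, and the function and constant names are
mine, not the real JFFS2 code):

#include <string.h>
#include <zlib.h>

#define COMPR_NONE 0x00
#define COMPR_ZLIB 0x06

/* Build the payload for one data node: try to compress it, but fall
 * back to a raw copy if the compressed form would be no smaller. */
static int pick_payload(const unsigned char *data, unsigned long len,
                        unsigned char *out, unsigned long *out_len)
{
    uLongf clen = len;  /* refuse any output larger than the input */

    if (compress2(out, &clen, data, len, Z_BEST_COMPRESSION) != Z_OK
        || clen >= len) {
        /* Compression failed or didn't shrink the data: store raw. */
        memcpy(out, data, len);
        *out_len = len;
        return COMPR_NONE;
    }
    *out_len = clen;
    return COMPR_ZLIB;
}

That matches your point that compression can never make a node's
payload bigger on the flash.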

> > 2. A bug in JFFS2 was causing some unused space not to be garbage
> > collected. The version of JFFS2 being used is 9 months old, so
> > perhaps I should merge a later version in anyway.

> Sort of. I think it's related to a NAND-specific bug that I fixed last
> week, where we'd consistently waste space under that usage pattern,
> and although it's reclaimable we wouldn't account it as free in
> statfs().
>
> We still don't account it as free -- but we don't waste space nearly as
> often as we used to either; we trigger garbage-collection to fill our
> buffer rather than just padding it.

Thanks for your explanation. I guess I need to look at upgrading
JFFS2 (or, more likely, MTD).
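
Just to check my understanding of the new behaviour, the flush path
presumably looks something like this (hypothetical names and a
stubbed-out GC -- a sketch of the idea only, not the real MTD code):

#include <stddef.h>
#include <string.h>

struct wbuf {
    unsigned char buf[2048];  /* one NAND page */
    size_t len;               /* bytes queued so far */
};

/* Stub standing in for GC: move the next live node (at most 'space'
 * bytes of it) into the buffer, returning 0 when nothing fits. */
static size_t gc_one_node(struct wbuf *w, size_t space)
{
    (void)w;
    (void)space;
    return 0;
}

static void flush_wbuf(struct wbuf *w)
{
    size_t space = sizeof(w->buf) - w->len;

    /* Old behaviour: pad the gap with 0xFF and waste it until the
     * block is garbage-collected.  New behaviour: first pull live
     * nodes in via GC so the page goes out as full as possible. */
    while (space > 0) {
        size_t got = gc_one_node(w, space);
        if (got == 0)
            break;            /* nothing left that fits */
        w->len += got;
        space -= got;
    }
    if (space)
        memset(w->buf + w->len, 0xff, space);

    /* ...then write w->buf out to the flash page... */
}

Is that roughly right?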

Cheers,
John