JFFS2 and node checking

Josh Boyer jdub at us.ibm.com
Tue Sep 28 10:31:03 EDT 2004


On Tue, 2004-09-28 at 09:04, David Woodhouse wrote:
> You have to make the read/write code capable of dealing with the fact
> that the full rbtree hasn't been built. Possibly you sort through the
> raw nodes and put them into _pools_, each covering something like
> 256KiB of the file. Then when you get a read or write of any such range,
> you check the CRCs for the nodes in that _range_ and build the full map
> only for that range of the file. It sounds painful, to be honest -- I'd
> rather just tell people not to use such large files on JFFS2 -- split it
> up into multiple files instead.
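David's lazy per-range idea could be sketched roughly like this. This is a toy model, not the actual JFFS2 code (the real structures are jffs2_raw_node_ref and the per-inode fragtree): raw nodes are binned into hypothetical 256 KiB pools at scan time, and the CRC check plus map build for a pool happens only on the first read or write that touches its range.

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

#define POOL_SHIFT 18              /* hypothetical 256 KiB per pool */
#define POOL_SIZE  (1u << POOL_SHIFT)

/* A raw node as found by the mount scan: just its file range here. */
struct raw_node {
    uint32_t ofs, len;
    struct raw_node *next;
};

/* One pool covers POOL_SIZE bytes of the file's offset space. */
struct node_pool {
    struct raw_node *nodes;        /* unchecked nodes from the scan */
    int built;                     /* CRCs checked, map built for range? */
};

struct lazy_inode {
    struct node_pool *pools;
    uint32_t npools;
    unsigned crc_checks;           /* count of (simulated) CRC checks */
};

static struct lazy_inode *inode_new(uint32_t isize)
{
    struct lazy_inode *f = calloc(1, sizeof(*f));
    f->npools = (isize + POOL_SIZE - 1) / POOL_SIZE;
    f->pools = calloc(f->npools, sizeof(*f->pools));
    return f;
}

/* Mount-time scan: drop a node into its pool WITHOUT checking its CRC. */
static void add_raw_node(struct lazy_inode *f, uint32_t ofs, uint32_t len)
{
    struct node_pool *p = &f->pools[ofs >> POOL_SHIFT];
    struct raw_node *n = malloc(sizeof(*n));
    n->ofs = ofs; n->len = len;
    n->next = p->nodes; p->nodes = n;
}

/* First access to a pool pays the CRC cost for just that range. */
static void build_pool(struct lazy_inode *f, struct node_pool *p)
{
    if (p->built)
        return;
    for (struct raw_node *n = p->nodes; n; n = n->next)
        f->crc_checks++;           /* stand-in for a real CRC32 check */
    p->built = 1;
}

/* Called from the read/write paths before touching [ofs, ofs+len). */
static void touch_range(struct lazy_inode *f, uint32_t ofs, uint32_t len)
{
    uint32_t first = ofs >> POOL_SHIFT;
    uint32_t last = (ofs + len - 1) >> POOL_SHIFT;
    for (uint32_t i = first; i <= last && i < f->npools; i++)
        build_pool(f, &f->pools[i]);
}
```

The pain David mentions would be in the read/write paths: every access has to check whether its pools are built, and a node whose data spans a pool boundary complicates the binning.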

Yes, large files are bad, but they aren't the only source of such a time
delay.  IIRC, the problem is really just a matter of the number of nodes
per file.  So couldn't a small file with a large number of writes to it
have the same effect?  (Until the obsoleted nodes are actually deleted,
that is.)
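A back-of-envelope helper makes the point (hypothetical numbers, not measured JFFS2 behaviour): until GC reclaims obsoleted nodes, each write leaves a node behind, so scan cost tracks write count rather than file size.

```c
#include <assert.h>

/* Hypothetical model: one data node per write, all of which must be
 * walked at scan time until garbage collection reclaims the obsolete
 * ones.  Cost depends on how many writes happened, not on file size. */
static unsigned long nodes_for(unsigned long bytes_written,
                               unsigned long write_size)
{
    return (bytes_written + write_size - 1) / write_size;
}
```

Under this model a 40 KB file rewritten in 4-byte chunks (`nodes_for(40000, 4)`) carries exactly as many nodes as a ~40 MB file written once in 4 KiB pages (`nodes_for(40960000, 4096)`): 10,000 either way.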

I'm thinking of fifos here too...

josh


More information about the linux-mtd mailing list