JFFS2 and nodes checking
Artem B. Bityuckiy
abityuckiy at yandex.ru
Tue Sep 28 10:26:32 EDT 2004
David Woodhouse wrote:
> On Tue, 2004-09-28 at 17:57 +0400, Artem B. Bityuckiy wrote:
>
>>Sorry, I don't understand. Suppose that after an unclean reboot a bad
>>last node appears. Before any write, this last node will be detected
>>*before the write*, since iget() will be called first. Isn't that so?
>
>
> True, but it won't necessarily be _deleted_ so it could still be there
> on the _next_ boot.
Yes, you are right. But we can handle this situation. Suppose we've
found that the last node is bad. In this case we write a new node
containing the same data range (the range is known since the header
CRC is good). Thus the last bad node is obsoleted and the new last
node is good. If there is another unclean reboot, we will have two bad
nodes at the end.
Only after obsoleting the last bad node do we allow new writes.
What do you think now? :-)
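The recovery step above can be sketched in user space. This is only a toy
model under stated assumptions: the structure and function names
(`toy_node`, `append_node`, `recover_last_node`) are hypothetical, and real
JFFS2 tracks nodes via `struct jffs2_raw_node_ref`, not a flat array. It
only illustrates the log mechanics: a bad last node with a good header gets
superseded by a fresh node covering the same range.

```c
#include <assert.h>
#include <stdint.h>

/* Toy model of a JFFS2-like log of data nodes (names hypothetical). */
struct toy_node {
    uint32_t ofs, len;      /* file range covered, known from the header */
    int hdr_crc_ok;         /* header CRC valid */
    int data_crc_ok;        /* data CRC valid */
    int obsolete;           /* superseded by a later node */
};

#define MAX_NODES 16
static struct toy_node log_[MAX_NODES];
static int nnodes;

static void append_node(uint32_t ofs, uint32_t len, int data_ok)
{
    struct toy_node n = { ofs, len, 1, data_ok, 0 };
    log_[nnodes++] = n;
}

/*
 * Recovery idea from the mail: if the *last* node has a good header
 * but a bad data CRC (interrupted write), immediately write a new
 * node covering the same range.  The new node obsoletes the bad one,
 * so on the next mount the last node is good again.  Only after this
 * would normal writes be allowed.  Returns 1 if recovery happened.
 */
static int recover_last_node(void)
{
    struct toy_node *last;

    if (nnodes == 0)
        return 0;
    last = &log_[nnodes - 1];
    if (!last->hdr_crc_ok || last->data_crc_ok)
        return 0;               /* nothing to recover */

    /* The range is trusted because the header CRC checked out. */
    append_node(last->ofs, last->len, 1);
    last->obsolete = 1;
    return 1;
}
```

For example, after a good node at 0..4095 and a bad one at 4096..8191, one
call to `recover_last_node()` leaves the bad node obsolete and a good node
covering 4096..8191 at the tail, and a second call is a no-op.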
>
> You have to make the read/write code capable of dealing with the fact
> that the full rbtree hasn't been built. Possibly you sort through the
> raw nodes and put them into _pools_, each covering something like a
> 256KiB of the file. Then when you get a read or write of any such range,
> you check the CRCs for the nodes in that _range_ and build the full map
> only for that range of the file. It sounds painful, to be honest -- I'd
> rather just tell people not to use such large files on JFFS2 -- split it
> up into multiple files instead.
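David's pool idea could be sketched as follows. The 256KiB pool size comes
from his mail; everything else (`pool_of`, `check_pool`, `touch_range`, the
counters) is a hypothetical illustration, not real JFFS2 code. Each file
offset maps to a pool, and the expensive CRC checking and map building
happens lazily, once per pool, on the first read or write that touches it.

```c
#include <assert.h>
#include <stdint.h>

#define POOL_SHIFT 18                   /* 256 KiB pools, as suggested */
#define POOL_SIZE  (1u << POOL_SHIFT)
#define MAX_POOLS  64

static int pool_checked[MAX_POOLS];     /* CRCs verified, map built? */
static int crc_checks_done;             /* counts the expensive work */

static uint32_t pool_of(uint32_t file_ofs)
{
    return file_ofs >> POOL_SHIFT;
}

/* Verify CRCs and build the fragment map for one pool, only once. */
static void check_pool(uint32_t pool)
{
    if (pool_checked[pool])
        return;
    /* ...here the real code would CRC-check every raw node whose data
     * falls inside [pool * POOL_SIZE, (pool + 1) * POOL_SIZE)... */
    crc_checks_done++;
    pool_checked[pool] = 1;
}

/* On a read or write, only the pools the range touches get checked. */
static void touch_range(uint32_t ofs, uint32_t len)
{
    uint32_t p;

    for (p = pool_of(ofs); p <= pool_of(ofs + len - 1); p++)
        check_pool(p);
}
```

A read spanning a pool boundary checks both pools; a later access to an
already-checked pool costs nothing, which is the point of the scheme.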
>
I'm not really keen to do such an optimization; your idea is
complicated enough. I'd like to find an easier solution and not
introduce additional complicated things into the already complicated
file system :-)
OK, I'll think about this problem.
For now, the idea of checking only the last node seems better to me,
but I haven't thought it through enough yet.
--
Best Regards,
Artem B. Bityuckiy,
St.-Petersburg, Russia.