jffs2: Excess summary entries
Thomas.Betker at rohde-schwarz.com
Mon Nov 9 05:28:49 PST 2015
Hello Wei:
> > I prefer:
> > if (jffs2_sum_active() && *retlen == len) {
> > ...
> > res = jffs2_sum_add_kvec(...)
> > ...
> > }
> >
> > In the case that part of a node has been written to flash, the whole
> > node will be marked as a dirty node, but only in memory; it is not
> > marked on flash (JFFS2_NODE_ACCURATE is not cleared).
> >
> > If the summary is stored when *retlen != len, there are two cases:
> >
> > * In most cases, another write with the same node info succeeds
> >   later; the node that was written partially before will be marked
> >   as obsolete during scan, and we won't read from it.
> > * This node is the newest node for this region; it will be treated
> >   as a normal node during scan, and we may read data that is already
> >   corrupted. Yet this won't break any rules of JFFS2 or lead to a
> >   muddle.
>
> I forgot there's a CRC check on the data, so in this case the node
> couldn't pass the CRC check and will be marked as obsolete too :)
>
> > The node written partially will be treated as a normal node in a
> > full scan routine too, so I think we should mark this node as dirty
> > on flash in the case that *retlen != len.
Yes, this is basically what I have been seeing in my tests: After the
first write has failed, the node is written again, and only the second
node is used upon remount. So I guess we can go either way: dropping
the summary entry, or keeping it.
By now, I have come to the conclusion that it's probably cleaner to drop
the summary entry, just as is done when write buffering is enabled.
This way, we can be sure that the node data is ignored on reboot the
same way it is ignored at the moment the MTD write fails. And while
extra summary entries won't usually cause any problems, there is no
advantage in writing them out if they are ignored anyway.
Best regards,
Thomas Betker