UBIFS volume corruption (bad node at LEB 0:0)
David Bergeron
mho.linux-mtd at b2n.ca
Tue Jan 20 16:47:29 EST 2009
On 2009-01-20, at 4:01, Artem Bityutskiy wrote:
> Yeah, our current theory is that you have your script running, which
> means it is opened, and it is orphan now, and you re-mount the FS R/O,
> and end up with a R/O FS + an orphan. We never considered this
> scenario
> before. And the scenario is a little nasty because UBIFS may want to
> write when you release the orphan (close the file), but the FS is R/O.
> We'll work on this, thanks for excellent bug description!
I'm afraid I've misled you here. That's actually not the case: the
script is *not* open when I re-mount r/o. The kernel simply will not
allow that to happen; I get a "Device or resource busy" error if I
even try.
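(To illustrate what I mean: if something on / really were still held
open in a way that blocks the remount, this is roughly what I'd see;
the exact wording depends on the mount implementation:)

  mount -o remount,ro /
  # -> mount: ... Device or resource busy

That is not what happens here; the remount succeeds.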
Consider:
exec /bin/sh -xc "lsof; sync; sleep 2; lsof; sync; sleep 2; \
mount -o remount,ro /; sleep 2; reboot -df;"
The running & orphaned script is history as soon as the 'exec' happens;
lsof confirms this. UBIFS gets two sync()s plus 4 seconds to clean up
before the filesystem goes read-only, which should be plenty of time
and opportunity.
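(A quick way to double-check for a lingering orphan, assuming lsof
flags them in the usual way, is to look for '(deleted)' entries:)

  lsof | grep '(deleted)'
  # no output means nothing is holding an unlinked file open

The lsof runs in the sequence above show nothing of the sort.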
There's gotta be something else, especially since it doesn't break if
I tell the kernel to mount 'rw' in the first place (bad practice, but
a possible temporary workaround for me nonetheless).
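(By mounting 'rw' in the first place I mean the kernel command line,
roughly like the following; the ubi/volume names are just examples
from my setup:)

  # current: kernel mounts the UBIFS root read-only, init remounts it rw
  ubi.mtd=1 root=ubi0:rootfs rootfstype=ubifs ro
  # workaround: have the kernel mount it read-write from the start
  ubi.mtd=1 root=ubi0:rootfs rootfstype=ubifs rw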
One thing I wonder about: UBIFS itself prints "UBIFS: mounted
read-only". Is that just information for the human reader, or is UBIFS
perhaps still behaving as if in read-only operation somewhere, even
/after/ becoming writable, which could cause it to mishandle the
cleanup of the orphan?
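(From userspace all I can see is the VFS view; the path below is just
a scratch file for testing:)

  grep ' / ' /proc/mounts              # reports rw once the root is remounted
  touch /writetest && rm /writetest    # plain writes do work at that point

So if UBIFS keeps some internal read-only state around after the
remount, it isn't visible from here.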
I very much appreciate all the time you've put into looking at this.
Best regards,
-david