jffs2_get_inode_nodes() very very slow
Thomas Gleixner
tglx at linutronix.de
Wed Feb 2 05:26:51 EST 2005
On Wed, 2005-02-02 at 10:05 +0100, Rudi Engelbertink wrote:
> The powerfail tests are done by:
> A. a clock. Just turn off and on the power every 15 minutes, and start
> an application which logs two 40-60 byte events every second.
> ...
> Yes, the root is accessible, but the directory where the logging is stored
> is unavailable for several minutes.
You hit the worst case for JFFS2.
Your event logging creates tons of small nodes for your logfiles.
There are about 96,000 very small nodes on the chip, so the mount time
is not surprising. This also consumes quite a large amount of memory.
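As a rough back-of-envelope sketch (the per-node header size is an assumption, based on struct jffs2_raw_inode being roughly 68 bytes; the 50-byte average is taken from the quoted 40-60 byte event range), the header overhead alone rivals the payload:

```shell
# Hypothetical numbers: ~68 bytes of JFFS2 node header per event
# (assumed size of struct jffs2_raw_inode), ~50 bytes of payload.
nodes=96000
header=68
payload=50
echo "header overhead: $(( nodes * header )) bytes"   # ~6.5 MB
echo "log payload:     $(( nodes * payload )) bytes"  # ~4.8 MB
```

So more than half of the flash consumed by the log is metadata, and every one of those headers has to be scanned at mount time.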
We have no real cure for this at the moment; this scenario is on our
design list for JFFS3. I remember that somebody else came up with this
issue some time ago. IIRC, changing the logging method helped a bit:
cnt=0
while true
do
        log_event
        cnt=$((cnt + 1))
        if [ "$cnt" -ge "$LIMIT" ]
        then
                # close the small log, fold it into the big one
                closelog
                cat log.small >>log.big
                rm log.small
                cnt=0
        fi
done
This converts the small nodes into bigger ones when the data is appended
to log.big. Garbage collection should kick in quite fast and clean up
the small nodes. The delay might not go away entirely, but it should be
much better than now. This will also give you more usable capacity on
your partition, since the small nodes consist mostly of node overhead.
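To see why appending helps, here is a sketch under the assumption that a JFFS2 data node carries at most PAGE_SIZE (4 KiB) of payload, reusing the same hypothetical 50-byte average event size:

```shell
# Hypothetical numbers: ~96000 events of ~50 bytes each, coalesced into
# nodes holding up to 4 KiB of payload (assumed JFFS2 data node limit).
data=$(( 96000 * 50 ))                 # total log payload in bytes
small_nodes=96000                      # one tiny node per event before rotation
big_nodes=$(( (data + 4095) / 4096 ))  # nodes needed after appending to log.big
echo "$small_nodes -> $big_nodes nodes"
```

Two orders of magnitude fewer nodes to scan at mount time, and correspondingly less header overhead on the flash.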
You may also try YAFFS for the logging partition. It should deal with
this situation a bit better.
tglx