mtd/fs/jffs3 JFFS3design.tex,1.34,1.35

Artem Bityuckiy dedekind at infradead.org
Wed Apr 13 13:41:11 EDT 2005


Update of /home/cvs/mtd/fs/jffs3
In directory phoenix.infradead.org:/tmp/cvs-serv438

Modified Files:
	JFFS3design.tex 
Log Message:
One more section

Index: JFFS3design.tex
===================================================================
RCS file: /home/cvs/mtd/fs/jffs3/JFFS3design.tex,v
retrieving revision 1.34
retrieving revision 1.35
diff -u -r1.34 -r1.35
--- JFFS3design.tex	13 Apr 2005 13:14:31 -0000	1.34
+++ JFFS3design.tex	13 Apr 2005 17:41:07 -0000	1.35
@@ -64,14 +64,14 @@
 NAND flash (nandsim module). In all experiments below the whole
 flash comprises a single partition where a JFFS2 file system is put.
 
-\begin{flushleft}\textbf{Experiment 1}\end{flushleft}
-
+\begin{description}
+\item[Experiment 1]
 Populate the JFFS2 file system with a typical Linux root FS, namely
 \texttt{/bin}, \texttt{/sbin}, \texttt{/etc}, \texttt{/boot} and
 partially \texttt{/usr} from the x86 Fedora Core 2 distribution.
 
 We observed the following.
-The total files number was 4372 - 719 directories and 2995 regular
+The total number of files was 4372 -- 719 directories and 2995 regular
 files (all files' nodes were made pristine).
 The total size of all files was 116 MiB (compression was enabled).
 
@@ -89,7 +89,14 @@
 \end{tabular}
 \end{center}
 
-\begin{flushleft}\textbf{Experiment 2}\end{flushleft}
+Note that in our experiments all inodes were in the inode cache, which
+is not typical for a real-life system. If no inodes were in the inode
+cache, only 658/1778/12706 KiB would be consumed by the
+\texttt{jffs2\_node\_ref} objects. However, opening any file or looking
+up any directory entry would require additional RAM.
+
+\item[Experiment 2]
 The following command on the same empty 64 MiB JFFS2 file
 system
 \begin{quote}
@@ -107,7 +114,7 @@
 \end{tabular}
 \end{center}
 
-\begin{flushleft}\textbf{Experiment 3}\end{flushleft}
+\item[Experiment 3]
 The following command on the same empty 64 MiB JFFS2 file
 system
 \begin{quote}
@@ -125,22 +132,17 @@
 \end{tabular}
 \end{center}
 
-Note, that all inodes were in the inode cache in our experiments,
-which isn't that typical
-for the real-life system though. If no inodes were in the inode cache,
-only 658/1778/12706 KiB would be consumed by the \texttt{jffs2\_node\_ref}
-objects. However, opening any
-file or looking up in any directory would require additional RAM.
-
-It is worth noting here, that in JFFS2 memory which is consumed even in
-the case when no files are opened is called \emph{in-core memory}.
+It is worth noting here that in JFFS2 the memory which is consumed even
+when no files are open is called \emph{in-core memory}.
+In-core memory mostly consists of \texttt{jffs2\_node\_ref} objects.
+\end{description}
 
-Assuming the amount consumed memory grows linearly with the flash size
-(which I believe is true) we'd get the following numbers for 1GB flash
-in the same experiments:
+Assuming the amount of consumed memory grows linearly with the flash
+size (which I believe is true), we would have the following numbers
+for a 1 GiB flash in similar experiments.
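+
+As a sanity check on the arithmetic (a worked example using the numbers
+quoted above): the scale factor from a 64 MiB partition to a 1 GiB one
+is $1024/64 = 16$, so the 658 KiB of \texttt{jffs2\_node\_ref} objects
+quoted above for Experiment 1 become
+\[ 658\,\textrm{KiB} \times 16 \approx 10.3\,\textrm{MiB}, \]
+which matches the figure in the Experiment 1a table below.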
 
-\begin{flushleft}\textbf{Experiment 1a (imaginary)}\end{flushleft}
+\begin{description}
+\item[Experiment 1a (imaginary)]~
 \begin{center}
 \begin{tabular}{ll}
 jffs2\_node\_ref    & 10.3 MiB \\
@@ -152,7 +154,7 @@
 \end{tabular}
 \end{center}
 
-\begin{flushleft}\textbf{Experiment 2a (imaginary)}\end{flushleft}
+\item[Experiment 2a (imaginary)]~
 \begin{center}
 \begin{tabular}{ll}
 jffs2\_node\_ref    & 27.8 KiB \\
@@ -163,7 +165,7 @@
 \end{tabular}
 \end{center}
 
-\begin{flushleft}\textbf{Experiment 3a (imaginary)}\end{flushleft}
+\item[Experiment 3a (imaginary)]~
 \begin{center}
 \begin{tabular}{ll}
 jffs2\_node\_ref    & 198.5 KiB \\
@@ -173,9 +175,71 @@
 total               & 743.6 MiB
 \end{tabular}
 \end{center}
+\end{description}
 
 Needless to say, this is unacceptable for embedded systems.
 
+We distinguish the following memory consumption problems.
+\begin{description}
+\item[In-core RAM.]
+Due to its design, JFFS2 needs to keep a reference to each node,
+associating a small RAM object with it, which results in substantial
+RAM consumption. The amount of in-core RAM depends linearly on the
+amount of information on the flash: the more data is put to the file
+system, the more RAM JFFS2 takes (see the sketch after this list).
+
+\item[Inode build RAM.] Again due to the JFFS2 design, built inodes
+(those which are in the inode cache, which happens when, say, a file is
+opened or a directory is browsed) consume a lot of RAM. For files JFFS2
+needs to keep fragment trees in RAM (\texttt{jffs2\_node\_frag} and
+\texttt{jffs2\_full\_dnode} objects) and for directories -- children
+lists (\texttt{jffs2\_full\_dirent} objects). The larger the file you
+open, the more RAM is required for its fragtree. The larger the
+directory you browse (i.e., the more directory entries it has), the
+more RAM is needed.
+
+\item[Peak RAM usage.] JFFS2 memory consumption also depends on how
+data is written. Each transaction goes directly to Flash (through the
+write-buffer for page-based Flashes like NAND), i.e., there is a
+write-through cache in Linux/JFFS2. Consequently, small transactions
+result in a great number of small nodes on Flash, and hence much memory
+is required for in-core objects and fragtrees. Later these small nodes
+will be merged by GC (at most \texttt{PAGE\_SIZE} bytes of a file's
+data may be stored in one node) and the amount of consumed memory will
+become much lower, but the peak JFFS2 memory usage is very high.
+\end{description}
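+
+To make the in-core overhead concrete, the per-node reference is a
+small object of roughly the following shape. This is only a sketch:
+the type and field names below are illustrative and are not copied
+from the JFFS2 sources.
+
+\begin{verbatim}
+#include <stdint.h>
+
+/* Sketch of a per-node bookkeeping object (illustrative, not the
+ * actual JFFS2 definition).  One such object is kept in RAM for
+ * every node found on the flash. */
+struct node_ref_sketch {
+        struct node_ref_sketch *next_in_ino; /* next node of same inode */
+        struct node_ref_sketch *next_phys;   /* next node on the flash  */
+        uint32_t flash_offset;               /* node position on flash  */
+        uint32_t totlen;                     /* total node length       */
+};
+
+/* Roughly 16 bytes per node on a 32-bit CPU; with hundreds of
+ * thousands of nodes on a large partition this is where the in-core
+ * megabytes come from. */
+\end{verbatim}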
+
+%
+% Mount time
+%
+\subsection{Mount time}
+The slow mount is the most prominent and upsetting JFFS2 shortcoming.
+The reason is, again, the JFFS2 design, which does not assume any
+definite structure on the Flash media but instead makes use of one big
+log made up of nodes -- the only JFFS2 on-flash data structure. This
+design provides several very nice JFFS2 features, such as:
+\begin{itemize}
+\item very economical flash usage -- data usually takes only as much
+flash space as it actually needs, without wasting much space as in the
+case of block devices like HDDs;
+\item very efficient ``on-the-fly'' compression, which allows a lot of
+data to be fit into the Flash;
+\item relatively quick read and write operations.
+\end{itemize}
+
+Unfortunately, there are also several drawbacks, one of which is the
+slow mount. Because of the absence of any definite Flash structure,
+JFFS2 needs to scan the whole Flash partition to identify all nodes and
+to build the file system map. This takes a long time, especially on
+large Flash partitions.
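+
+As a rough illustration (the throughput figure here is an assumed
+example, not a measurement), the scan alone costs at least
+\[ t_{\textrm{mount}} \approx \frac{S_{\textrm{partition}}}{B_{\textrm{read}}}, \]
+so a 1 GiB partition read at, say, 10 MiB/s already takes on the order
+of 100 seconds before any other mount-time work is done.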
+
+To increase the mount speed, JFFS2 performs as little work as possible
+during the mount process and defers a considerable amount of work to
+the GC thread. The GC thread continues working in the background (this
+process is called ``checking'' in JFFS2 as it mostly checks nodes' CRC
+checksums, although it also discovers obsolete nodes while building
+temporary fragment trees and direntry lists). The important thing is
+that during this checking process nobody may write to the file system;
+only read operations are allowed. This is an additional drawback.
+
 %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
 %
 % CHECKSUM
@@ -328,7 +392,7 @@
 
 Virtual erase blocks have also disadvantages. The bigger size affects
 garbage collection as larger entities have to be handled, which
-degrades GC efficency and performance.
+degrades GC efficiency and performance.
 
 In order to keep accounting simple, the number of concatenated blocks
 must be a power of two.
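+
+With a power-of-two number of physical blocks per virtual block,
+translating between physical and virtual block numbers reduces to
+shifts and masks, which is what keeps the accounting simple. A minimal
+sketch (the constants and function names are illustrative, not JFFS3
+code):
+
+\begin{verbatim}
+/* Illustrative only: 4 physical blocks per virtual block. */
+#define BLOCKS_PER_VIRT  4                   /* must be a power of two */
+#define VIRT_SHIFT       2                   /* log2(BLOCKS_PER_VIRT)  */
+
+/* Which virtual block a physical block belongs to. */
+static inline unsigned int phys_to_virt(unsigned int phys_block)
+{
+        return phys_block >> VIRT_SHIFT;
+}
+
+/* Position of a physical block inside its virtual block. */
+static inline unsigned int index_in_virt(unsigned int phys_block)
+{
+        return phys_block & (BLOCKS_PER_VIRT - 1);
+}
+\end{verbatim}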
@@ -338,7 +402,7 @@
 The concatenation of physical blocks to virtual blocks must deal with
 bad blocks in the virtual block rather than treating the whole virtual
 block as bad, as is currently done in JFFS2. Therefore JFFS3 must treat the
-physical blocks inside a virtual block seperately. This implies the
+physical blocks inside a virtual block separately. This implies the
 clean marker write per physical block after erase and the limitation
 of writes to physical block boundaries.
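+
+A possible shape for such per-virtual-block accounting is sketched
+below; this is purely illustrative and not taken from JFFS3 code --
+the point is only that bad and clean physical blocks are tracked
+individually inside each virtual block.
+
+\begin{verbatim}
+#include <stdint.h>
+
+/* Illustrative sketch of per-virtual-block state. */
+struct virt_block_sketch {
+        uint32_t phys_offset; /* offset of the first physical block   */
+        uint8_t  bad_mask;    /* bit N set: physical block N is bad   */
+        uint8_t  clean_mask;  /* bit N set: clean marker written
+                                 after erasing physical block N       */
+};
+\end{verbatim}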
 




