mtd/fs/jffs3 JFFS3design.tex,1.40,1.41
Artem Bityuckiy
dedekind at infradead.org
Thu Apr 21 12:50:24 EDT 2005
Update of /home/cvs/mtd/fs/jffs3
In directory phoenix.infradead.org:/tmp/cvs-serv21769
Modified Files:
JFFS3design.tex
Log Message:
Update and refine the DCP chapter.
Index: JFFS3design.tex
===================================================================
RCS file: /home/cvs/mtd/fs/jffs3/JFFS3design.tex,v
retrieving revision 1.40
retrieving revision 1.41
diff -u -r1.40 -r1.41
--- JFFS3design.tex 20 Apr 2005 16:10:38 -0000 1.40
+++ JFFS3design.tex 21 Apr 2005 16:50:20 -0000 1.41
@@ -336,71 +336,112 @@
\begin{itemize}
\item are associated with regular files;
\item refer to all the valid nodes of a regular file inode;
-\item allow not to keep in-core references to all the inode data nodes,
-but only references to the inode data checkpoints;
\item play the role of JFFS2 fragtrees, i.e., make it possible to quickly
locate the positions of data nodes for any given file data range.
\end{itemize}
Each regular file inode is associated with one or more data
-checkpoints. Each DCP corresponds to a fixed inode data range of size $R$.
-I.e., if the size of the regular file is $< R$, it has
-only one associated DCP node. If the size of the file is $> {R}$ but
-$< {2R}$ then there will be two associated DCP and so forth.
-Obviously, $R$ value is multiple of \texttt{PAGE\_SIZE}.
-
-Each DCP node carries the following information:
-\begin{itemize}
-\item index $I$
-which defines the DCP range (the file range described by DCP is
-${\lbrack}I{\cdot}R, I{\cdot}(R + 1)\rbrack$);
-
-\item version, in order to distinguish valid and obsolete DCP
-nodes;
-
-\item the list of data nodes belonging to the DCP range; the list is
-sorted by the data node offsets, i.e., is effectively the fragtree of
-the DCP range.
-\end{itemize}
-
-The above mentioned list is essentially an array of objects, containing
-the following:
+checkpoints. The number of associated DCP nodes depends on the file size:
+small files have only one corresponding DCP, while large files may have
+many associated data checkpoints.
+
+Each DCP corresponds to a fixed file data range of size $R$, called the
+\emph{DCP range}.
+The idea behind such splitting is to facilitate:
\begin{itemize}
-\item the data node range;
-\item the data node physical address of Flash.
+\item GC and file change speed optimization -- only a few DCP nodes are
+updated when a large file is changed or one of its nodes is GCed;
+\item memory consumption optimization -- the contents of the DCP ranges
+need not be kept in-core, only the references to the DCP nodes;
+\item DCP composing optimization -- DCP entries are kept sorted within a
+DCP, and it is much faster to sort short arrays than long ones.
\end{itemize}
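The first point above may be illustrated by a short sketch. Because each DCP
covers a fixed range $R$, a change to a file range dirties only the DCPs whose
indices that range intersects. The function name and the fixed 1~MiB value of
$R$ below are purely illustrative, not part of the design:

```c
#include <stdint.h>

/* Illustrative DCP range size: R = 1 MiB (hypothetical value). */
#define DCP_RANGE (1024u * 1024u)

/* Given a file change at byte offset 'offs' of length 'len' (len > 0),
 * compute the indices of the first and last DCP nodes that must be
 * rewritten.  All other DCP nodes of the inode stay valid. */
static void dcp_dirty_range(uint64_t offs, uint64_t len,
                            uint32_t *first, uint32_t *last)
{
    *first = (uint32_t)(offs / DCP_RANGE);
    *last  = (uint32_t)((offs + len - 1) / DCP_RANGE);
}
```

A write that stays inside one DCP range dirties exactly one DCP node; only a
write crossing a range boundary dirties two or more.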
-Thus, larger files have more associated DCP nodes. It stands for reason
-that when file is changed or GC moves any node of the file, the
-corresponding DCP node becomes obsolete and ought to be rewritten. But
-only those data checkpoints are rewritten that correspond to the changed
-file range.
-
-With the help of DCP JFFS3 need only keep in-core an array (or more
-precisely, a list of arrays) of DCP node references of files.
-
-The precise value of $R$ depends on the memory consumption requirements.
-The following table illustrates the dependency of the consumed memory and $R$
-for different file sizes (we assume each DCP reference takes 4 bytes).
+The data structure corresponding to DCP is:
+\begin{verbatim}
+struct jffs3_dcp_entry
+{
+	uint32_t phys_offs; /* The position of the node on flash. */
+	uint32_t offs;      /* Offset of the data range the node
+	                       refers to. */
+	uint16_t len;       /* The length of the node data range. */
+};
+
+struct jffs3_data_checkpoint
+{
+	uint16_t magic;     /* Magic bitmask of the DCP node. */
+	uint16_t index;     /* The DCP index; gives the DCP range offset
+	                       when multiplied by the DCP range size. */
+	uint32_t version;   /* Distinguishes valid from obsolete data
+	                       checkpoints. */
+	uint32_t hdr_crc;   /* DCP header CRC32 checksum. */
+	uint32_t crc;       /* DCP CRC32 checksum. */
+
+	/* An array of references to the nodes belonging to the
+	 * DCP range. The array is sorted by node range offset in
+	 * ascending order. */
+	struct jffs3_dcp_entry entries[];
+};
+\end{verbatim}
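Since the `entries` array is sorted by range offset, a DCP plays the role of a
fragtree for its range: locating the node covering a given file offset is a
binary search. The following is an illustrative sketch over an in-core copy of
the entry array (the lookup function is hypothetical, not part of the on-flash
format):

```c
#include <stdint.h>
#include <stddef.h>

struct jffs3_dcp_entry {
    uint32_t phys_offs; /* position of the node on flash */
    uint32_t offs;      /* offset of the data range the node refers to */
    uint16_t len;       /* length of the node data range */
};

/* Binary search an entry array (sorted by 'offs' in ascending order,
 * as the design requires) for the entry whose data range covers file
 * offset 'offs'.  Returns NULL if the offset falls into a hole. */
static const struct jffs3_dcp_entry *
dcp_find_entry(const struct jffs3_dcp_entry *e, size_t n, uint32_t offs)
{
    size_t lo = 0, hi = n;

    while (lo < hi) {
        size_t mid = lo + (hi - lo) / 2;

        if (e[mid].offs <= offs) {
            if (offs < e[mid].offs + e[mid].len)
                return &e[mid]; /* range covers the offset */
            lo = mid + 1;       /* offset lies after this range */
        } else {
            hi = mid;           /* offset lies before this range */
        }
    }
    return NULL;
}
```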
+
+The value of the DCP range $R$ depends on various aspects:
+\begin{description}
+\item[Memory consumption.] The larger $R$ is, the fewer DCP references
+have to be kept in-core. For example, a 128~MiB file requires 8192
+in-core DCP references if $R$~=~16~KiB, but only 128 DCP references if
+$R$~=~1~MiB.
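The arithmetic above can be sketched as follows (the helper name is
hypothetical; one in-core reference per DCP is assumed):

```c
#include <stdint.h>

/* Number of in-core DCP references needed for a file of 'size' bytes
 * with DCP range 'r', i.e. the number of DCP nodes of the file
 * (rounded up so a partial trailing range still gets a DCP). */
static uint64_t dcp_refs_needed(uint64_t size, uint64_t r)
{
    return (size + r - 1) / r;
}
```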
+
+\item[The maximal DCP node size on flash.] The physical size of DCP nodes
+should be neither too large nor too small. Very large DCP nodes result in
+slow DCP updates, whereas very small DCP nodes imply a larger space
+overhead while hardly decreasing the DCP update time. It is sane to limit
+the DCP node size to one or a few NAND pages. For a data checkpoint one
+NAND page in size, the maximal possible number of DCP entries $E$
+(which are 6~bytes in size) is:
\begin{center}
-\begin{tabular}{lll}
-$R$ & File size & RAM required\\
+\begin{tabular}{ll}
+NAND page size & Max. entries per DCP ($E$)\\
+\hline
+512 bytes & 80\\
+2048 bytes & 336\\
+4096 bytes & 667\\
+\end{tabular}
+\end{center}
+
+\item[The maximal data node range.] The larger the maximal data node
+range is, the fewer DCP entries need to be stored in a DCP for a fixed $R$.
+The following are examples of possible configurations, assuming the
+physical DCP node size is limited to the size of one NAND page.
+
+\begin{center}
+\begin{tabular}{ll}
+Max. data node range ($r$) & Max. DCP range ($R$)\\
+\multicolumn{2}{l}{512 bytes per NAND page}\\
+\hline
+4 KiB & 320 KiB\\
+8 KiB & 640 KiB\\
+16 KiB & 1280 KiB\\
+\multicolumn{2}{l}{2048 bytes per NAND page}\\
+\hline
+4 KiB & 1344 KiB\\
+8 KiB & 2688 KiB\\
+16 KiB & 5376 KiB\\
+\multicolumn{2}{l}{4096 bytes per NAND page}\\
\hline
-512 KiB & 128 MiB & 1 MiB\\
-1 MiB & 128 MiB & 0.5 MiB\\
-4 MiB & 4 GiB & 4 MiB\\
-8 MiB & 4 GiB & 1 MiB\\
+4 KiB & 2668 KiB\\
+8 KiB & 5336 KiB\\
+16 KiB & 10672 KiB\\
\end{tabular}
\end{center}
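The table values follow directly from the fact that a full DCP of $E$ entries,
each describing at most $r$ bytes, can cover at most $r{\cdot}E$ bytes. A
sketch (hypothetical helper name):

```c
#include <stdint.h>

/* Maximal DCP range for a given maximal data node range 'r' and 'e'
 * entries per DCP node: each entry covers at most 'r' bytes, so one
 * full DCP covers at most r * e bytes. */
static uint64_t dcp_max_range(uint64_t r, uint64_t e)
{
    return r * e;
}
```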
-The $R$ value may be determined from the flash size and the desired RAM
-consumption limit and may be configurable.
+In order to make it possible to write in chunks smaller than $r$, we
+ought to have \hbox{$R < r{\cdot}E$}. Since the number of data nodes
+corresponding to an inode range is still limited, JFFS3 must start
+merging small data nodes when the number of DCP entries reaches the
+value $E$ while the whole range $R$ is not yet filled with data. A
+possible way to calculate $R$ is to use the formula
+\hbox{$R = (r{\cdot}E)/k$}, where $k = 2, 3, \ldots$
+\end{description}
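The choice of $R$ and the merge trigger described above can be sketched as
follows. Both function names are hypothetical; the code only illustrates the
formula and the "entry array full before the range is full" condition:

```c
#include <stdint.h>
#include <stdbool.h>

/* Pick the DCP range R = (r * e) / k, k = 2, 3, ..., so that a DCP
 * has slack to hold more (smaller) data nodes than strictly needed
 * to cover R with maximal-range nodes. */
static uint64_t dcp_pick_range(uint64_t r, uint64_t e, unsigned k)
{
    return (r * e) / k;
}

/* A DCP with 'used' entries out of a maximum of 'e' must start
 * merging small data nodes once its entry array is full, even if
 * the covered range R is not yet completely filled with data. */
static bool dcp_needs_merge(unsigned used, unsigned e)
{
    return used >= e;
}
```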
-Since most files on the file system are small in size, we may facilitate
-faster changing of small files by making $R$ smaller for the beginning
-of files and and larger for the rest of files. I.e, 256~KiB for file
-ranges before 2~MiB and 8~MiB for ranges after 2~MiB.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%