Many small files instead of one large file - writing, wear, mount-time?
Martin Egholm Nielsen
martin at egholm-nielsen.dk
Wed Mar 9 06:11:06 EST 2005
>>Hence, my initial strategy was to have a file in NAND for each resource.
>>However, I noticed that mount time increased "severely" when many files
>>were put on the device, and doing an "ls" on the device/directory for
>>the first time took lots of time as well.
> Owing to its design, JFFS2 works extremely slowly with directories
> containing that many files.
From IRC - just to keep the ML thread up to date:
egholm: But could I make it faster by putting them into sub-directories?
dedekind: you could if the number of your subdirectories is small
dedekind: basically, JFFS2 uses a list to keep all the directory's
children
dedekind: so, the performance depends linearly on the number of children
egholm: number of children - in one layer only? or accumulated children?
dedekind: in one layer
egholm: super! Then that may be a solution! Thanx
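
For the archive, here is a minimal sketch of that fan-out idea in C,
assuming resources are keyed by a numeric ID; the /flash/state path,
the FANOUT value and the two-level layout are illustrative assumptions
of mine, not something from the thread:

    /* Spread resource files over a fixed set of subdirectories so
     * no single JFFS2 directory accumulates a long child list. */
    #include <stdio.h>
    #include <sys/stat.h>
    #include <sys/types.h>

    #define FANOUT 16  /* keep each directory's child count small */

    /* Builds paths like "/flash/state/07/resource-1234". */
    static void resource_path(char *buf, size_t len, unsigned id)
    {
        snprintf(buf, len, "/flash/state/%02u/resource-%u",
                 id % FANOUT, id);
    }

    int main(void)
    {
        char path[128];
        unsigned d;

        /* Create the fan-out directories once; EEXIST is harmless. */
        for (d = 0; d < FANOUT; d++) {
            snprintf(path, sizeof(path), "/flash/state/%02u", d);
            mkdir(path, 0755);
        }

        resource_path(path, sizeof(path), 1234);
        printf("resource 1234 lives at %s\n", path);
        return 0;
    }

With FANOUT directories of roughly equal size, each lookup walks a
child list of about N/FANOUT entries instead of N.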
// Martin
>>Unfortunately, a low mount time is one of the factors giving the user
>>a good experience with the system, so I started considering another
>>strategy - namely one large file to hold all these states.
>>
>>However, I'm a bit concerned how fopen( ..., "rw" ) is handled
>>underneath when I flush/sync the file descriptor if I only mess with a
>>small part of the file. Is the entire file flushed to NAND once more, or
>>does Linux+JFFS2 handle this and only write the parts (nodes) that are
>>affected...
>
> Don't worry, only that "messed-with" piece will be flushed. The "large
> file" solution will definitely be faster.
>
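
To make that concrete: a minimal sketch of the single-file approach,
assuming POSIX open/pwrite/fsync and fixed-size records (the record
size and the update_record name are mine, not from the thread). As an
aside, fopen's read/write update mode is actually "r+"; the sketch uses
the lower-level open(2) instead. Only the dirtied range goes back to
NAND as new JFFS2 nodes; the rest of the file stays where it is:

    #include <fcntl.h>
    #include <sys/types.h>
    #include <unistd.h>

    #define RECORD_SIZE 64  /* assumed fixed-size state records */

    /* Overwrite record number idx inside the big state file and
     * push just that change out to flash. */
    int update_record(const char *file, unsigned idx, const void *rec)
    {
        int fd = open(file, O_RDWR);
        if (fd < 0)
            return -1;

        /* Rewrite only this record's bytes... */
        if (pwrite(fd, rec, RECORD_SIZE,
                   (off_t)idx * RECORD_SIZE) != RECORD_SIZE) {
            close(fd);
            return -1;
        }

        /* ...and sync: only the modified range reaches NAND. */
        if (fsync(fd) != 0) {
            close(fd);
            return -1;
        }
        return close(fd);
    }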