Testing ubiblock
Ezequiel Garcia
elezegarcia at gmail.com
Wed Dec 12 07:26:56 EST 2012
Hello,
Here are the numbers on ubiblock, using the posted v2 patch.
The test is mainly focused on eraseblock wearing,
but some throughput information is also available.
If you want some background on ubiblock, read here:
http://lkml.indiana.edu/hypermail/linux/kernel/1211.2/02556.html
---
The test procedure is as follows:
First, an image is created for each filesystem under test
(currently ext4, vfat, ubifs).
Each image is populated with files created by 'sysbench prepare'.
Then an emulated environment is started and, for each image,
the following is done (a rough shell sketch of one iteration follows the list):
1. Insert the nandsim driver (wear = 0)
2. Format the mtd device with ubiformat
3. Mount the device (through ubiblock or not, depending on the filesystem)
4. Run the sysbench test three times with one file-io mode; report throughput
5. Unmount the device and remove the ubi, ubifs and ubiblock drivers
6. Report NAND wear from nandsim
7. Remove the nandsim driver
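For reference, one iteration looks roughly like the sketch below. The
nandsim geometry, device names, volume name and image name are
illustrative assumptions, not the exact values used in the runs, and
the read/write ubiblock device only exists with the ubiblock patch
applied:

  # 1. Insert nandsim (this geometry simulates a 256MiB NAND; an assumption)
  modprobe nandsim first_id_byte=0x20 second_id_byte=0xaa \
                   third_id_byte=0x00 fourth_id_byte=0x15

  # 2. Flash the prepared (ubinized) image and attach UBI
  ubiformat /dev/mtd0 -f ubi-ext4.img      # image name is illustrative
  ubiattach /dev/ubi_ctrl -m 0

  # 3. Mount through ubiblock (ext4/vfat) or directly (ubifs)
  mount /dev/ubiblock0_0 /mnt              # device name depends on the patch
  # mount -t ubifs ubi0:test /mnt          # ubifs case; volume name illustrative

  # 4. Run the sysbench file-io test three times
  #    (invocation sketched after the next paragraph)
  for i in 1 2 3; do
          (cd /mnt && sysbench --test=fileio --file-total-size=64M \
                               --file-test-mode=seqwr run)
  done

  # 5. Unmount and remove the UBI-related drivers
  umount /mnt
  ubidetach /dev/ubi_ctrl -m 0
  rmmod ubiblock ubifs ubi

  # 6./7. Collect the wear report from nandsim (e.g. from dmesg) and remove it
  rmmod nandsim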
Sysbench produces very artificial workloads, but it is simple and
useful enough to get a feel for ubiblock behavior under specific
conditions.
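For completeness, the sysbench file-io invocations have roughly the
following form; the total size is an illustrative assumption and the
test mode is one of seqwr, rndwr or rndrw:

  sysbench --test=fileio --file-total-size=64M prepare
  sysbench --test=fileio --file-total-size=64M --file-test-mode=rndwr run
  sysbench --test=fileio cleanup

The 'prepare' step is only run once, when the filesystem image is
built; each mounted image is then exercised with 'run' alone.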
The results are: (complete reports attached)
Wear:               seqwr    rndwr    rndrw
--------------------------------------------------------------
ext4                 5586     6932     5337
vfat                 5554     8305     6005
ubifs-none           4965     4188     4141
ubifs-lzo            4104     4104     4104
Test time [s]:      seqwr    rndwr    rndrw
--------------------------------------------------------------
ext4                  6.5      8.8      5.4
vfat                  6.5     15.9      5.4
ubifs-none            5.0      0.5      0.3
ubifs-lzo             2.5      0.3      0.2
Transfer rate:      seqwr      rndwr      rndrw
--------------------------------------------------------------
ext4                7 M/s    450 K/s    700 K/s
vfat                7 M/s    250 K/s    500 K/s
ubifs-none          9 M/s      7 M/s     12 M/s
ubifs-lzo          18 M/s     10 M/s     16 M/s
---
Conclusions:
Despite the results being obtained on a simulated NAND, some
conclusions can be drawn.
First of all, it is clear that compression in ubifs greatly reduces
write I/O, thus reducing eraseblock wear and improving throughput.
Implementing some kind of compression in ubiblock would be expected
to have a similar impact.
Sequential and random writes cause similar wear on ubifs, since it
is log-structured.
On the other hand, the block-oriented filesystems perform far worse
on the random tests.
This is somewhat expected, given that the ubiblock cache is designed
for sequential access.
Given f2fs is also log-structured, we can hope to obtain some nice
numbers using it.
Interestingly, sequential writes produce comparable wear on ext4,
vfat and ubifs-none.
Again, implementing some kind of compression in ubiblock might
reduce the eraseblock wear numbers.
---
Thoughts?
Thanks,
Ezequiel
-------------- next part --------------
A non-text attachment was scrubbed...
Name: report_ubiblock.tar.gz
Type: application/x-gzip
Size: 3088 bytes
Desc: not available
URL: <http://lists.infradead.org/pipermail/linux-mtd/attachments/20121212/9bb0894f/attachment.gz>