Good stress test for UBIFS?
david at protonic.nl
Thu Jan 7 05:38:52 EST 2010
> On Wed, 2010-01-06 at 11:59 +0100, David Jander wrote:
>> On Monday 28 December 2009 11:40:43 am Adrian Hunter wrote:
>> >[...]
>> > Generally we test with debugging checks turned on because they will
>> > spot an error the instant it happens. On the other hand, you must also
>> > test the actual configuration you will deploy.
>> >
>> > There are two approaches. We use both of them. They are:
>> >
>> > 1. Set up a desktop machine with your kernel and test on nandsim. This
>> > has the advantage that it can do very many more operations than a small
>> > device.
>> >
>> > You can simulate power-off-recovery by using UBIFS "failure mode". Set
>> > UBIFS debugging module parameter debug_tsts to 4. There is a script I
>> > have used for that below.
>>
>> Yes, but I did not consider this option, because it is a completely
>> different processor architecture (little-endian vs. big-endian), and it
>> won't test the hardware driver, nor the NAND chip and interface, which
>> could also be (part of) the problem. Here I am trying to reproduce a
>> situation that has already occurred a few times in "real life", and I
>> need to be sure it won't ever happen again with the latest UBI/UBIFS.
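
(For completeness, the nandsim route boils down to something like the
following -- an untested sketch on my part, with example ID bytes for a
256MiB chip and a made-up volume name, not Adrian's actual script:

  modprobe nandsim first_id_byte=0x20 second_id_byte=0xaa \
                   third_id_byte=0x00 fourth_id_byte=0x15  # emulated NAND becomes mtd0
  modprobe ubi mtd=0                               # attach UBI to the simulated chip
  ubimkvol /dev/ubi0 -N test -m                    # one volume using all available space
  modprobe ubifs debug_tsts=4                      # failure mode; needs UBIFS debugging
                                                   # support compiled into the kernel
  mkdir -p /mnt/ubifs
  mount -t ubifs ubi0:test /mnt/ubifs

With failure mode enabled, UBIFS injects emulated power cuts on its own, so
recovery can be exercised without touching real hardware.)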
>>
>> > 2. Run tests on the device. There are tests in mtd-utils/tests/fs-tests,
>> > but LTP's fsstress is good for stressing the file system during
>> > power-off-recovery testing.
>>
>> Thanks a lot. I will try fsstress.
>> I had already written my own test script, mimicking some suspicious
>> scenarios, in the hope it would reproduce what had happened on three of
>> our boards (corrupt fs), and it eventually succeeded, but it took several
>> days of running. Now I am re-running the test with the latest UBI/UBIFS
>> to see if the problem is gone. Hopefully fsstress will yield results more
>> quickly.
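
(Roughly what I have in mind, just as an illustration -- the directory,
process count and operation count are arbitrary, so check the fsstress help
in your LTP copy for the exact options it supports:

  mkdir -p /mnt/ubifs/stress
  fsstress -d /mnt/ubifs/stress -p 4 -n 100000 -s 1234  # 4 workers, fixed seed
  # ... power cut happens here (real, or via UBIFS failure mode), then:
  mount -t ubifs ubi0:test /mnt/ubifs                   # remount; recovery runs now
  ls -lR /mnt/ubifs/stress                              # sanity-check the contents

The fixed seed at least makes it possible to repeat a run that triggered a
problem.)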
>
> Also, take a look at the MTD tests:
>
> http://www.linux-mtd.infradead.org/doc/general.html#L_mtd_tests
>
> E.g., by running the torture test for a week we once found a very rare
> and subtle problem in our OneNAND driver related to the DMA transfers.
> It is also nice to run this test for a few months and see how your NAND
> HW behaves when you try to wear a few of its blocks out. And if you
> manage to create real faulty blocks, you can test how UBI handles them.
Yes, I have done that. I saw blocks go bad, and when ubinizing, they were
marked by UBI immediately. Very cool indeed!
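
(For reference, the torture test is just a kernel module; roughly -- module
and parameter names may differ between kernel versions, so check
drivers/mtd/tests in your tree:

  modprobe mtd_torturetest dev=0   # repeatedly erase/write/verify eraseblocks of mtd0
  dmesg | tail                     # progress and results go to the kernel log

Since everything is reported through the kernel log, it is easy to leave it
running unattended for weeks.)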
Best regards,
--
David Jander