[PATCH] enhanced wear-leveling algorithm based on erase count

zhao forrest zhao_fusheng at hotmail.com
Fri Sep 9 05:17:10 EDT 2005


> >
> > You are quite right! I've decided to use the following condition check:
> >
> > if (((max_erase_count - average_erase_count) * 2 > WL_DELTA) ||
> >     ((index_of_highest_filled_bucket - index_of_lowest_filled_bucket) > 1))
> > // although we don't know the exact minimum erase count, here we do
> > // know that maximum_erase_count - minimum_erase_count has exceeded
> > // WL_DELTA
> > {
> >    get_block_from_used_hash_table();
> > }
> >
> > This way it avoids the case you mentioned above.
>
>Would it be enough to use ((index_of_highest_filled_bucket -
>index_of_lowest_filled_bucket) > 1) alone?

Yes, I won't use the average erase count any more.
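
If, as the comment in my earlier code implies, each bucket of the hash
table covers a WL_DELTA-wide range of erase counts, then a spread of more
than one bucket index already guarantees that maximum_erase_count -
minimum_erase_count exceeds WL_DELTA, even without tracking the exact
minimum. So the trigger reduces to something like this (just a sketch,
reusing the names from above):

static int wl_gap_exceeds_delta(void)
{
	/* highest and lowest non-empty buckets more than one index apart
	 * implies an erase count gap larger than WL_DELTA, so no need for
	 * the average or the exact minimum */
	return (index_of_highest_filled_bucket -
		index_of_lowest_filled_bucket) > 1;
}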

> >
> > This way we can avoid the storm and still adjust WL. It's true
> > that user requests (create, write, delete) will then progress more
> > slowly than usual, but at that point we have to slightly prefer WL
> > because the erase_count gap between erase blocks has become too
> > large.
>
>I had the same idea:
>
>static int pick_used_block(void)
>{
>	/* start at 99 so the very first increment hits a multiple of 100 */
>	static atomic_t seqno = ATOMIC_INIT(99);
>
>	if ((index_of_highest_filled_bucket - index_of_lowest_filled_bucket) < 2)
>		return 0;
>
>	/* it is time to pick a used block - just not too often */
>	if (atomic_inc_return(&seqno) % 100)
>		return 0;
>
>	/* first pick after mount and every 100th pick afterwards */
>	return 1;
>}
>

But it'll pick a block from used_block_hash_table with only 1% probability.
Do you think 1% is too small? How about 10% or 20%?
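
If we want to experiment with that, the frequency could be made a tunable
constant instead of the hard-coded 100. Something like this (just a sketch;
WL_PICK_INTERVAL is a name I'm making up here: 100 gives the 1% above,
10 would give 10%):

#define WL_PICK_INTERVAL 10

static int pick_used_block(void)
{
	/* start just below the interval so the first call after mount
	 * already picks a used block */
	static atomic_t seqno = ATOMIC_INIT(WL_PICK_INTERVAL - 1);

	if ((index_of_highest_filled_bucket - index_of_lowest_filled_bucket) < 2)
		return 0;

	/* pick a used block only on every WL_PICK_INTERVAL-th call */
	if (atomic_inc_return(&seqno) % WL_PICK_INTERVAL)
		return 0;

	return 1;
}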


Thanks,
Forrest