[PATCH] enhanced wear-leveling algorithm based on erase count

Jörn Engel joern at wohnheim.fh-wedel.de
Fri Sep 9 03:33:02 EDT 2005


On Fri, 9 September 2005 12:23:18 +0800, zhao forrest wrote:
> 
> >
> >> 5 Here we use (max_erase_count - average_erase_count) instead
> >> of (max_erase_count - min_erase_count), because min_erase_count
> >> can't be obtained by an O(1) algorithm. So I chose
> >> average_erase_count as an approximation.
> >
> >This could be a problem.  If 990 out of 1000 erase blocks are
> >constantly reused and 10 stay stale, when will the delta exceed the
> >threshold?  In this scenario, your threshold is effectively increased
> >100x.
> >
> >You could use an estimate based on the lowest filled bucket instead.
> 
> You are quite right! I've decided to use the following condition check:
> 
> if (((max_erase_count - average_erase_count)*2 > WL_DELTA) ||
>     ((index_of_highest_filled_bucket - index_of_lowest_filled_bucket) > 1)) {
> 	// Although we don't know the exact minimum erase count, we do
> 	// know here that maximum_erase_count - minimum_erase_count has
> 	// exceeded WL_DELTA.
> 	get_block_from_used_hash_table();
> }
> 
> This way we avoid the case you mentioned above.

Would it be enough to use ((index_of_highest_filled_bucket -
index_of_lowest_filled_bucket) > 1) alone?
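
Something like this, as a sketch.  I'm assuming bucket i holds blocks
with erase counts in [i * WL_DELTA, (i + 1) * WL_DELTA), which is how
I read your patch; if the bucket width differs, the bound changes
accordingly:

static int erase_count_spread_too_large(void)
{
	/*
	 * max_erase_count >= index_of_highest_filled_bucket * WL_DELTA
	 * and min_erase_count < (index_of_lowest_filled_bucket + 1) *
	 * WL_DELTA.  A gap of two or more buckets therefore guarantees
	 * max - min > WL_DELTA, even though min_erase_count itself is
	 * never tracked.
	 */
	return (index_of_highest_filled_bucket
		- index_of_lowest_filled_bucket) > 1;
}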

> >What concerns me is the danger of a "reuse block" storm.  Picking a
> >block from the used_blocks array is quite a latency hit.  If we do
> >this 10x in a row without picking a block from the free_blocks array,
> >people will complain and send bug reports.  Let's see...
> >
> >Most of the time, you pick free blocks.  So max_erase_count will
> >constantly grow and grow faster than average_erase_count.  When the
> >first block hits some magic value, you start picking used blocks
> >instead.  Doing so will leave max_erase_count unchanged and slightly
> >increase average_erase_count.  But the increase is only fractional and
> >doesn't change your condition until the fractions add up to 1.  We
> >have a storm.
> >
> >In this form, the algorithm is not acceptable.
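
To put a number on it: assuming average_erase_count is kept as an
integer total/nr_blocks, then with 1000 erase blocks each used-block
pick raises the total by one, so the average moves by one only every
1000 such picks.  Once the condition triggers, it can stay true for on
the order of a thousand consecutive used-block picks.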
> 
> Yes, this is really a problem. I want to improve the algorithm by
> invoking get_block_from_used_hash_table() with 50% probability and
> using the old WL algorithm with 50% probability when the condition is met.
> 
> The pseudocode may look like this:
> 
> static char flag = 1;
> 
> if ((((max_erase_count - average_erase_count)*2 > WL_DELTA) ||
>     ((index_of_highest_filled_bucket - index_of_lowest_filled_bucket) > 1))
>     && flag) {
> 	get_block_from_used_hash_table();
> 	flag = 0;	// force the old algorithm on the next pick
> } else {
> 	old_WL_algorithm();
> 	flag = 1;	// allow a used-block pick again
> }
> 
> This way we can avoid the storm and still make progress on WL. It's
> true that user requests (create, write, delete) will progress more
> slowly than usual during this period. But at this point we have to
> slightly prefer WL, because the erase_count gap between erase blocks
> has become too large.

I had the same idea:

static int pick_used_block(void)
{
	static atomic_t seqno = ATOMIC_INIT(99);

	if ((index_of_highest_filled_bucket - index_of_lowest_filled_bucket) < 2)
		return 0;

	/* it is time to pick a used block - just not too often */
	if (atomic_inc_return(&seqno) % 100 != 0)
		return 0;

	/* first pick after mount and every 100th pick afterwards */
	return 1;
}
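
On the caller side this would plug in roughly like so.  Sketch only:
get_block_from_free_list() is made up, and I'm assuming
get_block_from_used_hash_table() returns the chosen eraseblock:

static struct jffs2_eraseblock *pick_block(void)
{
	/*
	 * Pay the used-block latency only on the throttled picks.
	 * Everything else takes the cheap free-list path, so no storm
	 * of used-block picks can build up.
	 */
	if (pick_used_block())
		return get_block_from_used_hash_table();
	return get_block_from_free_list();
}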

Jörn

-- 
The wise man seeks everything in himself; the ignorant man tries to get
everything from somebody else.
-- unknown



