[PATCH 4/4] UBI: Implement bitrot checking

Boris Brezillon boris.brezillon at free-electrons.com
Sun Apr 12 12:20:50 PDT 2015


On Sun, 12 Apr 2015 19:09:11 +0200
Richard Weinberger <richard at nod.at> wrote:

> Am 12.04.2015 um 19:01 schrieb Boris Brezillon:
> > Hi Richard,
> > 
> > After the 'coding style related'/'useless' comments, now comes a real
> > question related to the approach you've taken :-).
> > 
> > On Sun, 29 Mar 2015 14:13:17 +0200
> > Richard Weinberger <richard at nod.at> wrote:
> > 
> > [...]
> >> +
> >> +/**
> >> + * ubi_wl_trigger_bitrot_check - triggers a re-read of all physical erase
> >> + * blocks.
> >> + * @ubi: UBI device description object
> >> + */
> >> +void ubi_wl_trigger_bitrot_check(struct ubi_device *ubi)
> >> +{
> >> +	int i;
> >> +	struct ubi_wl_entry *e;
> >> +
> >> +	ubi_msg(ubi, "Running a full read check");
> >> +
> >> +	for (i = 0; i < ubi->peb_count; i++) {
> >> +		spin_lock(&ubi->wl_lock);
> >> +		e = ubi->lookuptbl[i];
> >> +		spin_unlock(&ubi->wl_lock);
> >> +		if (e) {
> >> +			atomic_inc(&ubi->bit_rot_work);
> >> +			schedule_bitrot_check(ubi, e);
> >> +		}
> >> +	}
> > 
> > Do we really need to create a ubi_work per PEB ?
> > Couldn't we create a single work being rescheduled inside the worker
> > function (after updating the ubi_wl_entry of course).
> 
> Currently the UBI worker thread handles one PEB per ubi_work. I didn't want
> to break that pattern. The downside of that approach is that we need more memory,
> a few KiB per run.
> 
> I'm not sure I understood your idea. You mean that we schedule one check for
> PEB N and this work then re-schedules a check for PEB N+1?

That's exactly what I meant.

> Using that approach we can save memory, yes. But is it worth the hassle?

Unless I'm missing something, it should be pretty easy to implement:
adding the following lines at the end of bitrot_check_worker() should do
the trick:

	/* Hand the same ubi_work over to the next PEB instead of
	 * allocating one work item per PEB up front. */
	if (e->pnum + 1 < ubi->peb_count) {
		wl_wrk->e = ubi->lookuptbl[e->pnum + 1];
		__schedule_ubi_work(ubi, wl_wrk);
	} else {
		atomic_dec(&ubi->bit_rot_work);
	}
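
If you want to preserve the "skip PEBs without a wl entry" behaviour of
your trigger loop (the if (e) check), the worker could simply walk to the
next non-NULL lookuptbl slot before re-scheduling itself. Rough, untested
sketch, reusing the names from your patch:

	int next = e->pnum + 1;

	/* Find the next PEB that actually has a wl entry and hand the
	 * same ubi_work over to it. */
	spin_lock(&ubi->wl_lock);
	while (next < ubi->peb_count && !ubi->lookuptbl[next])
		next++;
	if (next < ubi->peb_count)
		wl_wrk->e = ubi->lookuptbl[next];
	spin_unlock(&ubi->wl_lock);

	if (next < ubi->peb_count)
		__schedule_ubi_work(ubi, wl_wrk);
	else
		atomic_dec(&ubi->bit_rot_work);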

> I'd like to avoid works which in turn schedule other works.
> With the current approach it is clear where the work is scheduled and how much.

Yes, but the memory consumption induced by this approach can be pretty
big on modern NAND chips: on 32-bit platforms, ubi_work is 32 bytes,
and on modern NANDs you often have 4096 blocks, so a UBI device of
4000 blocks is pretty common => 4000 * 32 = 128000 bytes, i.e. ~125 KiB.

For standard wear-leveling requests, using a ubi_work per request is
sensible since you can't know in advance which block will be queued for
wear-leveling next.
In your case, you're scanning all blocks in ascending order, which
makes it a good candidate for this 'one work for all bitrot checks'
approach.
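
Just to make that concrete, the trigger function would then only need to
schedule the first work and let the worker iterate over the remaining
PEBs itself. Something along these lines (untested, based on the names
used in your patch):

void ubi_wl_trigger_bitrot_check(struct ubi_device *ubi)
{
	int i;
	struct ubi_wl_entry *e = NULL;

	ubi_msg(ubi, "Running a full read check");

	/* Schedule a single work for the first PEB that has a wl entry;
	 * the worker then re-schedules itself for the following PEBs. */
	spin_lock(&ubi->wl_lock);
	for (i = 0; i < ubi->peb_count; i++) {
		e = ubi->lookuptbl[i];
		if (e)
			break;
	}
	spin_unlock(&ubi->wl_lock);

	if (e) {
		atomic_inc(&ubi->bit_rot_work);
		schedule_bitrot_check(ubi, e);
	}
}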



-- 
Boris Brezillon, Free Electrons
Embedded Linux and Kernel engineering
http://free-electrons.com


