UBI/UBIFS: dealing with MLC's paired pages

Artem Bityutskiy dedekind1 at gmail.com
Fri Oct 30 03:09:25 PDT 2015


On Fri, 2015-10-30 at 10:45 +0100, Boris Brezillon wrote:
> On Fri, 30 Oct 2015 11:08:10 +0200
> Artem Bityutskiy <dedekind1 at gmail.com> wrote:
> 
> > On Fri, 2015-10-30 at 09:15 +0100, Boris Brezillon wrote:
> > > Hi Artem,
> > > 
> > > Don't take the following answer as a try to teach you how
> > > UBI/UBIFS
> > > work
> > > or should work with MLC NANDs. I still listen to your
> > > suggestions,
> > > but
> > > when I had a look at how this "skip pages on demand" approach
> > > could
> > > be implemented I realized it was not so simple.
> > 
> > Sure.
> > 
> > Could you verify my understanding please.
> > 
> > You realized that "skip on demand" is not easy, and you suggest
> > that we
> > simply write all the data twice - first time we skip pages, and
> > then we
> > garbage collect everything. At the end, roughly speaking, we trade
> > off
> > half of the IO speed, power, and NAND lifetime.

So I guess the answer is generally "yes", right? I just want to be
clear about the trade-off.

> That will be pretty much the same with the "skip on demand" approach,
> because you'll probably lose a lot of space when syncing the wbuf.

The write buffer is designed to optimize space usage. Instead of
wasting the rest of a NAND page, we wait for more data to arrive and
put it into the same NAND page as the previous piece of data.
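To make the coalescing idea concrete, here is a minimal C sketch of a
write buffer that accumulates small writes and only programs a NAND
page once it is full. The struct and function names are purely
illustrative - they are not the real UBIFS `ubifs_wbuf` code, just a
model of the behaviour described above:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Hypothetical, simplified wbuf model; sizes and names are
 * illustrative, not the actual UBIFS structures. */
#define PAGE_SIZE 2048

struct wbuf {
	unsigned char buf[PAGE_SIZE];
	size_t used;	/* bytes accumulated, not yet on NAND */
	int flushes;	/* NAND pages programmed so far */
};

/* Pretend to program one NAND page; here we only count it. */
static void nand_program_page(struct wbuf *w)
{
	w->flushes++;
	w->used = 0;
}

/* Append data; a page is programmed only when it is full, so
 * consecutive small writes share one NAND page instead of each
 * wasting the remainder of a page. */
static void wbuf_write(struct wbuf *w, const void *data, size_t len)
{
	const unsigned char *p = data;

	while (len > 0) {
		size_t room = PAGE_SIZE - w->used;
		size_t n = len < room ? len : room;

		memcpy(w->buf + w->used, p, n);
		w->used += n;
		p += n;
		len -= n;

		if (w->used == PAGE_SIZE)
			nand_program_page(w);
	}
}

/* Forced sync (journal commit, fsync(), wbuf timer): the partially
 * filled page must go out, wasting the rest of that page. */
static void wbuf_sync(struct wbuf *w)
{
	if (w->used > 0)
		nand_program_page(w);
}
```

Four 512-byte writes end up in a single 2048-byte page program; a
sync after a 100-byte write costs a whole page on its own.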

This implies that we do not sync it too often, or at least that effort
was made to avoid doing so.

Off the top of my head, we sync the write-buffer (AKA wbuf) in these
cases:
1. Journal commit, which happens once in a while, depending on the
journal size.
2. User-initiated sync: fsync(), sync(), remount, etc.
3. Write-buffer timer, which fires when there have been no writes
within a certain interval, e.g. 5 seconds. The timeout can be tuned.
4. Other situations like the end of GC, etc. - these are related to
metadata management.

Now, imagine you are writing a lot of data: uncompressing a big
tarball, compressing one, or just backing up your /home. In this
situation you have a continuous flow of data from the VFS to UBIFS.

UBIFS will keep writing the data to the journal, and there won't be any
wbuf syncs. The syncs will happen only on journal commit. So you end up
with LEBs full of data and not requiring any GC.

But yes, if we are talking about, say, an idle system, which
occasionally writes something, there will be a wbuf sync after every
write.

So in the "I need all your capacity" kind of situation, where IO speed
matters and a lot of data is written, we'd be optimal - no double
writes.

In the "I am mostly idle" type of situation we'll do double writes.
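A back-of-the-envelope comparison of the two workloads, with
illustrative numbers (2048-byte pages, 100 writes of 200 bytes each;
these figures are assumptions for the sake of the example, not
measurements):

```c
#include <assert.h>

#define PAGE 2048	/* illustrative NAND page size */

/* Streaming workload: small writes are coalesced in the wbuf, so
 * only full pages are programmed (plus one final partial page). */
static int pages_streaming(int nwrites, int wsize)
{
	int total = nwrites * wsize;

	return (total + PAGE - 1) / PAGE;	/* ceil(total / PAGE) */
}

/* Mostly-idle workload: the wbuf timer syncs after every write, so
 * each write costs a whole page program regardless of its size. */
static int pages_idle(int nwrites)
{
	return nwrites;
}
```

With these numbers the idle workload programs 100 pages where the
streaming one programs 10 - a 10x difference in page programs for the
same amount of user data, which is the space/lifetime cost being
discussed here.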

SIGLUNCH, colleagues waiting, sorry, I guess I wrote enough :-)

> A given LEB can only be in secure or unsecure mode, but a UBI volume
> can expose both unsecure and secure LEBs, and those LEBs have
> different
> sizes.
> The secure/unsecure mode is chosen when mapping the LEB, and the LEB
> stays in this mode until it's unmapped.

This is not going to be a small value-add to UBI; this is going to be
a big change, in my opinion. If UBIFS ends up using this, it may be
worth the effort. Otherwise, I'd argue that this would need an
important customer to be worth the effort.



