UBIFS assert failed in ubifs_set_page_dirty at 1421
Jijiagang
jijiagang at hisilicon.com
Thu Nov 6 00:28:24 PST 2014
Dear all,
We found that this problem is related to MM page migration: if migration is disabled, the assertion does not trigger.
The call sequence is:
alloc_contig_range (mm/page_alloc.c)
 -> __alloc_contig_migrate_range (mm/page_alloc.c)
 -> migrate_pages (mm/migrate.c)
 -> try_to_unmap (mm/rmap.c)
 -> try_to_unmap_file (mm/rmap.c)
 -> try_to_unmap_one (mm/rmap.c)
 -> set_page_dirty (mm/page-writeback.c)
 -> ubifs_set_page_dirty (fs/ubifs/file.c)
ubifs_set_page_dirty() is provided by UBIFS, but here it is called from outside UBIFS. Migration only moves the file page to another page in memory, not on flash, so there should be no need to budget for it.
The questions are:
1. Does UBIFS support page migration or not?
2. Or is MM not doing the right thing?
Could you please help us solve this? Any reply would be appreciated. Thanks!
Best Regards.
> -----Original Message-----
> From: Artem Bityutskiy [mailto:dedekind1 at gmail.com]
> Sent: Monday, October 20, 2014 9:12 PM
> To: Caizhiyong; linux-fsdevel at vger.kernel.org; linux-mm at kvack.org
> Cc: Jijiagang; adrian.hunter at intel.com; linux-mtd at lists.infradead.org; Wanli
> (welly)
> Subject: Re: UBIFS assert failed in ubifs_set_page_dirty at 1421
>
> Hi,
>
> first of all, what is your architecture? ARM? And how easily can you reproduce
> this? And can you try a kernel newer than 3.10?
>
> And for fs-devel and mm people, here is the link to the original report:
> http://lists.infradead.org/pipermail/linux-mtd/2014-October/055930.html
>
> On Mon, 2014-10-20 at 12:01 +0000, Caizhiyong wrote:
> > Here is part of the log, linux version 3.10:
> > cache 16240kB is below limit 16384kB for oom_score_adj 529
> > Free memory is -1820kB above reserved
> > lowmemorykiller: Killing '.networkupgrade' (6924), adj 705,
> > to free 20968kB on behalf of 'kswapd0' (543) because
> > cache 16240kB is below limit 16384kB for oom_score_adj 529
> > Free memory is -2192kB above reserved
>
> OK, no memory and OOM starts. So your system is in trouble anyway :-)
>
> > UBIFS assert failed in ubifs_set_page_dirty at 1421 (pid 543)
>
> UBIFS complains here that someone marks a page as dirty "directly", not
> through one of the UBIFS functions. And that someone is the page reclaim path.
>
> Now, I do not really know what is going on here, so I am CCing a couple of
> mailing lists; maybe someone will help.
>
> Here is what I see is going on.
>
> 1. UBIFS wants to make sure that no one marks UBIFS-backed pages (and
> actually inodes too) as dirty directly. UBIFS wants everyone to ask UBIFS to mark
> a page as dirty.
>
> 2. This is because for every dirty page, UBIFS needs to reserve a certain
> amount of space on the flash media, because all writes are out-of-place, even
> when you are changing an existing file.
>
> 3. There are exactly 2 places where UBIFS-backed pages may be marked as
> dirty:
>
> a) ubifs_write_end() [->write_end] - the file write path
> b) ubifs_page_mkwrite() [->page_mkwrite] - the file mmap() path
>
> 4. If anything calls 'ubifs_set_page_dirty()' directly (not through
> write_end()/mkwrite()), and the page was not dirty, UBIFS will complain with
> the assertion that you see.
>
> > CPU: 3 PID: 543 Comm: kswapd0 Tainted: P O 3.10.0_s40 #1
> > [<8001d8a0>] (unwind_backtrace+0x0/0x108) from [<80019f44>]
> > (show_stack+0x20/0x24) [<80019f44>] (show_stack+0x20/0x24) from
> > [<80af2ef8>] (dump_stack+0x24/0x2c) [<80af2ef8>]
> > (dump_stack+0x24/0x2c) from [<80297234>]
> > (ubifs_set_page_dirty+0x54/0x5c) [<80297234>]
> > (ubifs_set_page_dirty+0x54/0x5c) from [<800cea60>]
> > (set_page_dirty+0x50/0x78) [<800cea60>] (set_page_dirty+0x50/0x78)
> > from [<800f4be4>] (try_to_unmap_one+0x1f8/0x3d0) [<800f4be4>]
> > (try_to_unmap_one+0x1f8/0x3d0) from [<800f4f44>]
> > (try_to_unmap_file+0x9c/0x740) [<800f4f44>]
> > (try_to_unmap_file+0x9c/0x740) from [<800f5678>]
> > (try_to_unmap+0x40/0x78) [<800f5678>] (try_to_unmap+0x40/0x78) from
> > [<800d6a04>] (shrink_page_list+0x23c/0x884) [<800d6a04>]
> > (shrink_page_list+0x23c/0x884) from [<800d76c8>]
> > (shrink_inactive_list+0x21c/0x3c8)
> > [<800d76c8>] (shrink_inactive_list+0x21c/0x3c8) from [<800d7c20>]
> > (shrink_lruvec+0x3ac/0x524) [<800d7c20>] (shrink_lruvec+0x3ac/0x524)
> > from [<800d8970>] (kswapd+0x854/0xdc0) [<800d8970>]
> > (kswapd+0x854/0xdc0) from [<80051e28>] (kthread+0xc8/0xcc)
> > [<80051e28>] (kthread+0xc8/0xcc) from [<80015198>]
> > (ret_from_fork+0x14/0x20)
>
>
> So the reclaim path seems to be marking UBIFS-backed pages as dirty directly. I
> do not know why; the reclaim path is extremely complex and I am no expert
> there. But maybe someone on the MM list can help.
>
> Note, this warning is not necessarily fatal. It just indicates that UBIFS sees
> something which it believes should not happen.
>
> > UBIFS assert failed in do_writepage at 936 (pid 543)
> > CPU: 1 PID: 543 Comm: kswapd0 Tainted: P O 3.10.0_s40 #1
> > [<8001d8a0>] (unwind_backtrace+0x0/0x108) from [<80019f44>]
> > (show_stack+0x20/0x24) [<80019f44>] (show_stack+0x20/0x24) from
> > [<80af2ef8>] (dump_stack+0x24/0x2c) [<80af2ef8>]
> > (dump_stack+0x24/0x2c) from [<802990b8>] (do_writepage+0x1b8/0x1c4)
> > [<802990b8>] (do_writepage+0x1b8/0x1c4) from [<802991e8>]
> > (ubifs_writepage+0x124/0x1dc) [<802991e8>]
> > (ubifs_writepage+0x124/0x1dc) from [<800d6eb8>]
> > (shrink_page_list+0x6f0/0x884) [<800d6eb8>]
> > (shrink_page_list+0x6f0/0x884) from [<800d76c8>]
> > (shrink_inactive_list+0x21c/0x3c8)
> > [<800d76c8>] (shrink_inactive_list+0x21c/0x3c8) from [<800d7c20>]
> > (shrink_lruvec+0x3ac/0x524) [<800d7c20>] (shrink_lruvec+0x3ac/0x524)
> > from [<800d8970>] (kswapd+0x854/0xdc0) [<800d8970>]
> > (kswapd+0x854/0xdc0) from [<80051e28>] (kthread+0xc8/0xcc)
> > [<80051e28>] (kthread+0xc8/0xcc) from [<80015198>]
> > (ret_from_fork+0x14/0x20)
>
> And here UBIFS sees a page being written, but no budget was allocated for
> it, so the write may fail with -ENOSPC (no space), which is not supposed to ever
> happen.
>
> This is not necessarily fatal either, but indicates that UBIFS's assumptions about
> how the system functions are wrong.
>
> Now the question is: does UBIFS have incorrect assumptions, or is it the
> Linux MM which is not doing the right thing? I do not know the answer; let's
> see if the MM list can give us a clue.
>
> Thanks!