[PATCH 0/3] UBIFS: xattr deletion rework

Stefan Agner stefan at agner.ch
Thu Apr 25 07:51:45 PDT 2019


On 05.04.2019 00:34, Richard Weinberger wrote:
> UBIFS handles xattrs in most cases just like files.
> An inode can have child entries, which are inodes for xattrs.
> If an inode with xattrs is unlinked, only the hosting inode is
> referenced in the journal and the orphan list. Upon recovery
> all xattrs will be looked up from the TNC and also deleted.
> 
> This works in theory, but not in practice. The problem is that
> in many places UBIFS internally assumes that a directory inode
> can only be deleted if the directory is empty. Since xattr
> hosting inodes are treated like directories but you can unlink
> such an inode before all xattrs are gone, this assumption is violated.
> Therefore it can happen that the garbage collector consumes a LEB
> which hosts the information about xattr inodes because the host inode
> itself got unlinked. Upon recovery UBIFS is no longer able to
> locate these inodes and the free space accounting can get confused.
> This can lead to all kind of filesystem corruptions.
> 
> The solution is to log every inode in the journal upon unlink.
> This approach has one downside: we need to lower the number of allowed
> xattrs per inode.
> With these changes applied UBIFS still supports dozens of xattrs per
> inode.
> 
> Hunting this issue down was anything but easy.
> I'd like to thank Toradex AG for supporting this bug hunt.
> Special thanks to Stefan Agner for his constant support and for testing
> my debug patches over and over.

Thanks Richard for working on that!

I applied the patches on v5.1-rc4 and tested it using our Colibri VF61
(vf610 NAND driver).

I continuously booted and power-cut 7 modules every ~30 seconds, 24/7,
for the last two weeks (the same test setup where we previously saw
issues: a rootfs with systemd, which makes use of xattrs). After ~380k
cumulative boots and power cuts I haven't seen any UBI issues! So:

Tested-by: Stefan Agner <stefan at agner.ch>

For reference, these are the types of issues we saw (this was on 4.18):

[    2.271180] ubi0: default fastmap pool size: 50
[    2.285825] ubi0: default fastmap WL pool size: 25
[    2.300231] ubi0: attaching mtd3
[    2.316948] random: fast init done
[    2.391213] ubi0: attached by fastmap
[    2.403786] ubi0: fastmap pool size: 50
[    2.416361] ubi0: fastmap WL pool size: 25
[    2.440920] ubi0: attached mtd3 (name "ubi", size 126 MiB)
[    2.455232] ubi0: PEB size: 131072 bytes (128 KiB), LEB size: 126976 bytes
[    2.471059] ubi0: min./max. I/O unit sizes: 2048/2048, sub-page size 2048
[    2.486807] ubi0: VID header offset: 2048 (aligned 2048), data offset: 4096
[    2.502916] ubi0: good PEBs: 1000, bad PEBs: 8, corrupted PEBs: 0
[    2.518260] ubi0: user volume: 1, internal volumes: 1, max. volumes count: 128
[    2.543867] ubi0: max/mean erase counter: 597/62, WL threshold: 4096, image sequence number: 1260750483
[    2.572852] ubi0: available PEBs: 0, total reserved PEBs: 1000, PEBs reserved for bad PEB handling: 12
[    2.602765] ubi0: background thread "ubi_bgt0d" started, PID 66
[    2.621396] rtc-ds1307 0-0068: hctosys: unable to read the hardware clock
[    2.640776] ALSA device list:
[    2.654551]   No soundcards found.
[    2.672739] UBIFS (ubi0:0): background thread "ubifs_bgt0_0" started, PID 67
[    2.723749] UBIFS (ubi0:0): recovery needed
[    3.045383] UBIFS error (ubi0:0 pid 1): ubifs_get_pnode.part.4: error -22 reading pnode at 8:43200

--
Stefan

> 
> Richard Weinberger (3):
>   ubifs: journal: Handle xattrs like files
>   ubifs: orphan: Handle xattrs like files
>   ubifs: Limit number of xattrs per inode
> 
>  fs/ubifs/dir.c     |  15 +++-
>  fs/ubifs/journal.c |  72 ++++++++++++++++---
>  fs/ubifs/misc.h    |   8 +++
>  fs/ubifs/orphan.c  | 208 ++++++++++++++++++++++++++++++++++++-----------------
>  fs/ubifs/super.c   |   2 +
>  fs/ubifs/ubifs.h   |   4 ++
>  fs/ubifs/xattr.c   |  71 ++++++++++++++++--
>  7 files changed, 294 insertions(+), 86 deletions(-)


