[RFC PATCH 0/6] Add GPT parser to MTD layer
Davidlohr Bueso
dave at stgolabs.net
Mon Dec 11 16:43:58 PST 2023
On Mon, 11 Dec 2023, Romain Gantois wrote:
>Hello everyone,
>
>MTD devices were historically partitioned using fixed partition schemes
>defined in the kernel device tree or on the cmdline. More recently, a bunch
>of dynamic parsers have been introduced, allowing partitioning information
>to be stored in-band. However, unlike disks, parsers for MTD devices do not
>support runtime discovery of the partition format. This format is instead
>named in the device-tree using a compatible string.
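>
>For example, a fixed layout is described entirely in the flash node, whereas
>a dynamic parser is only named by its compatible string and reads the actual
>layout from the device at probe time (node names and offsets below are just
>illustrative):
>
>	flash@0 {
>		compatible = "jedec,spi-nor";
>		reg = <0>;
>
>		/* static layout, fully described in the device tree */
>		partitions {
>			compatible = "fixed-partitions";
>			#address-cells = <1>;
>			#size-cells = <1>;
>
>			partition@0 {
>				label = "bootloader";
>				reg = <0x0 0x100000>;
>			};
>
>			partition@100000 {
>				label = "rootfs";
>				reg = <0x100000 0xf00000>;
>			};
>		};
>	};
>
>	/* versus, in another flash node, a dynamic format: only the
>	 * parser is named, the partition table itself is stored
>	 * in-band on the flash */
>	partitions {
>		compatible = "redboot-fis";
>		fis-index-block = <0>;
>	};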
>
>The GUID Partition Table is one of the most common ways of partitioning a
>block device. As of now, there is no support in the MTD layer for parsing
>GPT tables. Indeed, use cases for layouts like GPT on raw flash devices are
>rare, and for good reason, since these partitioning schemes are sensitive to
>bad blocks in strategic locations such as LBA 2. Moreover, they do not
>allow proper wear-leveling to be performed on the full span of the device.
>
>However, allowing GPT to be used on MTD devices can be practical in some
>cases. In the context of an A/B OTA upgrade that can act on either NOR or
>eMMC devices, having the same partition table format for both kinds of
>devices can simplify the task of the update software.
>
>This series adds a fully working MTD GPT parser to the kernel. Use of the
>parser is restricted to NOR flash devices, since NAND flashes are too
>susceptible to bad blocks. To ensure coherence and code reuse between
>subsystems, I've factored the device-agnostic code out of the block layer
>GPT parser into a new generic library in lib/gpt.c. No functional
>change is intended in the block layer parser.
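>
>Roughly, the library only needs a way for its caller to read logical blocks,
>so both the block layer and the MTD wrapper can feed it data. A simplified
>sketch of that kind of abstraction (the exact interface is in the lib/gpt.c
>patch and may differ from this):
>
>	/*
>	 * Simplified illustration only -- see the actual patches for the
>	 * real lib/gpt.c interface. The device-specific caller provides a
>	 * block-read callback; the library does the header and entry
>	 * validation (signature, CRC32, backup header) on top of it.
>	 */
>	struct gpt_io {
>		void *priv;		/* caller context: bdev, mtd_info, ... */
>		unsigned int lba_size;	/* logical block size in bytes */
>		int (*read_lba)(void *priv, u64 lba, void *buf, size_t count);
>	};
>
>	/* Look for a valid primary (or backup) GPT and return its entries. */
>	int gpt_find_valid(struct gpt_io *io, u64 last_lba,
>			   gpt_header **gpt, gpt_entry **ptes);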
>
>I understand that this can seem like a strange feature for MTD devices, but
>with the restriction to NOR devices, the partition table can be fairly
>reliable. Moreover, this addition fits nicely into the MTD parser model.
>Please tell me what you think.
I am not a fan of this. The use case seems too hacky and ad hoc to justify
decoupling this from the block layer, not to mention moving complexity out of
userspace and into the kernel (a new parser) for something that is already
being done/worked around. Also, what other user would consume this new GPT
lib abstraction in the future? I don't think it is worth it.
Thanks,
Davidlohr