[PATCH v4 0/6] lib/base64: add generic encoder/decoder, migrate users

Andrew Morton akpm at linux-foundation.org
Fri Oct 31 21:09:47 PDT 2025


On Wed, 29 Oct 2025 18:17:25 +0800 Guan-Chun Wu <409411716 at gms.tku.edu.tw> wrote:

> This series introduces a generic Base64 encoder/decoder to the kernel
> library, eliminating duplicated implementations and delivering significant
> performance improvements.
> 
> The Base64 API has been extended to support multiple variants (Standard,
> URL-safe, and IMAP) as defined in RFC 4648 and RFC 3501. The API now takes
> a variant parameter and an option to control padding. As part of this
> series, users are migrated to the new interface while preserving their
> specific formats: fscrypt now uses BASE64_URLSAFE, Ceph uses BASE64_IMAP,
> and NVMe is updated to BASE64_STD.
> 
> On the encoder side, the implementation processes input in 3-byte blocks,
> mapping 24 bits directly to 4 output symbols. This avoids bit-by-bit
> streaming and reduces loop overhead, achieving about a 2.7x speedup compared
> to previous implementations.
> 
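The 3-byte-block scheme described above can be illustrated with a small userspace sketch (hypothetical code, not the kernel implementation; the name b64_encode and the standalone alphabet table are mine):

```c
#include <stddef.h>
#include <stdint.h>

static const char b64_std[] =
	"ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/";

/*
 * Encode src[0..len) into dst, returning the number of characters
 * written.  Full 3-byte blocks are packed into a 24-bit value and
 * sliced into four 6-bit indices; a 1- or 2-byte tail is '='-padded.
 */
static size_t b64_encode(const uint8_t *src, size_t len, char *dst)
{
	size_t i = 0, o = 0;

	for (; i + 3 <= len; i += 3) {
		uint32_t v = (src[i] << 16) | (src[i + 1] << 8) | src[i + 2];

		dst[o++] = b64_std[(v >> 18) & 0x3f];
		dst[o++] = b64_std[(v >> 12) & 0x3f];
		dst[o++] = b64_std[(v >> 6) & 0x3f];
		dst[o++] = b64_std[v & 0x3f];
	}
	if (i < len) {		/* 1 or 2 trailing bytes */
		uint32_t v = src[i] << 16;

		if (i + 1 < len)
			v |= src[i + 1] << 8;
		dst[o++] = b64_std[(v >> 18) & 0x3f];
		dst[o++] = b64_std[(v >> 12) & 0x3f];
		dst[o++] = (i + 1 < len) ? b64_std[(v >> 6) & 0x3f] : '=';
		dst[o++] = '=';
	}
	dst[o] = '\0';
	return o;
}
```

The inner loop does one table lookup per output symbol and no per-bit work, which is where the speedup over character-at-a-time streaming comes from.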
> On the decoder side, the implementation replaces strchr() lookups with
> per-variant reverse tables and processes input in 4-character groups.
> Each group is mapped to numeric values and combined into 3 output bytes.
> Padded and unpadded forms are validated explicitly, rejecting invalid
> '=' usage and enforcing tail rules.
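A decoder along those lines might look like the following sketch (illustrative only, handling just the padded standard alphabet; b64_rev_init and b64_decode are invented names, and the kernel's per-variant tables and unpadded-tail handling are more involved):

```c
#include <stddef.h>
#include <stdint.h>
#include <sys/types.h>

static const char b64_std[] =
	"ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/";

static int8_t b64_rev[256];

/* Build the reverse table once: symbol -> 6-bit value, -1 = invalid. */
static void b64_rev_init(void)
{
	int i;

	for (i = 0; i < 256; i++)
		b64_rev[i] = -1;
	for (i = 0; i < 64; i++)
		b64_rev[(unsigned char)b64_std[i]] = i;
}

/*
 * Decode a padded Base64 string in 4-character groups; returns the
 * decoded length, or -1 on invalid input (bad symbol, misplaced '=',
 * or a length that is not a multiple of 4).
 */
static ssize_t b64_decode(const char *src, size_t len, uint8_t *dst)
{
	size_t i, o = 0;

	if (len % 4)
		return -1;
	for (i = 0; i < len; i += 4) {
		uint32_t v = 0;
		int j, pad = 0;

		for (j = 0; j < 4; j++) {
			unsigned char c = src[i + j];

			if (c == '=') {
				/* '=' only in the last two slots of the
				 * final group */
				if (i + 4 != len || j < 2)
					return -1;
				pad++;
				v <<= 6;
			} else {
				if (pad || b64_rev[c] < 0)
					return -1;
				v = (v << 6) | b64_rev[c];
			}
		}
		dst[o++] = v >> 16;
		if (pad < 2)
			dst[o++] = (v >> 8) & 0xff;
		if (pad < 1)
			dst[o++] = v & 0xff;
	}
	return o;
}
```

Each group costs four table lookups and a few shifts, versus four strchr() scans of the 64-byte alphabet in the old scheme, which is consistent with the large decode speedups quoted below.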

Looks like wonderful work, thanks.  And it's good to gain a selftest
for this code.

> This improves throughput by ~43-52x.

Well that isn't a thing we see every day.

: Encode:
:   64B   ~90ns   -> ~32ns   (~2.8x)
:   1KB  ~1332ns  -> ~510ns  (~2.6x)
: 
: Decode:
:   64B  ~1530ns  -> ~35ns   (~43.7x)
:   1KB ~27726ns  -> ~530ns  (~52.3x)


: This change also improves performance: encoding is about 2.7x faster and
: decoding achieves 43-52x speedups compared to the previous implementation.


Do any of these callers spend a sufficient amount of time in this
encoder/decoder for the above improvements to be observable/useful?


I'll add the series to mm.git's mm-nonmm-unstable branch to give it
linux-next exposure.  I ask the NVMe, ceph and fscrypt teams to check
the code and give it a test in the next few weeks, thanks.  



