[LSF/MM/BPF TOPIC] Memory fragmentation with large block sizes

Hannes Reinecke hare at suse.de
Thu Feb 19 01:54:48 PST 2026


Hi all,

I (together with the Czech Technical University) did some experiments 
trying to measure memory fragmentation with large block sizes.
The testbed was an NVMe host talking to an nvmet storage target
over the network.

Doing so raised some challenges:

- How do you _generate_ memory fragmentation? The MM subsystem is
   specifically designed to avoid it, so you need some way to
   defeat it. With help from Willy I managed to come up with
   something, but I would really like to discuss what the best
   option here would be.
- What is an acceptable level of memory fragmentation? Is it good
   enough if the measured fragmentation does not grow during the
   test runs?
- Do we have better visibility into memory fragmentation than
   just reading /proc/buddyinfo?

And, of course, I would like to present (and discuss) the results
of the test runs done with 4k, 8k, and 16k block sizes.

Not sure if this should be a storage or MM topic; I'll let the
lsf-pc decide.

Cheers,

Hannes
-- 
Dr. Hannes Reinecke                  Kernel Storage Architect
hare at suse.de                                +49 911 74053 688
SUSE Software Solutions GmbH, Frankenstr. 146, 90461 Nürnberg
HRB 36809 (AG Nürnberg), GF: I. Totev, A. McDonald, W. Knoblich

