[PATCH 0/1] nvmet: add basic in-memory backend support
Chaitanya Kulkarni
chaitanyak at nvidia.com
Wed Nov 5 17:02:08 PST 2025
Hannes and Christoph,
On 11/5/25 05:14, hch at lst.de wrote:
> But what is the use that requires removing all that overhead / indirection?
>
> I think you need to describe that very clearly to make a case. And
> maybe drop a lot of the marketing sounding overly dramatatic language
> that really does not help the case.
Here is quantitative data showing that removing the overhead/indirection
gives nvmet-mem-backend better performance than memory-backed null_blk
and brd (see [1] for the raw data).
#####################################################################
SUMMARY
#####################################################################
Note: All values are averages of 3 test iterations per category.
=====================================================================
Summary of Dataset 1: perf-results-20251105-102906 (48 FIO Jobs, 5GB)
=====================================================================

* nvmet-mem Performance vs null_blk:
-------------------------------------------------------------------
Metric      Test       null_blk      nvmet-mem    nvmet-mem % +/-
-------------------------------------------------------------------
IOPS        randread   559,828.91    638,364.83   +14.03%
IOPS        randwrite  563,724.18    617,446.95    +9.53%
BW (MiB/s)  randread     2,186.83      2,493.61   +14.03%
BW (MiB/s)  randwrite    2,202.05      2,411.90    +9.53%

* nvmet-mem Performance vs BRD:
-------------------------------------------------------------------
Metric      Test       BRD           nvmet-mem    nvmet-mem % +/-
-------------------------------------------------------------------
IOPS        randread   572,101.88    638,364.83   +11.58%
IOPS        randwrite  590,480.73    617,446.95    +4.57%
BW (MiB/s)  randread     2,234.77      2,493.61   +11.58%
BW (MiB/s)  randwrite    2,306.57      2,411.90    +4.57%
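The averages and percentage columns can be cross-checked mechanically;
a minimal sketch, using the Dataset 1 randread IOPS iterations from the
raw data in [1]:

```python
# Cross-check Dataset 1 randread IOPS: average the three fio iterations
# per backend and compute the relative delta vs. the null_blk baseline.
null_blk = [560901.51, 558407.41, 560177.82]
nvmet_mem = [595460.64, 653900.97, 665732.88]

def avg(xs):
    return sum(xs) / len(xs)

base = avg(null_blk)      # 559,828.91
mem = avg(nvmet_mem)      # 638,364.83
diff_pct = (mem - base) / base * 100.0

print(f"null_blk={base:,.2f} nvmet-mem={mem:,.2f} diff={diff_pct:+.2f}%")
# null_blk=559,828.91 nvmet-mem=638,364.83 diff=+14.03%
```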
=====================================================================
Summary of Dataset 2: perf-results-20251105-120239 (48 FIO Jobs, 5GB)
=====================================================================

* nvmet-mem Performance vs null_blk:
-------------------------------------------------------------------
Metric      Test       null_blk      nvmet-mem    nvmet-mem % +/-
-------------------------------------------------------------------
IOPS        randread   556,310.23    612,604.62   +10.12%
IOPS        randwrite  558,665.03    609,816.62    +9.16%
BW (MiB/s)  randread     2,173.09      2,392.99   +10.12%
BW (MiB/s)  randwrite    2,182.29      2,382.10    +9.16%

* nvmet-mem Performance vs BRD:
-------------------------------------------------------------------
Metric      Test       BRD           nvmet-mem    nvmet-mem % +/-
-------------------------------------------------------------------
IOPS        randread   576,684.10    612,604.62    +6.23%
IOPS        randwrite  564,228.76    609,816.62    +8.08%
BW (MiB/s)  randread     2,252.67      2,392.99    +6.23%
BW (MiB/s)  randwrite    2,204.02      2,382.10    +8.08%
=====================================================================
Summary of Dataset 3: perf-results-20251105-160213 (48 FIO Jobs, 5GB)
=====================================================================

* nvmet-mem Performance vs null_blk:
--------------------------------------------------------------------
Metric      Test       null_blk      nvmet-mem    nvmet-mem % +/-
--------------------------------------------------------------------
IOPS        randread   556,333.33    619,666.67   +11.38%
IOPS        randwrite  561,333.33    623,000.00   +10.99%
BW (MiB/s)  randread     2,173.00      2,420.33   +11.38%
BW (MiB/s)  randwrite    2,192.00      2,432.33   +10.96%

* nvmet-mem Performance vs BRD:
--------------------------------------------------------------------
Metric      Test       BRD           nvmet-mem    nvmet-mem % +/-
--------------------------------------------------------------------
IOPS        randread   572,666.67    619,666.67    +8.21%
IOPS        randwrite  591,333.33    623,000.00    +5.36%
BW (MiB/s)  randread     2,237.00      2,420.33    +8.20%
BW (MiB/s)  randwrite    2,310.00      2,432.33    +5.30%

May I please know if this is acceptable? If so, I'll update the commit
along with addressing the other review comments...

-ck

[1]
=====================================================================
Performance Comparison Tables: nvmet-mem vs null_blk vs BRD
=====================================================================
Test Configuration:
  Namespace Size:  5GB
  FIO Jobs:        48
  IO Depth:        64
  Test Iterations: 3 per category (results shown are averages)
  Tests:           randread, randwrite
  Backends:        null_blk (memory-backed), nvmet-mem, BRD

Machine Information:
  CPU:    AMD Ryzen Threadripper PRO 3975WX 32-Cores (64 threads)
  Memory: 62 GiB
  Kernel: 6.17.0-rc3nvme+
=====================================================================
Test methodology:

for REP in 1 2; do
  for BACKEND in null_blk_membacked nvmet-mem brd; do
    # set up NVMeOF target with a $BACKEND namespace and connect
    for FIO_JOB in fio-verify fio-randwrite fio-randread; do
      for i in 1 2 3; do
        fio $FIO_JOB $BACKEND_DEV  # results into fio-res-$REP-$i.log
      done
    done
  done
done
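The fio job files (fio-randread etc.) were not posted; a minimal sketch
of what the randread job would look like, assuming the stated parameters
(48 jobs, iodepth 64) and a 4k block size, which is consistent with the
reported BW/IOPS ratio. The ioengine, direct, runtime, and device path
are assumptions, not taken from the post:

```ini
[global]
ioengine=io_uring   ; assumption; any async engine would do
direct=1            ; assumption
bs=4k               ; consistent with reported BW/IOPS ratio
iodepth=64
numjobs=48
group_reporting=1
time_based=1
runtime=60          ; runtime not stated in the post

[randread]
rw=randread
filename=/dev/nvme1n1   ; connected NVMeoF namespace device (assumption)
```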
############################################################################
RAW DATA
############################################################################
Dataset 1: perf-results-20251105-102906 (48 FIO Jobs, 5GB)
############################################################################
Test Comparison: null_blk vs nvmet-mem (IOPS)
----------------------------------------------------------------------------
Test null_blk nvmet-mem Diff (%) Winner
----------------------------------------------------------------------------
randread 559,828.91 638,364.83 +14.03% nvmet-mem
randwrite 563,724.18 617,446.95 +9.53% nvmet-mem
Individual Iteration Values:
null_blk randread: 560,901.51 | 558,407.41 | 560,177.82 = 559,828.91
nvmet-mem randread: 595,460.64 | 653,900.97 | 665,732.88 = 638,364.83
null_blk randwrite: 563,323.97 | 562,106.23 | 565,742.33 = 563,724.18
nvmet-mem randwrite: 640,550.91 | 617,786.76 | 594,003.17 = 617,446.95
Test Comparison: BRD vs nvmet-mem (IOPS)
----------------------------------------------------------------------------
Test BRD nvmet-mem Diff (%) Winner
----------------------------------------------------------------------------
randread 572,101.88 638,364.83 +11.58% nvmet-mem
randwrite 590,480.73 617,446.95 +4.57% nvmet-mem
Individual Iteration Values:
BRD randread: 574,151.01 | 568,049.86 | 574,104.76 = 572,101.88
nvmet-mem randread: 595,460.64 | 653,900.97 | 665,732.88 = 638,364.83
BRD randwrite: 620,592.07 | 574,959.80 | 575,890.31 = 590,480.73
nvmet-mem randwrite: 640,550.91 | 617,786.76 | 594,003.17 = 617,446.95
Test Comparison: null_blk vs nvmet-mem (BW MiB/s)
----------------------------------------------------------------------------
Test null_blk nvmet-mem Diff (%) Winner
----------------------------------------------------------------------------
randread 2,186.83 2,493.61 +14.03% nvmet-mem
randwrite 2,202.05 2,411.90 +9.53% nvmet-mem
Individual Iteration Values:
null_blk randread: 2,191.02 | 2,181.28 | 2,188.19 = 2,186.83
nvmet-mem randread: 2,326.02 | 2,554.30 | 2,600.52 = 2,493.61
null_blk randwrite: 2,200.48 | 2,195.73 | 2,209.93 = 2,202.05
nvmet-mem randwrite: 2,502.15 | 2,413.23 | 2,320.32 = 2,411.90
Test Comparison: BRD vs nvmet-mem (BW MiB/s)
----------------------------------------------------------------------------
Test BRD nvmet-mem Diff (%) Winner
----------------------------------------------------------------------------
randread 2,234.77 2,493.61 +11.58% nvmet-mem
randwrite 2,306.57 2,411.90 +4.57% nvmet-mem
Individual Iteration Values:
BRD randread: 2,242.78 | 2,218.94 | 2,242.60 = 2,234.77
nvmet-mem randread: 2,326.02 | 2,554.30 | 2,600.52 = 2,493.61
BRD randwrite: 2,424.19 | 2,245.94 | 2,249.57 = 2,306.57
nvmet-mem randwrite: 2,502.15 | 2,413.23 | 2,320.32 = 2,411.90
############################################################################
Dataset 2: perf-results-20251105-120239 (48 FIO Jobs, 5GB)
############################################################################
Test Comparison: null_blk vs nvmet-mem (IOPS)
----------------------------------------------------------------------------
Test null_blk nvmet-mem Diff (%) Winner
----------------------------------------------------------------------------
randread 556,310.23 612,604.62 +10.12% nvmet-mem
randwrite 558,665.03 609,816.62 +9.16% nvmet-mem
Individual Iteration Values:
null_blk randread: 555,694.56 | 557,269.21 | 555,966.92 = 556,310.23
nvmet-mem randread: 564,060.01 | 614,484.34 | 659,269.52 = 612,604.62
null_blk randwrite: 561,266.97 | 557,400.91 | 557,327.21 = 558,665.03
nvmet-mem randwrite: 629,159.63 | 577,127.87 | 623,162.36 = 609,816.62
Test Comparison: BRD vs nvmet-mem (IOPS)
----------------------------------------------------------------------------
Test BRD nvmet-mem Diff (%) Winner
----------------------------------------------------------------------------
randread 576,684.10 612,604.62 +6.23% nvmet-mem
randwrite 564,228.76 609,816.62 +8.08% nvmet-mem
Individual Iteration Values:
BRD randread: 559,428.70 | 612,435.76 | 558,187.85 = 576,684.10
nvmet-mem randread: 564,060.01 | 614,484.34 | 659,269.52 = 612,604.62
BRD randwrite: 562,967.74 | 565,558.21 | 564,160.34 = 564,228.76
nvmet-mem randwrite: 629,159.63 | 577,127.87 | 623,162.36 = 609,816.62
Test Comparison: null_blk vs nvmet-mem (BW MiB/s)
----------------------------------------------------------------------------
Test null_blk nvmet-mem Diff (%) Winner
----------------------------------------------------------------------------
randread 2,173.09 2,392.99 +10.12% nvmet-mem
randwrite 2,182.29 2,382.10 +9.16% nvmet-mem
Individual Iteration Values:
null_blk randread: 2,170.68 | 2,176.83 | 2,171.75 = 2,173.09
nvmet-mem randread: 2,203.36 | 2,400.33 | 2,575.27 = 2,392.99
null_blk randwrite: 2,192.45 | 2,177.35 | 2,177.06 = 2,182.29
nvmet-mem randwrite: 2,457.65 | 2,254.41 | 2,434.23 = 2,382.10
Test Comparison: BRD vs nvmet-mem (BW MiB/s)
----------------------------------------------------------------------------
Test BRD nvmet-mem Diff (%) Winner
----------------------------------------------------------------------------
randread 2,252.67 2,392.99 +6.23% nvmet-mem
randwrite 2,204.02 2,382.10 +8.08% nvmet-mem
Individual Iteration Values:
BRD randread: 2,185.27 | 2,392.33 | 2,180.42 = 2,252.67
nvmet-mem randread: 2,203.36 | 2,400.33 | 2,575.27 = 2,392.99
BRD randwrite: 2,199.09 | 2,209.21 | 2,203.75 = 2,204.02
nvmet-mem randwrite: 2,457.65 | 2,254.41 | 2,434.23 = 2,382.10