[LSF/MM/BPF TOPIC] NVMe over MPTCP: Multi-Fold Acceleration for NVMe over TCP in Multi-NIC Environments
Geliang Tang
geliang at kernel.org
Wed Mar 4 20:30:06 PST 2026
Hi Nilay, Ming,
Thank you again for your interest in NVMe over MPTCP.
On Thu, 2026-02-26 at 17:54 +0800, Geliang Tang wrote:
> Hi Nilay,
>
> Thanks for your reply.
>
> On Wed, 2026-02-25 at 20:37 +0530, Nilay Shroff wrote:
> >
> >
> > On 1/29/26 9:43 AM, Geliang Tang wrote:
> > > 3. Performance Benefits
> > >
> > > This new feature has been evaluated in different environments:
> > >
> > > I conducted 'NVMe over MPTCP' tests between two PCs, each equipped
> > > with two Gigabit NICs and directly connected via Ethernet cables.
> > > Using 'NVMe over TCP', the fio benchmark showed a speed of
> > > approximately 100 MiB/s. In contrast, 'NVMe over MPTCP' achieved
> > > about 200 MiB/s with fio, doubling the throughput.
> > >
> > > In a virtual machine test environment simulating four NICs on both
> > > sides, 'NVMe over MPTCP' delivered bandwidth up to four times that
> > > of standard TCP.
> >
> > This is interesting. Did you try using an NVMe multipath iopolicy
> > other than the default numa policy? Assuming both the host and the
> > target are multihomed, configuring round-robin or queue-depth may
> > provide performance comparable to what you are seeing with MPTCP.
> >
> > I think MPTCP should distribute traffic using transport-level
> > metrics such as RTT, cwnd, and packet loss, whereas the NVMe
> > multipath layer makes decisions based on ANA state, queue depth,
> > and NUMA locality. In a setup with multiple active paths, switching
> > the iopolicy from numa to round-robin or queue-depth could improve
> > load distribution across controllers and thus improve performance.
> >
> > IMO, it would be useful to test with those policies and compare the
> > results against the MPTCP setup.
>
> Ming Lei also made a similar comment. In my experiments, I didn't set
> the multipath iopolicy, so I was using the default numa policy. In the
> follow-up, I'll adjust it to round-robin or queue-depth and rerun the
> experiments. I'll share the results in this email thread.
Based on your feedback, I have added iopolicy support to the NVMe over
MPTCP selftest script (see patch 8 in [1]). We can set the iopolicy to
round-robin like this:
# ./mptcp_nvme.sh mptcp round-robin
This demonstrates that "NVMe over MPTCP" and "NVMe multipath" can work
simultaneously without conflict.
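For anyone who wants to reproduce this outside the selftest script, the
iopolicy can also be changed at runtime through the NVMe multipath
sysfs attribute. A minimal sketch (the subsystem name nvme-subsys0 is
an example; adjust it to whatever your setup enumerates):

```shell
# Show the current I/O policy of every NVMe subsystem
cat /sys/class/nvme-subsystem/nvme-subsys*/iopolicy

# Switch one subsystem to round-robin; accepted values are
# numa, round-robin, and queue-depth
echo round-robin > /sys/class/nvme-subsystem/nvme-subsys0/iopolicy
```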
Using this test script, I compared three I/O policies: numa,
round-robin, and queue-depth. The fio results were very similar across
all three. It's possible that this test environment doesn't fully
expose the differences between the I/O policies; I will continue to
follow up with further tests.
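For context, the comparison was a sequential-bandwidth style fio run.
A sketch of such an invocation (device name, block size, and runtime
here are illustrative placeholders, not the exact parameters of my
test):

```shell
# Sequential read bandwidth against the multipath NVMe namespace;
# /dev/nvme1n1 is a placeholder for the connected NVMe/TCP namespace
fio --name=nvme-mptcp-bw --filename=/dev/nvme1n1 --rw=read \
    --bs=128k --iodepth=32 --ioengine=libaio --direct=1 \
    --time_based --runtime=30 --numjobs=1 --group_reporting
```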
Thanks,
-Geliang
[1]
NVME over MPTCP, v4
https://patchwork.kernel.org/project/mptcp/cover/cover.1772683110.git.tanggeliang@kylinos.cn/
>
> Thanks,
> -Geliang
>
> >
> > Thanks,
> > --Nilay