Data corruption when using multiple devices with NVMEoF TCP
Sagi Grimberg
sagi at grimberg.me
Mon Jan 11 05:11:01 EST 2021
> Hey Sagi,
Hey Hao,
> I exported 4 devices to the initiator, created a raid-0 array, and
> copied a 98G directory with many ~100MB .gz files.
> With the patch you gave on top of 58cf05f597b0 (fairly new), I saw
> about 24K prints in dmesg. Below are some of them:
Yes, I understand it generated tons of prints, but something here
looks strange.
> [ 3775.256547] nvme_tcp: rq 22 (READ) data_len 131072 bio[1/2] sector
> a388200 bvec: nsegs 19 size 77824 offset 0
This is a read request that has 2 bios: the first spans 19 4K buffers
(starting from sector a388200) and the second presumably spans 13 4K
buffers. The host is asking the target to send 128K (data_len 131072),
but nowhere do I see the host receiving the residual of the data
transfer.
It should be in the form of:
nvme_tcp: rq 22 (READ) data_len 131072 bio[2/2] sector a388298 bvec:
nsegs 13 size 53248 offset 0
In your entire log, do you see any (READ) print that spans a bio that
is not [1/x]? e.g. a read that spans the other bios in the request
(like [2/2], [2/3], etc.)?
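
To spell out where that expected [2/2] print comes from, here is the
arithmetic as a standalone userspace sketch (nothing from the driver,
the numbers are just taken from your rq 22 prints):

/* 128K read at sector a388200 split into two bios (19 + 13 4K segs) */
#include <stdio.h>

int main(void)
{
	unsigned long long sector = 0xa388200ULL;
	unsigned int data_len = 131072;		/* 128K transfer */
	unsigned int bio1_size = 19 * 4096;	/* 77824 */
	unsigned int bio2_size = 13 * 4096;	/* 53248 */

	/* the two bios together must cover the whole transfer */
	printf("bio1 + bio2 = %u (data_len %u)\n",
	       bio1_size + bio2_size, data_len);

	/* bio 2 starts where bio 1 ends: 77824 / 512 = 0x98 sectors in */
	printf("bio[2/2] sector %llx\n", sector + bio1_size / 512);
	return 0;
}

This gives 131072 and a388298, i.e. exactly the residual print I'd
expect to see but don't.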
> [ 3775.256768] nvme_tcp: rq 19 (READ) data_len 131072 bio[1/2] sector
> a388300 bvec: nsegs 19 size 77824 offset 0
> [ 3775.256774] nvme_tcp: rq 20 (READ) data_len 131072 bio[1/2] sector
> a388400 bvec: nsegs 19 size 77824 offset 0
> [ 3775.256787] nvme_tcp: rq 5 (READ) data_len 131072 bio[1/2] sector
> a388300 bvec: nsegs 19 size 77824 offset 0
> [ 3775.256791] nvme_tcp: rq 6 (READ) data_len 131072 bio[1/2] sector
> a388400 bvec: nsegs 19 size 77824 offset 0
> [ 3775.256794] nvme_tcp: rq 117 (READ) data_len 131072 bio[1/2] sector
> a388300 bvec: nsegs 19 size 77824 offset 0
> [ 3775.256797] nvme_tcp: rq 118 (READ) data_len 131072 bio[1/2] sector
> a388400 bvec: nsegs 19 size 77824 offset 0
> [ 3775.256800] nvme_tcp: rq 5 (READ) data_len 262144 bio[1/4] sector
> a388300 bvec: nsegs 19 size 77824 offset 0
> [ 3775.257002] nvme_tcp: rq 21 (READ) data_len 131072 bio[1/2] sector
> a388500 bvec: nsegs 19 size 77824 offset 0
> [ 3775.257006] nvme_tcp: rq 22 (READ) data_len 131072 bio[1/2] sector
> a388600 bvec: nsegs 19 size 77824 offset 0
> [ 3775.257009] nvme_tcp: rq 7 (READ) data_len 131072 bio[1/2] sector
> a388500 bvec: nsegs 19 size 77824 offset 0
> [ 3775.257012] nvme_tcp: rq 8 (READ) data_len 131072 bio[1/2] sector
> a388600 bvec: nsegs 19 size 77824 offset 0
> [ 3775.257014] nvme_tcp: rq 7 (READ) data_len 131072 bio[1/2] sector
> a388500 bvec: nsegs 19 size 77824 offset 0
> [ 3775.257017] nvme_tcp: rq 8 (READ) data_len 131072 bio[1/2] sector
> a388600 bvec: nsegs 19 size 77824 offset 0
> [ 3775.257020] nvme_tcp: rq 6 (READ) data_len 262144 bio[1/4] sector
> a388500 bvec: nsegs 19 size 77824 offset 0
> [ 3775.262587] nvme_tcp: rq 22 (WRITE) data_len 131072 bio[1/2] sector
> a388200 bvec: nsegs 19 size 77824 offset 0
> [ 3775.262600] nvme_tcp: rq 22 (WRITE) data_len 131072 bio[2/2] sector
> a388298 bvec: nsegs 13 size 53248 offset 0
For the (WRITE) requests we see the desired sequence: we first send
the content of the first bio (19 4K segments) and then the content of
the second bio (13 4K segments).
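
As a sanity check, walking the bios of a request in order reproduces
the sectors in these prints exactly, e.g. for the 256K rq 5 just below
(again only a userspace sketch, assuming the same 19/13 segment split):

/* expected bio walk for a 256K write starting at sector a388300 */
#include <stdio.h>

int main(void)
{
	unsigned long long sector = 0xa388300ULL;
	unsigned int segs[] = { 19, 13, 19, 13 };
	unsigned int i, nr = sizeof(segs) / sizeof(segs[0]);

	for (i = 0; i < nr; i++) {
		unsigned int size = segs[i] * 4096;

		printf("bio[%u/%u] sector %llx nsegs %u size %u\n",
		       i + 1, nr, sector, segs[i], size);
		sector += size / 512;	/* advance past this bio */
	}
	return 0;
}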
> [ 3775.262610] nvme_tcp: rq 5 (WRITE) data_len 262144 bio[1/4] sector
> a388300 bvec: nsegs 19 size 77824 offset 0
> [ 3775.262617] nvme_tcp: rq 5 (WRITE) data_len 262144 bio[2/4] sector
> a388398 bvec: nsegs 13 size 53248 offset 0
> [ 3775.262623] nvme_tcp: rq 5 (WRITE) data_len 262144 bio[3/4] sector
> a388400 bvec: nsegs 19 size 77824 offset 0
> [ 3775.262629] nvme_tcp: rq 5 (WRITE) data_len 262144 bio[4/4] sector
> a388498 bvec: nsegs 13 size 53248 offset 0
Same here and onward...
> [ 3775.262635] nvme_tcp: rq 6 (WRITE) data_len 262144 bio[1/4] sector
> a388500 bvec: nsegs 19 size 77824 offset 0
> [ 3775.262641] nvme_tcp: rq 6 (WRITE) data_len 262144 bio[2/4] sector
> a388598 bvec: nsegs 13 size 53248 offset 0
> [ 3775.262647] nvme_tcp: rq 6 (WRITE) data_len 262144 bio[3/4] sector
> a388600 bvec: nsegs 19 size 77824 offset 0
> [ 3775.262653] nvme_tcp: rq 6 (WRITE) data_len 262144 bio[4/4] sector
> a388698 bvec: nsegs 13 size 53248 offset 0
> [ 3775.263009] nvme_tcp: rq 5 (WRITE) data_len 131072 bio[1/2] sector
> a388300 bvec: nsegs 19 size 77824 offset 0
> [ 3775.263019] nvme_tcp: rq 5 (WRITE) data_len 131072 bio[2/2] sector
> a388398 bvec: nsegs 13 size 53248 offset 0
> [ 3775.263027] nvme_tcp: rq 6 (WRITE) data_len 131072 bio[1/2] sector
> a388400 bvec: nsegs 19 size 77824 offset 0
> [ 3775.263034] nvme_tcp: rq 6 (WRITE) data_len 131072 bio[2/2] sector
> a388498 bvec: nsegs 13 size 53248 offset 0
> [ 3775.263040] nvme_tcp: rq 7 (WRITE) data_len 131072 bio[1/2] sector
> a388500 bvec: nsegs 19 size 77824 offset 0
> [ 3775.263047] nvme_tcp: rq 7 (WRITE) data_len 131072 bio[2/2] sector
> a388598 bvec: nsegs 13 size 53248 offset 0
> [ 3775.263052] nvme_tcp: rq 8 (WRITE) data_len 131072 bio[1/2] sector
> a388600 bvec: nsegs 19 size 77824 offset 0
> [ 3775.263059] nvme_tcp: rq 8 (WRITE) data_len 131072 bio[2/2] sector
> a388698 bvec: nsegs 13 size 53248 offset 0
> [ 3775.264341] nvme_tcp: rq 19 (WRITE) data_len 131072 bio[1/2] sector
> a388300 bvec: nsegs 19 size 77824 offset 0
> [ 3775.264353] nvme_tcp: rq 19 (WRITE) data_len 131072 bio[2/2] sector
> a388398 bvec: nsegs 13 size 53248 offset 0
> [ 3775.264361] nvme_tcp: rq 20 (WRITE) data_len 131072 bio[1/2] sector
> a388400 bvec: nsegs 19 size 77824 offset 0
> [ 3775.264369] nvme_tcp: rq 20 (WRITE) data_len 131072 bio[2/2] sector
> a388498 bvec: nsegs 13 size 53248 offset 0
> [ 3775.264380] nvme_tcp: rq 21 (WRITE) data_len 131072 bio[1/2] sector
> a388500 bvec: nsegs 19 size 77824 offset 0
> [ 3775.264387] nvme_tcp: rq 21 (WRITE) data_len 131072 bio[2/2] sector
> a388598 bvec: nsegs 13 size 53248 offset 0
> [ 3775.264410] nvme_tcp: rq 22 (WRITE) data_len 131072 bio[1/2] sector
> a388600 bvec: nsegs 19 size 77824 offset 0
From the code it seems like it should do the right thing, assuming
that the data does arrive. I will look deeper.
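
The suspicion is in the receive path: once the data for the first bio
is consumed, the host must advance to the next bio of the request and
rebuild its iterator before consuming the residual; if that advance is
skipped, the residual lands in the wrong pages. A toy userspace
illustration of that failure mode (illustrative only, none of this is
driver code):

/* two "bios" of 4 and 3 bytes backing a 7-byte transfer */
#include <stdio.h>

int main(void)
{
	const char wire[] = "ABCDEFG";	/* data arriving from the target */
	char bio0[5] = "", bio1[4] = "";
	char *bufs[] = { bio0, bio1 };
	unsigned int sizes[] = { 4, 3 };
	unsigned int cur = 0, off = 0, done = 0;

	while (done < 7) {
		if (off == sizes[cur]) {	/* current bio consumed */
			cur++;			/* the advance under suspicion */
			off = 0;
		}
		bufs[cur][off++] = wire[done++];
	}
	/* with the advance: bio0=ABCD bio1=EFG; without it, EFG would
	 * be written past the end of bio0 and bio1 would stay stale */
	printf("bio0=%s bio1=%s\n", bio0, bio1);
	return 0;
}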
Thanks for helping to dissect this issue.