Data corruption when using multiple devices with NVMEoF TCP
Hao Wang
pkuwangh at gmail.com
Wed Dec 23 17:23:44 EST 2020
Sure. I will try to build a new kernel.
This is in an enterprise environment, so it's only really convenient
for me to run v5.2 or v5.6; for a newer kernel I will have to build
it myself. But I will give it a try.
Regarding max_sectors_kb, there seems to be something interesting.
On the target side, I see:
# cat /sys/block/nvme1n1/queue/max_sectors_kb
256
# cat /sys/block/nvme2n1/queue/max_sectors_kb
256
On the initiator side:
* there are both /sys/block/nvme1c1n1 and /sys/block/nvme1n1,
* and their max_sectors_kb is 1280.
Then when I create a raid-0 volume with mdadm:
# cat /sys/block/md5/queue/max_sectors_kb
128
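For reference, the checks above can be gathered in one pass with a loop like
the following (a sketch; it walks whatever block devices exist on the host
rather than hard-coding the nvme/md names from my setup, which are
machine-specific):

```shell
# Print max_sectors_kb for every block device that exposes it.
for q in /sys/block/*/queue/max_sectors_kb; do
    [ -e "$q" ] || continue
    dev=$(basename "$(dirname "$(dirname "$q")")")
    printf '%s: %s KB\n' "$dev" "$(cat "$q")"
done
```

Running this on both the target and the initiator makes the mismatch
(256 vs 1280 vs 128) easy to see side by side.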
I'm not an expert on storage, but do you see any potential problem here?
Hao
On Wed, Dec 23, 2020 at 1:23 PM Sagi Grimberg <sagi at grimberg.me> wrote:
>
>
> > Wouldn't testing with a not completely outdated kernel a better first
> > step?
>
> Right, didn't notice that. Hao, would it be possible to test this
> happens with the latest upstream kernel (or something close to that)?