fix atomic limits check
alan.adamson at oracle.com
Fri Jun 13 14:22:00 PDT 2025
On 6/10/25 10:54 PM, Christoph Hellwig wrote:
> Hi all,
>
> this series tries to fix the atomics limit check to limit it to
> the per-controller values and to the controller probing.
>
> I think this should solve the root cause of the report from Yi Zhang,
> but needs new verification.
>
> Diffstat:
> core.c | 84 ++++++++++++++++++++++++++++++-----------------------------------
> nvme.h | 3 --
> 2 files changed, 40 insertions(+), 47 deletions(-)
Some testing with my qemu-nvme atomic write setup. My qemu includes ns
atomic parameters that aren't upstream yet.
-device nvme-subsys,id=subsys0 \
-device nvme,serial=deadbeef,id=nvme0,subsys=subsys0,atomic.dn=off,atomic.awun=31,atomic.awupf=15 \
-drive id=ns1,file=/dev/nullb1,if=none \
-device nvme-ns,drive=ns1,bus=nvme0,nsid=1,zoned=false,shared=false \
-device nvme,serial=deadbeef,id=nvme1,subsys=subsys0,atomic.dn=off,atomic.awun=63,atomic.awupf=31 \
-drive id=ns2,file=/dev/nullb2,if=none \
-device nvme-ns,drive=ns2,bus=nvme1,nsid=2,zoned=false,shared=false \
-device nvme,serial=deadbeef,id=nvme2,subsys=subsys0,atomic.dn=off,atomic.awun=15,atomic.awupf=7 \
-drive id=ns3,file=/dev/nullb3,if=none \
-device nvme-ns,drive=ns3,bus=nvme2,nsid=3,zoned=false,shared=false,atomic.nawun=63,atomic.nawupf=31,atomic.nsfeat=true \
-device nvme,serial=deadbeef,id=nvme3,subsys=subsys0,atomic.dn=off,atomic.awun=15,atomic.awupf=7 \
-drive id=ns4,file=/dev/nullb4,if=none \
-device nvme-ns,drive=ns4,bus=nvme3,nsid=4,zoned=false,shared=false,atomic.nawun=31,atomic.nawupf=15,atomic.nsfeat=true \
-drive id=ns5,file=/dev/nullb5,if=none \
-device nvme-ns,drive=ns5,bus=nvme3,nsid=5,zoned=false,shared=false,atomic.nawun=127,atomic.nawupf=63,atomic.nsfeat=true \
--nographic
A single subsystem with 4 controllers; one of the controllers has 2
namespaces.
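The layout can also be cross-checked with nvme-cli (exact output
formatting differs between nvme-cli versions, so this is just the idea):

# subsystem and the controllers that belong to it
nvme list-subsys
# namespaces together with the controller each one is reached through
nvme list -v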
[root@localhost ~]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS
sda 8:0 0 40G 0 disk
├─sda1 8:1 0 1G 0 part /boot
└─sda2 8:2 0 39G 0 part
└─ol-root 252:0 0 39G 0 lvm /
sr0 11:0 1 1024M 0 rom
nvme1n1 259:1 0 250G 0 disk
nvme1n2 259:5 0 250G 0 disk
nvme1n3 259:6 0 250G 0 disk
nvme1n4 259:7 0 250G 0 disk
nvme1n5 259:9 0 250G 0 disk
[root@localhost ~]# cat testxx.sh
set -x
nvme id-ctrl /dev/$1 | grep cmic
nvme id-ctrl /dev/$1 | grep awupf
nvme id-ns /dev/$1 | grep nawupf
cat /sys/block/$1/queue/atomic_write_max_bytes
[root@localhost ~]# sh testxx.sh nvme1n1
+ nvme id-ctrl /dev/nvme1n1
+ grep cmic
cmic : 0x2
+ nvme id-ctrl /dev/nvme1n1
+ grep awupf
awupf : 31
+ nvme id-ns /dev/nvme1n1
+ grep nawupf
nawupf : 0
+ cat /sys/block/nvme1n1/queue/atomic_write_max_bytes
4096
[root@localhost ~]# sh testxx.sh nvme1n2
+ nvme id-ctrl /dev/nvme1n2
+ grep cmic
cmic : 0x2
+ nvme id-ctrl /dev/nvme1n2
+ grep awupf
awupf : 15
+ nvme id-ns /dev/nvme1n2
+ grep nawupf
nawupf : 0
+ cat /sys/block/nvme1n2/queue/atomic_write_max_bytes
4096
[root@localhost ~]# sh testxx.sh nvme1n3
+ nvme id-ctrl /dev/nvme1n3
+ grep cmic
cmic : 0x2
+ nvme id-ctrl /dev/nvme1n3
+ grep awupf
awupf : 7
+ nvme id-ns /dev/nvme1n3
+ grep nawupf
nawupf : 15
+ cat /sys/block/nvme1n3/queue/atomic_write_max_bytes
8192
[root@localhost ~]# sh testxx.sh nvme1n4
+ nvme id-ctrl /dev/nvme1n4
+ grep cmic
cmic : 0x2
+ nvme id-ctrl /dev/nvme1n4
+ grep awupf
awupf : 7
+ nvme id-ns /dev/nvme1n4
+ grep nawupf
nawupf : 63
+ cat /sys/block/nvme1n4/queue/atomic_write_max_bytes
32768
[root@localhost ~]# sh testxx.sh nvme1n5
+ nvme id-ctrl /dev/nvme1n5
+ grep cmic
cmic : 0x2
+ nvme id-ctrl /dev/nvme1n5
+ grep awupf
awupf : 7
+ nvme id-ns /dev/nvme1n5
+ grep nawupf
nawupf : 31
+ cat /sys/block/nvme1n5/queue/atomic_write_max_bytes
16384
Does this look right?
Before, every namespace in the subsystem reported the same
atomic_write_max_bytes value; that's no longer the case now.
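For reference, the rough expectation I'm comparing against is
(value + 1) logical blocks, since AWUPF/NAWUPF are 0's based, with
NAWUPF taking precedence when NSFEAT bit 1 is set (which
atomic.nsfeat=true does above). A quick sketch of that calculation
(hypothetical helper, call it expected_atomic.sh; it assumes nvme-cli's
usual "name : value" output layout):

# usage: sh expected_atomic.sh nvme1n3
dev=$1
lba=$(cat /sys/block/$dev/queue/logical_block_size)
awupf=$(nvme id-ctrl /dev/$dev | awk '/awupf/ {print $3}')
nawupf=$(nvme id-ns /dev/$dev | awk '/nawupf/ {print $3}')
# prefer the per-namespace limit when the namespace reports one
if [ "$nawupf" -gt 0 ]; then val=$nawupf; else val=$awupf; fi
# AWUPF/NAWUPF are 0's based counts of logical blocks
echo $(( (val + 1) * lba ))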