[bug report] nvme removing after probe failed with pci rescan after nvme sysfs removal

Keith Busch kbusch at kernel.org
Thu Sep 23 20:13:15 PDT 2021


On Wed, Sep 22, 2021 at 09:56:47AM +0800, Yi Zhang wrote:
> # echo 1 >/sys/bus/pci/devices/0000\:87\:00.0/remove
> # echo 1 >/sys/bus/pci/rescan
> # dmesg
> [  251.864254] pci 0000:87:00.0: [144d:a808] type 00 class 0x010802
> [  251.864286] pci 0000:87:00.0: reg 0x10: [mem 0xc8600000-0xc8603fff 64bit]
> [  251.864337] pci 0000:87:00.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
> [  251.889196] pci 0000:87:00.0: BAR 6: assigned [mem 0xc8600000-0xc860ffff pref]
> [  251.889206] pci 0000:87:00.0: BAR 0: assigned [mem 0xc8610000-0xc8613fff 64bit]
> [  251.889777] nvme nvme0: pci function 0000:87:00.0
> [  251.889888] nvme nvme0: readl(dev->bar + NVME_REG_CSTS) == -1, return -ENODEV
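
For context, that message corresponds to the probe-time sanity check in
drivers/nvme/host/pci.c: the driver reads CSTS right after mapping the
controller's BAR and treats an all-ones value as a dead mapping. Roughly
(a paraphrased sketch of the check in nvme_pci_enable(), not the complete
function):

	/* A memory read that the PCIe fabric aborts completes as all 1's,
	 * so an all-ones CSTS right after mapping the BAR means the
	 * register space is not reachable and the probe has to bail out.
	 */
	if (readl(dev->bar + NVME_REG_CSTS) == -1) {
		result = -ENODEV;	/* controller unreachable; abort probe */
		goto disable;
	}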

An all-1's return almost certainly means the memory read request failed. When
the test you described fails this way, it usually means the target was not
properly configured with the memory range it was assigned. Is this device
directly attached to a root port?
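
If it is behind a bridge, one quick check is to read BAR0 back from config
space after the rescan and see whether it was actually programmed. A minimal
userspace sketch, assuming the sysfs path from your report (the file name
bar0_check.c is just for illustration):

	/* bar0_check.c - read BAR0 from PCI config space via sysfs.
	 * Getting all 1's (or 0) back suggests the BAR was never programmed,
	 * i.e. the assignment in the dmesg log never reached the device.
	 */
	#include <fcntl.h>
	#include <stdint.h>
	#include <stdio.h>
	#include <unistd.h>

	int main(void)
	{
		const char *path = "/sys/bus/pci/devices/0000:87:00.0/config";
		uint32_t bar0;
		int fd = open(path, O_RDONLY);

		if (fd < 0) {
			perror("open");
			return 1;
		}
		/* BAR0 lives at offset 0x10 in config space */
		if (pread(fd, &bar0, sizeof(bar0), 0x10) != sizeof(bar0)) {
			perror("pread");
			close(fd);
			return 1;
		}
		close(fd);
		printf("BAR0 = 0x%08x\n", bar0);
		return 0;
	}

Comparing that value against the 0xc8610000 assignment in your log would show
whether the BAR write made it to the device.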


