[bug report] nvme removing after probe failed with pci rescan after nvme sysfs removal

Yi Zhang yi.zhang at redhat.com
Sun Sep 26 04:14:01 PDT 2021


On Fri, Sep 24, 2021 at 11:13 AM Keith Busch <kbusch at kernel.org> wrote:
>
> On Wed, Sep 22, 2021 at 09:56:47AM +0800, Yi Zhang wrote:
> > # echo 1 >/sys/bus/pci/devices/0000\:87\:00.0/remove
> > # echo 1 >/sys/bus/pci/rescan
> > # dmesg
> > [  251.864254] pci 0000:87:00.0: [144d:a808] type 00 class 0x010802
> > [  251.864286] pci 0000:87:00.0: reg 0x10: [mem 0xc8600000-0xc8603fff 64bit]
> > [  251.864337] pci 0000:87:00.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
> > [  251.889196] pci 0000:87:00.0: BAR 6: assigned [mem 0xc8600000-0xc860ffff pref]
> > [  251.889206] pci 0000:87:00.0: BAR 0: assigned [mem 0xc8610000-0xc8613fff 64bit]
> > [  251.889777] nvme nvme0: pci function 0000:87:00.0
> > [  251.889888] nvme nvme0: readl(dev->bar + NVME_REG_CSTS) == -1, return -ENODEV
>
> An all 1's return almost certainly means the memory read request failed. The
> test you described failing usually means the target did not properly configure
> the memory range it was assigned. Is this directly attached to a root port?
>
Hi Keith,
It was connected to the PCIe slot through a PCIe extender card. I
added the full dmesg below; not sure if it helps.

https://pastebin.com/QUP0Y4sT


--
Best Regards,
  Yi Zhang
