blktests failures with v5.19-rc1

Yi Zhang yi.zhang at redhat.com
Sat Jun 11 01:34:03 PDT 2022


On Fri, Jun 10, 2022 at 10:49 PM Keith Busch <kbusch at kernel.org> wrote:
>
> On Fri, Jun 10, 2022 at 12:25:17PM +0000, Shinichiro Kawasaki wrote:
> > On Jun 10, 2022 / 09:32, Chaitanya Kulkarni wrote:
> > > >> #6: nvme/032: Failed at the first run after system reboot.
> > > >>                 Used QEMU NVME device as TEST_DEV.
> > > >>
> > >
> > > Of course we need to fix this issue, but can you also
> > > try it with real H/W?
> >
> > Hi Chaitanya, thank you for looking into the failures. I have just run the test
> > case nvme/032 with a real NVMe device and observed exactly the same symptom as
> > with the QEMU NVMe device.
>
> QEMU is perfectly fine for this test. There's no need to bring in "real"
> hardware for this.
>
> And I am not even sure this is real. I don't know yet why this is showing up
> only now, but this should fix it:

Hi Keith,

I confirmed the WARNING is fixed with this change; here is the log:

# ./check nvme/032
nvme/032 => nvme0n1 (test nvme pci adapter rescan/reset/remove during I/O) [passed]
    runtime  5.165s  ...  5.142s
nvme/032 => nvme1n1 (test nvme pci adapter rescan/reset/remove during I/O) [passed]
    runtime  6.723s  ...  6.635s
nvme/032 => nvme2n1 (test nvme pci adapter rescan/reset/remove during I/O) [passed]
    runtime  7.708s  ...  7.808s

[  307.477948] run blktests nvme/032 at 2022-06-11 04:27:46
[  312.603452] pcieport 0000:40:03.1: bridge window [io  0x1000-0x0fff] to [bus 42] add_size 1000
[  312.603599] pcieport 0000:40:03.1: BAR 13: no space for [io  size 0x1000]
[  312.603603] pcieport 0000:40:03.1: BAR 13: failed to assign [io  size 0x1000]
[  312.603729] pcieport 0000:40:03.1: BAR 13: no space for [io  size 0x1000]
[  312.603733] pcieport 0000:40:03.1: BAR 13: failed to assign [io  size 0x1000]
[  313.397440] run blktests nvme/032 at 2022-06-11 04:27:51
[  318.732273] nvme nvme1: Shutdown timeout set to 16 seconds
[  318.785945] nvme nvme1: 16/0/0 default/read/poll queues
[  319.268544] pci 0000:44:00.0: Removing from iommu group 33
[  319.326814] pci 0000:44:00.0: [1e0f:0007] type 00 class 0x010802
[  319.326866] pci 0000:44:00.0: reg 0x10: [mem 0xa4900000-0xa4907fff 64bit]
[  319.483234] pci 0000:44:00.0: 31.504 Gb/s available PCIe bandwidth, limited by 8.0 GT/s PCIe x4 link at 0000:40:03.3 (capable of 63.012 Gb/s with 16.0 GT/s PCIe x4 link)
[  319.531324] pci 0000:44:00.0: Adding to iommu group 33
[  319.547381] pcieport 0000:40:03.1: bridge window [io  0x1000-0x0fff] to [bus 42] add_size 1000
[  319.547448] pcieport 0000:40:03.1: BAR 13: no space for [io  size 0x1000]
[  319.547453] pcieport 0000:40:03.1: BAR 13: failed to assign [io  size 0x1000]
[  319.547547] pcieport 0000:40:03.1: BAR 13: no space for [io  size 0x1000]
[  319.547550] pcieport 0000:40:03.1: BAR 13: failed to assign [io  size 0x1000]
[  319.547607] pci 0000:44:00.0: BAR 0: assigned [mem 0xa4900000-0xa4907fff 64bit]
[  319.556620] nvme nvme1: pci function 0000:44:00.0
[  319.838233] nvme nvme1: Shutdown timeout set to 16 seconds
[  319.911826] nvme nvme1: 16/0/0 default/read/poll queues
[  320.900025] run blktests nvme/032 at 2022-06-11 04:27:59
[  326.311357] nvme nvme2: 16/0/0 default/read/poll queues
[  327.771945] pci 0000:45:00.0: Removing from iommu group 34
[  327.839066] pci 0000:45:00.0: [8086:0b60] type 00 class 0x010802
[  327.839106] pci 0000:45:00.0: reg 0x10: [mem 0xa4800000-0xa4803fff 64bit]
[  328.011204] pci 0000:45:00.0: 31.504 Gb/s available PCIe bandwidth, limited by 8.0 GT/s PCIe x4 link at 0000:40:03.4 (capable of 63.012 Gb/s with 16.0 GT/s PCIe x4 link)
[  328.058523] pci 0000:45:00.0: Adding to iommu group 34
[  328.072575] pcieport 0000:40:03.1: bridge window [io  0x1000-0x0fff] to [bus 42] add_size 1000
[  328.072628] pcieport 0000:40:03.1: BAR 13: no space for [io  size 0x1000]
[  328.072632] pcieport 0000:40:03.1: BAR 13: failed to assign [io  size 0x1000]
[  328.072685] pcieport 0000:40:03.1: BAR 13: no space for [io  size 0x1000]
[  328.072688] pcieport 0000:40:03.1: BAR 13: failed to assign [io  size 0x1000]
[  328.072741] pci 0000:45:00.0: BAR 0: assigned [mem 0xa4800000-0xa4803fff 64bit]
[  328.079857] nvme nvme2: pci function 0000:45:00.0
[  328.153256] nvme nvme2: 16/0/0 default/read/poll queues
>
> ---
> diff --git a/drivers/pci/pci-sysfs.c b/drivers/pci/pci-sysfs.c
> index fc804e08e3cb..bebd816c11e6 100644
> --- a/drivers/pci/pci-sysfs.c
> +++ b/drivers/pci/pci-sysfs.c
> @@ -476,7 +476,7 @@ static ssize_t dev_rescan_store(struct device *dev,
>         }
>         return count;
>  }
> -static struct device_attribute dev_attr_dev_rescan = __ATTR(rescan, 0200, NULL,
> +static struct device_attribute dev_attr_dev_rescan = __ATTR_IGNORE_LOCKDEP(rescan, 0200, NULL,
>                                                             dev_rescan_store);
>
>  static ssize_t remove_store(struct device *dev, struct device_attribute *attr,
> --
>
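
For anyone curious why this helps: as I understand it, with CONFIG_DEBUG_LOCK_ALLOC
enabled, kernfs gives each sysfs attribute a lockdep-tracked "active reference", and a
write to the PCI rescan attribute can end up removing and re-adding sysfs files while
that reference is held, which lockdep flags as a possible deadlock even though the
operation completes fine (as the logs above show). __ATTR_IGNORE_LOCKDEP only marks
the attribute so kernfs skips that annotation. A rough paraphrase of the macro from
include/linux/sysfs.h (not an exact quote of any tree):

#ifdef CONFIG_DEBUG_LOCK_ALLOC
/* Same as __ATTR, except the attribute is flagged so kernfs skips its
 * lockdep "active reference" annotation for this file. */
#define __ATTR_IGNORE_LOCKDEP(_name, _mode, _show, _store) {	\
	.attr	= { .name = __stringify(_name),			\
		    .mode = _mode,				\
		    .ignore_lockdep = true },			\
	.show	= _show,					\
	.store	= _store,					\
}
#else
/* Without lockdep there is nothing to ignore, so it falls back to __ATTR. */
#define __ATTR_IGNORE_LOCKDEP	__ATTR
#endif

So the runtime behavior of the rescan attribute should be unchanged; only lockdep's
bookkeeping for writes to that file is suppressed, which fits Keith's point that the
warning is probably not a real deadlock.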


-- 
Best Regards,
  Yi Zhang



