[BUG][5.18rc5] nvme nvme0: controller is down; will reset: CSTS=0xffffffff, PCI_STATUS=0x10
Keith Busch
kbusch at kernel.org
Wed May 4 22:19:10 PDT 2022
On Thu, May 05, 2022 at 06:58:11AM +0500, Mikhail Gavrilov wrote:
> ps 1 : mp:7.10W operational enlat:0 exlat:0 rrt:1 rrl:1
> rwt:1 rwl:1 idle_power:- active_power:-
> ps 2 : mp:5.20W operational enlat:0 exlat:0 rrt:2 rrl:2
> rwt:2 rwl:2 idle_power:- active_power:-
> ps 3 : mp:0.0620W non-operational enlat:2500 exlat:7500 rrt:3 rrl:3
> rwt:3 rwl:3 idle_power:- active_power:-
> ps 4 : mp:0.0440W non-operational enlat:10500 exlat:65000 rrt:4 rrl:4
> rwt:4 rwl:4 idle_power:- active_power:-
>
> # cat /sys/module/nvme_core/parameters/default_ps_max_latency_us
> 100000
>
> I concluded that my problem is not related to APST because 2500 + 7500
> + 10500 + 65000 = 85500 < 100000
> 100000 is greater than the total latency of any state (enlat + exlat).
>
> Or am I misinterpreting the results?
I think you did misinterpret the results. The latencies are not summed across
states: APST compares each state's own total (enlat + exlat) against the max
latency. With default_ps_max_latency_us=100000, even the deepest state
(ps4: 10500 + 65000 = 75500 us) fits under the limit, so APST will request
the deepest low power state your controller supports, which is known to
cause problems with some platform/controller combinations.
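The per-state comparison can be sketched as follows (a simplified illustration of the latency check, not the kernel's actual code; the enlat/exlat values are taken from the power states quoted above):

```python
# Sketch: APST compares each state's own enlat + exlat against
# default_ps_max_latency_us -- it does NOT sum latencies across states.

max_latency_us = 100_000  # nvme_core.default_ps_max_latency_us

# (enlat, exlat) in microseconds for the two non-operational states above
states = {
    "ps3": (2500, 7500),
    "ps4": (10500, 65000),
}

for name, (enlat, exlat) in states.items():
    total = enlat + exlat
    verdict = "allowed" if total <= max_latency_us else "excluded"
    print(f"{name}: enlat+exlat = {total} us -> {verdict}")

# Both states fit under the 100000 us limit, so the deepest state (ps4)
# is enabled -- which is exactly the situation that can trip up some
# platform/controller combinations.
```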
The troubleshooting steps for your observation are to:
1. Turn off APST (nvme_core.default_ps_max_latency_us=0)
2. Turn off ASPM (pcie_aspm=off)
3. Turn off both
Typically one of those resolves the issue.
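One way to apply those parameters persistently is via the kernel command line; this is a sketch assuming a GRUB-based distro with grubby available (adjust for your bootloader):

```shell
# Step 1: disable APST only
grubby --update-kernel=ALL --args="nvme_core.default_ps_max_latency_us=0"

# Step 2: disable ASPM only
grubby --update-kernel=ALL --args="pcie_aspm=off"

# Step 3: both parameters together
grubby --update-kernel=ALL \
    --args="nvme_core.default_ps_max_latency_us=0 pcie_aspm=off"

# After rebooting, confirm the parameters took effect:
cat /proc/cmdline
cat /sys/module/nvme_core/parameters/default_ps_max_latency_us  # 0 if APST is off
```

Try one change per boot so you can tell which parameter actually resolves the resets.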