nvme-pci: Disabling device after reset failure: -5 occurs while AER recovery

Keith Busch kbusch at kernel.org
Tue Mar 14 10:26:20 PDT 2023


On Tue, Mar 14, 2023 at 11:11:27AM -0500, Bjorn Helgaas wrote:
> On Mon, Mar 13, 2023 at 05:57:43PM -0700, Tushar Dave wrote:
> > On 3/11/23 00:22, Lukas Wunner wrote:
> > > On Fri, Mar 10, 2023 at 05:45:48PM -0800, Tushar Dave wrote:
> > > > On 3/10/2023 3:53 PM, Bjorn Helgaas wrote:
> > > > > In the log below, pciehp obviously is enabled; should I infer that in
> > > > > the log above, it is not?
> > > > 
> > > > pciehp is enabled all the time. In the log above and below.
> > I do not have an answer yet as to why pciehp shows up only in some tests
> > (due to DPC link down/up) and not in others, as you noticed in both logs.
> > > 
> > > Maybe some of the switch Downstream Ports are hotplug-capable and
> > > some are not?  (Check the Slot Implemented bit in the PCI Express
> > > Capabilities Register as well as the Hot-Plug Capable bit in the
> > > Slot Capabilities Register.)
> > > ...
> 
> > > > > Generally we've avoided handling a device reset as a
> > > > > remove/add event because upper layers can't deal well with
> > > > > that.  But in the log below it looks like pciehp *did* treat
> > > > > the DPC containment as a remove/add, which of course involves
> > > > > configuring the "new" device and its MPS settings.
> > > > 
> > > > yes, and that puzzled me: why? Especially when "Link Down/Up
> > > > ignored (recovered by DPC)". Do we still have a race somewhere? I
> > > > am not sure.
> > > 
> > > You're seeing the expected behavior.  pciehp ignores DLLSC events
> > > caused by DPC, but then double-checks that DPC recovery succeeded.
> > > If it didn't, it would be a bug not to bring down the slot.  So
> > > pciehp does exactly that.  See this code snippet in
> > > pciehp_ignore_dpc_link_change():
> > > 
> > > 	/*
> > > 	 * If the link is unexpectedly down after successful recovery,
> > > 	 * the corresponding link change may have been ignored above.
> > > 	 * Synthesize it to ensure that it is acted on.
> > > 	 */
> > > 	down_read_nested(&ctrl->reset_lock, ctrl->depth);
> > > 	if (!pciehp_check_link_active(ctrl))
> > > 		pciehp_request(ctrl, PCI_EXP_SLTSTA_DLLSC);
> > > 	up_read(&ctrl->reset_lock);
> > > 
> > > So on hotplug-capable ports, pciehp is able to mop up the mess
> > > created by fiddling with the MPS settings behind the kernel's
> > > back.
> > 
> > That's the thing: even on a hotplug-capable slot I do not see pciehp
> > _all_ the time. Sometimes pciehp gets involved and takes care of things
> > (as I mentioned in the previous thread), and other times there is no
> > pciehp engagement at all!
> 
> Possibly a timing issue, so I'll be interested to see if 53b54ad074de
> ("PCI/DPC: Await readiness of secondary bus after reset") makes any
> difference.  Lukas didn't mention that, so maybe it's a red herring,
> but I'm still curious since it explicitly mentions the DPC reset case
> that you're exercising here.

Catching the PDC event may be timing related. pciehp ignores link events
during a DPC event, but it always reacts to PDC since that is indistinguishable
from a DPC occurring in response to a surprise removal, and these slots probably
don't have out-of-band presence detection.



More information about the Linux-nvme mailing list