[PATCH] nvme: Boot as soon as the boot controller has been probed
Keith Busch
kbusch at kernel.org
Sun Nov 8 21:35:44 EST 2020
On Sun, Nov 08, 2020 at 03:35:27PM -0800, Bart Van Assche wrote:
> On 11/8/20 2:31 PM, Keith Busch wrote:
> > On Sun, Nov 08, 2020 at 09:24:03AM +0100, Greg KH wrote:
> >> On Sat, Nov 07, 2020 at 08:09:03PM -0800, Bart Van Assche wrote:
> >>> The following two issues have been introduced by commit 1811977568e0
> >>> ("nvme/pci: Use async_schedule for initial reset work"):
> >>> - The boot process waits until all NVMe controllers have been probed
> >>> instead of only waiting until the boot controller has been probed.
> >>> This slows down the boot process.
> >>> - Some of the controller probing work happens asynchronously without
> >>> the device core being aware of this.
> >>>
> >>> Hence this patch that makes all probing work happen from nvme_probe()
> >>> and that tells the device core to probe multiple NVMe controllers
> >>> concurrently by setting PROBE_PREFER_ASYNCHRONOUS.
> >>>
> >>> Cc: Mikulas Patocka <mpatocka at redhat.com>
> >>> Cc: Keith Busch <keith.busch at intel.com>
> >>> Cc: Greg KH <gregkh at linuxfoundation.org>
> >>> Signed-off-by: Bart Van Assche <bvanassche at acm.org>
> >>> ---
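
[For context, PROBE_PREFER_ASYNCHRONOUS is the driver-core hint the quoted
patch description refers to. Below is a minimal sketch of how a PCI driver
opts in, following the usual struct pci_driver registration pattern; it is
not the actual patch and only shows the probe_type part, not the work the
patch moves back into nvme_probe().]

    #include <linux/module.h>
    #include <linux/pci.h>

    static int nvme_probe(struct pci_dev *pdev, const struct pci_device_id *id)
    {
            /* All of the controller bring-up would happen here (elided). */
            return 0;
    }

    static void nvme_remove(struct pci_dev *pdev)
    {
    }

    static struct pci_driver nvme_driver = {
            .name           = "nvme",
            .probe          = nvme_probe,
            .remove         = nvme_remove,
            .driver         = {
                    /* Let the driver core probe controllers in parallel
                     * instead of serialising them on the probe path. */
                    .probe_type     = PROBE_PREFER_ASYNCHRONOUS,
            },
    };
    module_pci_driver(nvme_driver);
    MODULE_LICENSE("GPL");
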
> >>
> >> A fixes: tag?
> >
> > Why? Whether this is an improvement or not, the current code isn't
> > broken.
>
> Hi Keith,
>
> My understanding is that the device driver core generates a 'bind'
> uevent after nvme_probe() returns (see also driver_bound() in
> drivers/base/dd.c). Do you agree that emitting KOBJ_BIND while
> nvme_reset_work() is in progress triggers a race condition between
> nvme_reset_work() and udev rules that depend on the result of code in
> nvme_reset_work(), e.g. NVMe queue creation or nvme_init_identify()?
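
[A rough, simplified sketch of the pre-patch flow being described here, not
the actual driver code: nvme_probe() queues the remaining initialisation via
async_schedule() and returns, after which the driver core's driver_bound()
sends the KOBJ_BIND uevent, possibly before nvme_reset_work() has finished.]

    #include <linux/async.h>
    #include <linux/pci.h>

    struct nvme_dev;                /* driver-private, opaque in this sketch */

    static void nvme_async_probe(void *data, async_cookie_t cookie)
    {
            /* In the real driver this performs/waits for the controller
             * reset and namespace scanning; elided here. */
    }

    static int nvme_probe(struct pci_dev *pdev, const struct pci_device_id *id)
    {
            struct nvme_dev *dev = NULL;    /* allocation and setup elided */

            /* Kick off the remaining controller initialisation asynchronously... */
            async_schedule(nvme_async_probe, dev);

            /*
             * ...and return. The driver core now considers the device bound
             * and driver_bound() emits the KOBJ_BIND uevent, even though the
             * reset work may still be creating I/O queues or running
             * nvme_init_identify().
             */
            return 0;
    }
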
The only user-space-visible artifact the driver creates at bind time is the
controller character device, and access to it during a reset is handled by
the driver's state machine.
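
As an illustration of that gating, here is a simplified sketch modelled on
nvme_dev_open() in drivers/nvme/host/core.c (struct nvme_ctrl, its embedded
cdev and NVME_CTRL_LIVE come from the driver-internal nvme.h; refcounting
and module handling are elided, and the error value matches what the real
driver returns for a controller that is not live):

    #include <linux/cdev.h>
    #include <linux/fs.h>
    #include <linux/kernel.h>

    #include "nvme.h"       /* driver-internal: struct nvme_ctrl, NVME_CTRL_LIVE */

    static int nvme_dev_open(struct inode *inode, struct file *file)
    {
            struct nvme_ctrl *ctrl =
                    container_of(inode->i_cdev, struct nvme_ctrl, cdev);

            /* Reject opens while a reset (or teardown) is in progress, so
             * udev rules and other user space see a defined error instead
             * of racing the reset work. */
            if (ctrl->state != NVME_CTRL_LIVE)
                    return -EWOULDBLOCK;

            file->private_data = ctrl;
            return 0;
    }
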