[PATCH v3 2/9] nvme-fabrics: allow to queue requests for live queues

Sagi Grimberg sagi at grimberg.me
Mon Aug 24 04:06:41 EDT 2020


>>>> I still dislike any random ioctls coming in while we're still 
>>>> initializing
>>>> the controller.
>>>
>>> Agreed.
>>>
>>>>    Looking at the flow - I wouldn't want them to be allowed
>>>> until after nvme_init_identify() is complete, especially if the
>>>> ioctls are doing subsystem or controller dumping, or using
>>>> commands that should be capped by values set by
>>>> nvme_queue_limits(). But if we're going to allow
>>>> nvme_init_identify(), the admin_q needs to be unquiesced.
>>>>
>>>> So I'm still voting for the admin queue exception.
>>>
>>> And I really don't like the admin queue special case.  What is the
>>> advantage of letting user space passthrough I/O commands in at this
>>> point in time?
>>
>> We need to pass in normal I/O commands for sure in order to have
>> robust reset and error recovery (which is what this patchset
>> addresses in general). What is the difference between FS I/O
>> commands and passthru I/O commands? In fact, user passthru I/O
>> commands will never execute before nvme_init_identify() because we
>> always start the I/O queues after that.
>>
>> Let's look at pci: do we have the same enforcement for passthru
>> commands? What's special about fabrics that we need to deny
>> these commands from going through?
> 
> short answer - timing/latency and a much, much lower chance of failure.

So are you saying there is a hidden bug in pci?

> I also don't think people are querying pci (locally attached drives) like 
> they are fabric attachments.

I'm not sure about that at all; I'd even speculate it is the other way
around...
