[PATCH v2 2/2] drivers: gpio: add virtio-gpio guest driver

Jason Wang jasowang at redhat.com
Mon Dec 7 21:49:02 EST 2020


On 2020/12/7 5:33 PM, Enrico Weigelt, metux IT consult wrote:
> On 07.12.20 04:48, Jason Wang wrote:
>
> Hi,
>
>>>> Not a native speaker, but "event" sounds like something the driver
>>>> reads from the device. Looking at the lists below, most of them,
>>>> except for VIRTIO_GPIO_EV_HOST_LEVEL, look more like commands.
>>> okay, shall I name it "message"?
>>
>> It might be better.
> Okay, renamed to messages in v3.
>
>>>> #define VIRTIO_NET_OK     0
>>>> #define VIRTIO_NET_ERR    1
>>> hmm, so I'd need to define all the error codes that could possibly
>>> occur?
>>
>> Yes, I think you need to.
> Okay, going to do it in the next version.
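
Just as an illustration, a hedged sketch of what that could look like,
modeled on virtio-net's OK/ERR pair (names and values here are
placeholders, not from any spec):

    /* Hypothetical virtio-gpio status codes; the real set and
     * values would have to be defined in the virtio-gpio spec. */
    #define VIRTIO_GPIO_STATUS_OK      0   /* request succeeded */
    #define VIRTIO_GPIO_STATUS_ERR     1   /* generic failure */
    #define VIRTIO_GPIO_STATUS_EINVAL  2   /* e.g. invalid line number */

The device puts one of these into its reply, and the driver maps it to
an errno.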
>
>>>> If I read the code correctly, this expects that at most a single
>>>> type of event can be processed at the same time. E.g. can the upper
>>>> layer read from different lines in parallel? If yes, we need to
>>>> deal with that.
>>> @Linus @Bartosz: can that happen, or does the gpio subsys already
>>> serialize requests?
>>>
>>> Initially, I tried to protect it with a spinlock (so only one request
>>> may run at a time; other calls just wait until the first is finished),
>>> but it crashed when gpio cdev registration called into the driver
>>> (fetching the status) while still booting.
>>>
>>> Don't recall the exact error anymore, but something like an
>>> inconsistency in the spinlock calls.
>>>
>>> Did I just use the wrong type of lock?
>> I'm not sure, since I am not familiar with GPIO. But one question: if
>> at most one request is allowed, I'm not sure virtio is the best choice,
>> since we then don't even need a queue (virtqueue).
> I guess I should add locks to the gpio callback functions (where the
> gpio subsys calls in). That way, requests are strictly ordered. The
> locks didn't work in my previous attempts, probably because I'd
> forgotten to set the can_sleep flag (now fixed in v3).
>
> The gpio ops already wait for a reply of the corresponding type, so the
> only bad thing that could happen is the same operation being called
> twice (from different threads) and the replies getting mixed up between
> the first and second call. OTOH I don't see much of a problem with
> that. It can be fixed by adding a global lock.
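
Just to illustrate (an untested sketch; the virtio_gpio_* types and
helpers are hypothetical, not from the patch): a mutex in the request
path would serialize the request/reply pairs. Note that a mutex may
sleep, which is why the gpio_chip must set can_sleep = true; a spinlock
can't be held across the virtqueue round trip.

    /* Serialize request/reply pairs so replies from different
     * threads can't get mixed up. */
    static int virtio_gpio_xfer(struct virtio_gpio *vgpio,
                                struct virtio_gpio_msg *req,
                                struct virtio_gpio_msg *resp)
    {
            int ret;

            mutex_lock(&vgpio->lock);       /* may sleep */
            ret = virtio_gpio_send(vgpio, req);
            if (!ret)
                    ret = virtio_gpio_wait_reply(vgpio, resp);
            mutex_unlock(&vgpio->lock);

            return ret;
    }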
>
>> I think it's still about whether or not we need to allow a batch of
>> requests via a queue. Suppose you've submitted two requests A and B; if
>> B is done first, the current code won't work. This is because the reply
>> is transported via rxq buffers rather than reusing the txq buffer, if I
>> read the code correctly.
> Meanwhile I've changed it to allocate a new rx buffer for the reply
> (done right before the request is sent), so everything should be
> processed in the order it was sent, assuming virtio keeps the order
> of the buffers in the queues.


Unfortunately, there's no guarantee that virtio keeps the order unless
the device advertises VIRTIO_F_IN_ORDER (see 2.6.9 in the virtio spec).
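
So either the driver checks for that feature, or it matches replies to
requests explicitly instead of relying on completion order. A minimal
sketch of the feature check (the vgpio_* name is made up):

    #include <linux/virtio_config.h>

    /* Only rely on completion order when the device actually
     * offers VIRTIO_F_IN_ORDER (virtio spec 2.6.9). */
    static bool vgpio_replies_in_order(struct virtio_device *vdev)
    {
            return virtio_has_feature(vdev, VIRTIO_F_IN_ORDER);
    }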

Btw, if possible, it's better to add a link to the userspace code here.


>
>>> Could you please give an example of how bi-directional transmission
>>> within the same queue would look?
>> You can check how virtio-blk did this in:
>>
>> https://docs.oasis-open.org/virtio/virtio/v1.1/csprd01/virtio-v1.1-csprd01.html#x1-2500006
> hmm, I still don't see what the code would actually look like (in qemu
> as well as in the kernel). Just add the fetched inbuf as an outbuf
> (within the same queue)?


Yes, virtio allows adding IN buffers after OUT buffers through
descriptor chaining.
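
On the kernel side it would look roughly like this (a sketch modeled on
virtio-blk; req, resp and vq stand in for whatever the driver actually
uses):

    struct scatterlist out_sg, in_sg, *sgs[2];
    int err;

    /* OUT part: the request, read by the device. */
    sg_init_one(&out_sg, req, sizeof(*req));
    sgs[0] = &out_sg;

    /* IN part: the reply, written by the device into the same
     * descriptor chain. */
    sg_init_one(&in_sg, resp, sizeof(*resp));
    sgs[1] = &in_sg;

    err = virtqueue_add_sgs(vq, sgs, 1, 1, req, GFP_KERNEL);
    if (!err)
            virtqueue_kick(vq);

When the device completes the request, virtqueue_get_buf() hands back
the same token (req here), so each reply is trivially matched to its
request.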

Thanks


>
>>> Maybe add one new buffer per request and one new one per received
>>> async signal?
>> It would be safe to fill the whole rxq and refill it, e.g., when half
>> of the queue has been used.
> Okay, doing that now in v3: there's always at least one rx buffer, and
> requests as well as the intr receiver always add a new one (they get
> removed on fetching, IMHO).
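
Just to sketch the refill I had in mind (untested; struct virtio_gpio
and RX_BUF_SIZE are made-up names):

    /* Top up the rxq until it is full, so the device always has
     * buffers to write replies and events into. */
    static void virtio_gpio_fill_rxq(struct virtio_gpio *vgpio)
    {
            struct scatterlist sg;
            void *buf;

            while ((buf = kzalloc(RX_BUF_SIZE, GFP_KERNEL))) {
                    sg_init_one(&sg, buf, RX_BUF_SIZE);
                    if (virtqueue_add_inbuf(vgpio->rxq, &sg, 1,
                                            buf, GFP_KERNEL) < 0) {
                            kfree(buf);     /* queue is full */
                            break;
                    }
            }
            virtqueue_kick(vgpio->rxq);
    }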
>
>
> --mtx
>



