Question about potential missed IPI events

Xiang W wxjstz at 126.com
Thu Nov 16 03:46:44 PST 2023


On Thu, 2023-11-16 at 03:15 -0800, Bo Gan wrote:
> On 11/16/23 2:11 AM, Xiang W wrote:
> > On Thu, 2023-11-16 at 01:44 -0800, Bo Gan wrote:
> > > On 11/16/23 1:19 AM, Xiang W wrote:
> > > > On Thu, 2023-11-16 at 00:41 -0800, Bo Gan wrote:
> > > > > 
> > > > > Hi Xiang, thank you so much for replying. I'd like to revise your chart a little bit:
> > > > > 
> > > > > Case 1, xchg_ulong in `ipi_process` gets reordered before ipi_clear:
> > > > > 
> > > > >      B                 A                C                 CLINT
> > > > >      |                 |                |                  |
> > > > >      | bit_set         |                |                  |
> > > > >      | send -----------|----------------|----------------> |
> > > > >      |                 | xchg           |                  |
> > > > >      |                 |                | bit_set          |
> > > > >      |                 |                | send ----------> |
> > > > >      |                 | clear----------|----------------> |
> > > > > 
> > > > > In this case, A would observe the update to ipi_data->ipi_type *eventually*, but when?
> > > > > A won't process C's IPI request until another IPI arrives in the future, which
> > > > > might leave the IPI request waiting indefinitely. This is not an efficiency
> > > > > problem, but a correctness problem. I think the fix should be adding a specific
> > > > > fence (we need to reason about the correct one) between `ipi_dev->ipi_clear` and
> > > > > `atomic_raw_xchg_ulong` in `ipi_process`.
> > > > Your statement is more accurate.
> > > > 
> > > 
> > > For case 1, can we agree there's an issue in the current code, and it needs to
> > > be fixed?
> > Yes!
> 
> Great, and let's fix it!
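
A minimal sketch of that fix, assuming the ipi_process structure described
in this thread (the function and field names are taken from the messages
above; the exact fence, "fence o,rw" here, is only a candidate and still
needs to be reasoned about):

    void ipi_process(void)
    {
        u32 hartid = current_hartid();

        /* Ack the doorbell first: MMIO write clearing msip in the CLINT. */
        ipi_dev->ipi_clear(hartid);

        /*
         * Keep the xchg below from being reordered before the clear:
         * order the device output (the clear) before later memory
         * accesses, so a sender whose msip write is wiped by this clear
         * is guaranteed to have its ipi_type bit seen by the xchg.
         */
        __asm__ __volatile__("fence o,rw" : : : "memory");

        /* Atomically fetch and zero the pending event bits. */
        unsigned long ipi_type = atomic_raw_xchg_ulong(&ipi_data->ipi_type, 0);

        while (ipi_type) {
            /* ... dispatch the lowest set bit to its event handler ... */
            ipi_type &= ipi_type - 1;
        }
    }
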
> 
> > > 
> > > > > 
> > > > > 
> > > > > Case 2, xchg_ulong doesn't get reordered, however:
> > > > > 
> > > > >      B                 A                C                 CLINT
> > > > >      |                 |                |                  |
> > > > >      | bit_set         |                |                  |
> > > > >      | send -----------|----------------|----------------> |
> > > > >      |                 | xchg           |            /---> |
> > > > >      |                 | clear----------|-----------/----> |
> > > > >      |                 |                | bit_set  /       |
> > > > >      |                 |                | send ----        |
> > > > Why is there an upward-sloping arrow in your picture? Multiple cores
> > > > observing atomic operations on the same address should not see them
> > > > in inconsistent orders.
> > > > 
> > > 
> > > Let me clarify case 2 a bit more, this time with a downward-sloping
> > > arrow. In this case the clear sent from A took so long that it arrived
> > > at the CLINT after C's send.
> > > 
> > >       B                 A                C                 CLINT
> > >       |                 |                |                  |
> > >       | bit_set ------> |                |                  |
> > >       | send -----------|----------------|----------------> |
> > >       |                 | xchg           |                  |
> > >       |                 | clear----------|----------------> |
> > >       |                 |          \     |                  |
> > >       |                 | <---------\----|-bit_set          |
> > >       |                 |            \   | send ----------> |
> > >       |                 |             \  |                  |
> > >       |                 |              --|----------------> |
> > >                                   
> > > Back to your question: here we have two memory/IO locations, not one.
> > > The first is `ipi_data->ipi_type`; the second is the msip MMIO register
> > > for hart A. Essentially, can the following happen?
> > > 
> > >    i.  Assume there's a proper fence to ensure the xchg orders before the clear in A.
> > > 
> > >    ii. From A's point of view, the xchg observes the bit_set from B,
> > >        but not the bit_set from C.
> > > 
> > >    iii. From the CLINT's point of view, it observes the send from B, then the
> > >         send from C, then the clear from A.
> > > 
> > > I don't know how to reason about this. The only thing I can think of is the
> > > global memory order defined by RVWMO. Can we apply the global memory order
> > > to the interaction between a hart and the CLINT?
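
To make the question concrete, case 2 can be written as a litmus-test-style
sketch (EVT_B/EVT_C and the helper names are hypothetical; the two fences
encode the orderings assumed in i. above and in the senders'
bit_set-then-send sequence):

    unsigned long ipi_type;   /* ipi_data->ipi_type belonging to hart A */
    volatile u32 *msip_a;     /* CLINT msip MMIO register for hart A */

    void hart_B(void)         /* hart_C is identical, with EVT_C */
    {
        atomic_raw_set_bit(EVT_B, &ipi_type);               /* bit_set */
        __asm__ __volatile__("fence w,o" : : : "memory");
        *msip_a = 1;                                        /* send */
    }

    void hart_A(void)         /* runs when the IPI is taken */
    {
        unsigned long t = atomic_raw_xchg_ulong(&ipi_type, 0); /* xchg */
        __asm__ __volatile__("fence rw,o" : : : "memory");     /* (i) */
        *msip_a = 0;                                           /* clear */
        /*
         * The question: is the outcome t == (1UL << EVT_B) allowed while
         * the CLINT observes send(B), then send(C), then clear(A)? If so,
         * C's event is lost exactly as in the chart above.
         */
        (void)t;
    }
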
It is not normal for a peripheral to observe accesses out of order. This is
obvious on a single hart, because many device operations have ordering and
timing requirements. The same should hold across multiple harts; otherwise
errors would occur whenever a device is shared by multiple harts.
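
For example, on the sender side it is the hart, not the device, that has to
enforce this: the ipi_type store must be globally visible before the MMIO
doorbell write reaches the CLINT. A sketch with OpenSBI-style names (the
exact fence is again an assumption):

    void ipi_send(u32 target_hartid, u32 event)
    {
        /* Publish the event bit before ringing the doorbell. */
        atomic_raw_set_bit(event, &target_ipi_data->ipi_type);

        /*
         * Order the memory write before the device output, so the CLINT
         * cannot deliver the IPI before the ipi_type bit is visible to
         * the receiving hart.
         */
        __asm__ __volatile__("fence w,o" : : : "memory");

        ipi_dev->ipi_send(target_hartid);   /* MMIO write to msip */
    }
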

Regards,
Xiang W
> > 
> > I understand what you mean. The case I proposed is caused by differing core
> > speeds and has nothing to do with memory ordering. Do you think the memory
> > order observed by the hart and the CLINT can be inconsistent?
> > 
> > How can we solve this? A fence should only constrain memory ordering on the
> > same core.
> > 
> > Regards,
> > Xiang W
> 
> Yes, my issue is about memory ordering, and yes, I'm questioning the global
> memory order observed by the hart and the CLINT. What kind of rules do they
> follow? Where are the rules defined? In the RISC-V ISA? Or in the SiFive
> manual? If such reordering is not possible, I'm happy to be lectured on the
> reasoning.
> 
> Bo
> 
> 


