[RFC PATCH v0.2] PCI: Add support for tango PCIe host bridge

Marc Zyngier marc.zyngier at arm.com
Tue Apr 11 12:43:46 EDT 2017


On 11/04/17 17:26, Mason wrote:
> On 11/04/2017 17:49, Marc Zyngier wrote:
>> On 11/04/17 16:13, Mason wrote:
>>> On 27/03/2017 19:09, Marc Zyngier wrote:
>>>
>>>> Here's what your system looks like:
>>>>
>>>> PCI-EP -------> MSI Controller ------> INTC
>>>>          MSI                    IRQ
>>>>
>>>> A PCI MSI is always edge. No ifs, no buts. That's what it is, and nothing
>>>> else. Now, your MSI controller signals its output using a level interrupt,
>>>> since you need to whack it on the head so that it lowers its line.
>>>>
>>>> There is not a single trigger, because there is not a single interrupt.
>>>
>>> Hello Marc,
>>>
>>> I was hoping you or Thomas might help clear some confusion
>>> in my mind around IRQ domains (struct irq_domain).
>>>
>>> I have read https://www.kernel.org/doc/Documentation/IRQ-domain.txt
>>>
>>> IIUC, there should be one IRQ domain per IRQ controller.
>>>
>>> I have this MSI controller handling 256 interrupts, so I should
>>> have *one* domain for all possible MSIs. Yet the Altera driver
>>> registers *two* domains (msi_domain and inner_domain).
>>>
>>> Could I make everything work with a single IRQ domain?
>>
>> No, because you have two irqchips. One that deals with the HW, and the
>> other that deals with how the MSIs are presented to the kernel,
>> depending on the bus (PCI or something else). The fact that the latter
>> doesn't really drive any HW doesn't make it irrelevant.
> 
> The example given in IRQ-domain.txt is
> 
>   Device --> IOAPIC -> Interrupt remapping Controller -> Local APIC -> CPU
> 
> with an irq_domain for each interrupt controller.

Which doesn't use the generic MSI layer the way arm/arm64 do, so that's
the wrong example.

> 
> 
> On my system I have:
> 
>   PCI-EP -> MSI controller -> System INTC -> GIC -> CPU
> 
> The driver for System INTC is drivers/irqchip/irq-tango.c
> I think it has only one domain.
> 
> For the GIC, drivers/irqchip/irq-gic.c
> I see a call to irq_domain_create_linear()

Can we please stick to the problem at hand and not drift into other
considerations which do not matter at all?

> Is the handling of MSI different, and that is why we need
> two domains? (Sorry, I did not understand that part well.)

Let me repeat it, then:
- You have a top-level MSI domain that is completely virtual, mapping a
virtual hwirq to the virtual interrupt. Nothing to see here.
- You have your own irqdomain, associated with your own irq_chip, which
does what it needs to do, talking to the HW and allocating interrupts.
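
In code, and roughly following the same pattern as the Altera driver,
the skeleton could look like the below. This is only a sketch: all the
tango_* names and the struct tango_msi fields are made up, and the
HW-facing callbacks (alloc/free, compose_msi_msg, set_affinity) are
not shown.

#include <linux/irq.h>
#include <linux/irqdomain.h>
#include <linux/msi.h>
#include <linux/of.h>

/* Top-level chip: only there so the PCI/MSI layer has something to
 * mask/unmask. It never touches your HW. */
static struct irq_chip tango_msi_top_chip = {
	.name		= "MSI",
	.irq_mask	= pci_msi_mask_irq,
	.irq_unmask	= pci_msi_unmask_irq,
};

static struct msi_domain_info tango_msi_dom_info = {
	.flags	= MSI_FLAG_USE_DEF_DOM_OPS | MSI_FLAG_USE_DEF_CHIP_OPS,
	.chip	= &tango_msi_top_chip,
};

/* Bottom chip/domain: this one is yours. Its .alloc/.free hand out
 * the 256 hwirqs and program the HW. */
static struct irq_chip tango_msi_bottom_chip = {
	.name			= "TANGO-MSI",
	.irq_ack		= tango_msi_ack,
	.irq_compose_msi_msg	= tango_compose_msi_msg,
	.irq_set_affinity	= tango_msi_set_affinity,
};

static const struct irq_domain_ops tango_msi_dom_ops = {
	.alloc	= tango_irq_domain_alloc,
	.free	= tango_irq_domain_free,
};

static int tango_allocate_domains(struct tango_msi *msi)
{
	struct fwnode_handle *fwnode = of_node_to_fwnode(msi->node);

	/* Your domain, covering the 256 MSIs the HW implements. */
	msi->dom = irq_domain_add_linear(NULL, 256, &tango_msi_dom_ops, msi);
	if (!msi->dom)
		return -ENOMEM;

	/* The virtual, PCI-facing MSI domain, stacked on top of yours. */
	msi->msi_dom = pci_msi_create_irq_domain(fwnode, &tango_msi_dom_info,
						 msi->dom);
	if (!msi->msi_dom) {
		irq_domain_remove(msi->dom);
		return -ENOMEM;
	}

	return 0;
}

Note that only the bottom domain gets your private structure as
host_data; the top-level one is entirely driven by the core code.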

> When I looked at drivers/pci/host/pci-hyperv.c
> they seem to have a single pci_msi_create_irq_domain call,
> no call to domain_add or domain_create.
> And they have a single struct irq_chip.

Which is not using the generic MSI layer the way we do either.

> 
>> You don't need to tell it anything about the number of interrupts you
>> manage. As for your private structure, you've already given it to your
>> low level domain, and there is no need to propagate it any further.
> 
> My main issue is that in the ack callback, I was in the "wrong"
> domain, in that d->hwirq was not the MSI number. So I thought
> I needed a single irq_domain.

No. You need two, but you only need to manage yours.
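
And in *your* domain, d->hwirq *is* the MSI number, because that's what
your .alloc callback put there. So the ack could look like this (this
is the tango_msi_ack left out of the sketch above; MSI_STATUS() and the
msi->base field are made up, standing for whatever your status
registers are):

static void tango_msi_ack(struct irq_data *d)
{
	struct tango_msi *msi = irq_data_get_irq_chip_data(d);

	/* d->hwirq is the index picked at allocation time (0-255),
	 * so it can be used directly to clear the status bit and
	 * let the level output drop. */
	writel_relaxed(BIT(d->hwirq % 32),
		       msi->base + MSI_STATUS(d->hwirq / 32));
}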

> Is there a function to map virq to the hwirq in any domain?

Be more precise. If you want the hwirq associated with a given domain's
view of a virq, that's the hwirq field in the corresponding irq_data
structure. Or are you after something else?
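
If you only have the virq and want a specific domain's view of it, then
something like this would do (sketch, reusing the msi->dom pointer from
the earlier skeleton):

	struct irq_data *d = irq_domain_get_irq_data(msi->dom, virq);

	if (d)
		pr_info("hwirq in my domain: %lu\n", d->hwirq);

But from within your own irq_chip callbacks you already get the right
irq_data passed in, so you shouldn't need it there.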

	M.
-- 
Jazz is not dead. It just smells funny...


