ath11k and vfio-pci support

Baochen Qiang quic_bqiang at quicinc.com
Tue Jan 16 02:08:35 PST 2024



On 1/16/2024 1:46 AM, Alex Williamson wrote:
> On Sun, 14 Jan 2024 16:36:02 +0200
> Kalle Valo <kvalo at kernel.org> wrote:
> 
>> Baochen Qiang <quic_bqiang at quicinc.com> writes:
>>
>>>>> Strange that it still fails. Are you now seeing this error on your
>>>>> host or in your QEMU guest, or both?
>>>>> Could you share your test steps? If you can, please be as detailed
>>>>> as possible since I'm not familiar with passing WLAN hardware to a
>>>>> VM using vfio-pci.
>>>>
>>>> Just in QEMU; the hardware works fine on my host machine.
>>>> I basically followed this guide to set it up. It's written in the
>>>> context of GPUs/libvirt, but the host setup is exactly the same. By
>>>> no means do you need to read it all; once you have set vfio-pci.ids
>>>> and see your unclaimed adapter you can stop:
>>>> https://wiki.archlinux.org/title/PCI_passthrough_via_OVMF
>>>> In short, you should be able to set the following host kernel
>>>> options and reboot (assuming your motherboard/hardware is compatible):
>>>> intel_iommu=on iommu=pt vfio-pci.ids=17cb:1103
>>>> Obviously change the vendor/device IDs to whatever ath11k hardware
>>>> you have. Once the host is rebooted you should see your WLAN adapter
>>>> as UNCLAIMED, with the driver in use shown as vfio-pci. If not, it's
>>>> likely your motherboard just isn't compatible: the device has to be
>>>> in its own IOMMU group (you could try switching PCI ports if this is
>>>> the case).
>>>> I then build a "kvm_guest.config" kernel with the driver/firmware
>>>> for ath11k and boot into that with the following QEMU options:
>>>> -enable-kvm -device vfio-pci,host=<PCI address>
>>>> If it seems easier you could also utilize IWD's test-runner, which
>>>> handles launching the kernel under QEMU automatically, detects any
>>>> vfio devices and passes them through, and mounts some useful host
>>>> folders into the VM. It's actually a very good general-purpose tool
>>>> for kernel testing, not just for IWD:
>>>> https://git.kernel.org/pub/scm/network/wireless/iwd.git/tree/doc/test-runner.txt
>>>> Once set up, you can just run test-runner with a few flags and
>>>> you'll boot into a shell:
>>>> ./tools/test-runner -k <kernel-image> --hw --start /bin/bash
>>>> Please reach out if you have questions, and thanks for looking into
>>>> this.
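(For anyone else trying to reproduce this, the host-side setup described
above boils down to roughly the following. This is only a sketch assuming
a GRUB-based Intel host; the 17cb:1103 IDs and the 02:00.0 address are
just the values from this thread, and the kernel path is a placeholder.)

  # 1. Reserve the adapter for vfio-pci on the kernel command line
  #    (edit /etc/default/grub, regenerate the grub config, reboot)
  GRUB_CMDLINE_LINUX="... intel_iommu=on iommu=pt vfio-pci.ids=17cb:1103"

  # 2. After the reboot, confirm the adapter is bound to vfio-pci and
  #    check that it sits in its own IOMMU group
  lspci -nnk -d 17cb:1103      # expect "Kernel driver in use: vfio-pci"
  find /sys/kernel/iommu_groups/ -type l

  # 3. Boot the guest kernel with the device passed through
  qemu-system-x86_64 -enable-kvm -nographic \
      -kernel /path/to/bzImage -append "console=ttyS0" \
      -device vfio-pci,host=0000:02:00.0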
>>>
>>> Thanks for these details. I reproduced this issue by following your guide.
>>>
>>> Seems the root cause is that the MSI vector assigned to WCN6855 in
>>> QEMU is different from the one on the host. In my case the MSI vector
>>> in QEMU is [Address: fee00000  Data: 0020] while on the host it is
>>> [Address: fee00578  Data: 0000]. So in QEMU ath11k configures the MSI
>>> vector [Address: fee00000  Data: 0020] into the WCN6855
>>> hardware/firmware, and the firmware uses that vector to fire
>>> interrupts at the host/QEMU. However, the host IOMMU doesn't know
>>> that vector because the real vector is [Address: fee00578  Data:
>>> 0000]; as a result the host blocks the interrupt and reports errors,
>>> see the log below:
>>>
>>> [ 1414.206069] DMAR: DRHD: handling fault status reg 2
>>> [ 1414.206081] DMAR: [INTR-REMAP] Request device [02:00.0] fault index
>>> 0x0 [fault reason 0x25] Blocked a compatibility format interrupt
>>> request
>>> [ 1414.210334] DMAR: DRHD: handling fault status reg 2
>>> [ 1414.210342] DMAR: [INTR-REMAP] Request device [02:00.0] fault index
>>> 0x0 [fault reason 0x25] Blocked a compatibility format interrupt
>>> request
>>> [ 1414.212496] DMAR: DRHD: handling fault status reg 2
>>> [ 1414.212503] DMAR: [INTR-REMAP] Request device [02:00.0] fault index
>>> 0x0 [fault reason 0x25] Blocked a compatibility format interrupt
>>> request
>>> [ 1414.214600] DMAR: DRHD: handling fault status reg 2
>>>
>>> While I don't think there is a way for QEMU/ath11k to get the real
>>> MSI vector from the host, I will read the vfio code to check further.
>>> Before that, to unblock you, a possible hack is to hard-code the MSI
>>> vector in QEMU to the same value as on the host, provided the host
>>> MSI vector doesn't change.
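(For reference, the two [Address/Data] pairs above are simply the message
address/data from the device's standard MSI capability, so the mismatch
can be compared directly with lspci on the host and inside the guest;
note that the BDF seen inside the guest will usually differ from 02:00.0.)

  lspci -vv -s 02:00.0 | grep -A1 'MSI:'
  # host:  Address: fee00578  Data: 0000   <- what the IOMMU/remapper expects
  # guest: Address: fee00000  Data: 0020   <- what ath11k reads and hands to firmware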
>>
>> Baochen, awesome that you were able to debug this further. Now we at
>> least know what the problem is.
> 
> It's an interesting problem; I don't think we've seen another device
> where the driver reads the MSI register in order to program another
> hardware entity to match the MSI address and data configuration.
> 
> When assigning a device, the host and guest use entirely separate
> address spaces for MSI interrupts.  When the guest enables MSI, the
> operation is trapped by the VMM and triggers an ioctl to the host to
> perform an equivalent configuration.  Generally the physical device
> will interrupt within the host, where it may be directly attached to
> KVM to signal the interrupt, triggered through the VMM, or, where
> virtualization hardware supports it, delivered directly to the vCPU.
> From the VM perspective, the guest address/data pair is used to signal
> the interrupt, which is why it makes sense to virtualize the MSI
> registers.
Hi Alex, could you elaborate a bit more? Why is MSI virtualization 
necessary from the VM perspective?

And, maybe a stupid question: is it possible for VM/KVM or vfio to 
virtualize only write operations to the MSI registers while leaving read 
operations un-virtualized? I am asking because that way ath11k might get 
a chance to run in a VM after reading back the real vector.
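To make the question concrete, here is a rough sketch of what such a read 
looks like today. The guest BDF 00:04.0 is only an assumption, and the 
data register sits at +0x8 instead of +0xc if the function is not 64-bit 
capable (lspci -vv shows which):

  # inside the guest: read the MSI message address/data that ath11k sees
  setpci -s 00:04.0 CAP_MSI+0x4.l   # Message Address low (0xfee00000 above)
  setpci -s 00:04.0 CAP_MSI+0xc.w   # Message Data (0x0020 above)
  # both reads go through the virtualized MSI capability, so the guest
  # never sees the host values (fee00578 / 0000) that the remapper expects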

> 
> Off hand I don't have a good solution for this; the hardware is
> essentially imposing a unique requirement for MSI programming in that
> the driver needs visibility of the physical MSI address and data.  It's
> conceivable that device-specific code could either make the physical
> address/data pair visible to the VM or trap the firmware programming to
> inject the correct physical values.  Is there somewhere other than the
> standard MSI capability in config space that the driver could learn the
> physical values, i.e. somewhere that isn't virtualized?  Thanks,
I don't think we have such a capability in configuration space.

> 
> Alex
> 


