[ath9k-devel] Dropped frames (unauthorized port) in AP mode
Thu Jun 20 20:12:55 PDT 2013
* On 18.06.2013 11:25 PM, Mihai Moldovan wrote:
> Looking at the kernel source (net/mac80211/tx.c), this condition is being triggered:
> if (unlikely(!ieee80211_vif_is_mesh(&sdata->vif) &&
> !is_multicast_ether_addr(hdr.addr1) && !authorized &&
> (cpu_to_be16(ethertype) != sdata->control_port_protocol ||
> !ether_addr_equal(sdata->vif.addr, skb->data + ETH_ALEN))))
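For reference, the drop logic boils down to something like the standalone sketch below. The struct and function names here are simplified stand-ins of my own, not the real mac80211 structures, and the endianness handling is omitted (the real code compares a `cpu_to_be16()` value against the stored `__be16` protocol):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>
#include <string.h>

#define ETH_ALEN  6
#define ETH_P_PAE 0x888E /* 802.1X EAPOL */

/* Simplified stand-in for the mac80211 state involved in the check. */
struct fake_port {
	bool is_mesh;
	bool authorized;
	uint16_t control_port_protocol; /* host order here, for simplicity */
};

/* Returns true when the frame would be dropped on an unauthorized port:
 * non-mesh, unicast destination, port not yet authorized, and the frame
 * is not an EAPOL frame sourced from our own interface address. */
static bool drop_unauthorized(const struct fake_port *p,
			      bool is_multicast,
			      uint16_t ethertype,
			      const uint8_t *src_addr,
			      const uint8_t *vif_addr)
{
	return !p->is_mesh &&
	       !is_multicast &&
	       !p->authorized &&
	       (ethertype != p->control_port_protocol ||
		memcmp(vif_addr, src_addr, ETH_ALEN) != 0);
}
```

So on an unauthorized port, only EAPOL frames from the interface's own address get through; anything else (including a frame with ethertype 0x0006) is dropped.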
I fugly print-debugged this statement, and I'm heavily confused.
The subcondition "cpu_to_be16(ethertype) != sdata->control_port_protocol" is
true for me!
Thus, I checked "sdata->control_port_protocol", which is ETH_P_PAE (0x888E)...
i.e., 802.1X. Great for WiFi authentication.
"ethertype", on the other hand, is ETH_P_DDCMP (0x0006), which left me totally
confused! How is this possible? ethertype should also be ETH_P_PAE, and
definitely not some internally-used DECnet protocol.
Yes, CONFIG_DECNET is turned on in my kernel config, but I'm not even (actively)
using it.
ethertype is set from the socket buffer's data (ethertype = (skb->data[12] << 8)
| skb->data[13]), but what is generating this packet? If the ethertype fetching
in net/mac80211/tx.c is actually correct, what would ever set it to 0x0006?
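That fetch just reads the two ethertype bytes straight out of the Ethernet header, which sits at the start of skb->data: 6 bytes destination MAC, 6 bytes source MAC, then the ethertype at offsets 12 and 13. A minimal userspace sketch of the same byte arithmetic (my own helper name, not kernel code):

```c
#include <assert.h>
#include <stdint.h>

/* Extract the ethertype from a raw Ethernet frame: the two bytes right
 * after the 6-byte destination and 6-byte source MAC addresses. */
static uint16_t read_ethertype(const uint8_t *frame)
{
	return (uint16_t)((frame[12] << 8) | frame[13]);
}
```

So a value of 0x0006 simply means bytes 12 and 13 of the frame were 0x00 0x06; whatever built that skb put it there.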
CCing Johannes Berg, as git is "blaming" him for those line(s) (originally Jiri
Benc, but I haven't seen list posts from him in a while, so I'm assuming he's
not maintaining that code anymore).