RFC: OpenConnect enhancements
Kevin Cernekee
cernekee at gmail.com
Mon Oct 1 14:12:24 EDT 2012
On Sun, Sep 30, 2012 at 11:42 PM, David Edmondson <dme at dme.org> wrote:
>> This now seems to work reasonably well with Dante, e.g. "socksify
>> telnet foo.somedomain.com". Since ocproxy only passes TCP, I told
>> Dante to fake out gethostbyname(), and just pass the hostname string
>> in the SOCKS connection request instead.
>>
>> tsocks and Opera were both able to connect through the proxy, but they
>> ran their DNS lookups locally, so addressing internal hosts by name
>> was problematic.
>
> My own use case requires only that netcat work through the proxy, so I'm not familiar with those other applications. Is it a problem with tsocks and Opera that they do local DNS lookup or a problem with the proxy code?
Dante (socksify) overrides the DNS resolver functions to return a fake
IP; when connect() is later called with that fake IP, it sends the
hostname over the SOCKS connection so the lookup happens on the proxy side.
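For reference, this works because SOCKS5 (RFC 1928) lets the client put
a domain name in the CONNECT request instead of an IP (ATYP 0x03), so
the proxy end does the resolution.  Building such a request looks
roughly like this (untested sketch; the helper name is made up):

#include <stdint.h>
#include <string.h>
#include <arpa/inet.h>

/* Build a SOCKS5 CONNECT request carrying a hostname (ATYP=0x03)
 * instead of a resolved address, so name resolution happens on the
 * proxy side.  Returns the request length, or -1 on error. */
static int build_socks5_connect(uint8_t *buf, size_t buflen,
				const char *host, uint16_t port)
{
	size_t hostlen = strlen(host);
	uint16_t nport = htons(port);

	if (hostlen > 255 || buflen < hostlen + 7)
		return -1;

	buf[0] = 0x05;			/* VER: SOCKS5 */
	buf[1] = 0x01;			/* CMD: CONNECT */
	buf[2] = 0x00;			/* RSV */
	buf[3] = 0x03;			/* ATYP: domain name */
	buf[4] = (uint8_t)hostlen;	/* name length */
	memcpy(&buf[5], host, hostlen);
	memcpy(&buf[5 + hostlen], &nport, 2);

	return (int)(hostlen + 7);
}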
tsocks has a compile-time option to override the resolver functions,
but all that option does is set a resolver flag to use TCP instead of
UDP before calling the stock resolver.  I was just playing with the
precompiled Debian packages for these programs, so I didn't bother
rebuilding tsocks to see whether this option worked.
AFAICT Opera still uses local DNS only:
http://www.opera.com/docs/changelogs/unix/1110/ (scroll down to "SOCKS
Proxy")
So, these are all application-level issues, not ocproxy issues.  All
three programs do support SOCKS5, but only Dante seems to make full use
of proxy-based DNS resolution.
>> I am still concerned about memory usage, which keeps growing with each
>> connection. Maybe the thread startup/teardown should work from a
>> fixed "pool" like Apache does; currently it is dynamic.
>
> Rather than have a pair of threads for each connection we could have a single thread for "reading from local sockets" and another for "reading from lwip connections" (the pair required due to the differing API).
The existing code uses a blocking call to netconn_recv(), which
basically winds up waiting on a pthread condition variable.  With
dedicated threads, many of these calls can block in parallel.
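To make the current pattern concrete, each connection's receive path is
roughly the following (untested sketch; I'm using the lwIP 1.4.x
netconn_recv() signature, and the local-socket write side is
simplified):

#include <unistd.h>
#include "lwip/api.h"

/* Per-connection worker: block in netconn_recv() and copy each netbuf
 * to the local socket.  One of these runs per proxied connection in
 * the current design. */
static void recv_thread(struct netconn *conn, int local_fd)
{
	struct netbuf *nb;

	/* lwIP 1.4.x: netconn_recv() returns err_t and hands back a netbuf */
	while (netconn_recv(conn, &nb) == ERR_OK) {
		void *data;
		u16_t len;

		/* a netbuf may be chained; walk each fragment */
		do {
			netbuf_data(nb, &data, &len);
			if (write(local_fd, data, len) < 0) {
				netbuf_delete(nb);
				return;
			}
		} while (netbuf_next(nb) >= 0);

		netbuf_delete(nb);
	}
	/* peer closed or error: teardown happens elsewhere */
}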
If we didn't want a new thread to watch for new data on each netconn,
I see at least two options:
1) Use netconn_new_with_callback() instead of netconn_new(), then have
the callback trigger a nonblocking netconn_recv() operation.  (I think
this runs from the TCP thread, so it's probably best just to wake up
our other thread to perform the recv; see the sketch below.)
2) Switch to the socket API, and use lwip_select()
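For option 1, I'm imagining something like the following (untested
sketch; the self-pipe used to kick the worker thread is made up for
illustration, not existing ocproxy code):

#include <unistd.h>
#include "lwip/api.h"

static int wakeup_pipe[2];	/* assumed: created with pipe() at startup */

/* Runs in lwIP's tcpip_thread context, so do as little as possible
 * here: just poke the worker thread, which owns the actual recv. */
static void conn_event_cb(struct netconn *conn, enum netconn_evt evt,
			  u16_t len)
{
	(void)conn;
	(void)len;

	if (evt == NETCONN_EVT_RCVPLUS) {
		char c = 0;
		/* best-effort wakeup; the worker drains the pipe */
		(void)write(wakeup_pipe[1], &c, 1);
	}
}

/* New connections would be created with the callback variant: */
static struct netconn *new_proxied_conn(void)
{
	return netconn_new_with_callback(NETCONN_TCP, conn_event_cb);
}

The worker would then drain the pipe and perform the recv on whichever
netconns are ready; option 2 would collapse all of this into a single
lwip_select() loop instead.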
I will play around with this a little bit and let you know what I find.