[Patch] Allows tun_mainloop to handle multiple packets in single read.
Kazuyoshi Aizawa
admin2 at whiteboard.ne.jp
Sun Nov 27 11:00:32 EST 2011
Hi David,
Sorry for my late response.
> I don't have my company VPN certificates installed on the Solaris box,
> so I used 'openconnect --cookieonly' on my laptop and then passed that
> cookie to 'openconnect --cookie-on-stdin' on Solaris. But when you hit
> Ctrl-C, openconnect will log out and the session will be closed, so the
> cookie will no longer work. Intending to save myself the hassle of
> running 'openconnect --cookieonly' again to get a new cookie, I hit
> Ctrl-\ to *kill* openconnect instead of Ctrl-C.
>
> Under Linux, the tun0 device would then completely disappear, and
> running openconnect again would work correctly. On Solaris, though, it
> didn't. My next run of openconnect gave me:
>
> Can't select unit: File exists
> Set up tun device failed
>
> I have a *vague* recollection of being able to use 'ifconfig tun0
> unplumb' to recover from this in the past, with OpenSolaris. My current
> VM is OpenIndiana oi_151a. And 'ifconfig tun0 unplumb' gives me:
> ifconfig: cannot unplumb tun0: Invalid argument provided
>
> Is there something that OpenConnect should be doing differently to
> recover from this automatically?
One option would be to use the I_LINK/I_UNLINK ioctl commands instead of
I_PLINK/I_PUNLINK, so that the stream associated with the tun driver
disappears when the corresponding file descriptor is closed.
If I_PLINK (persistent link) is specified, the driver's stream remains even
after the fd is closed, for example because the user process has terminated.
The ifconfig command really does need this, because it always exits after
doing its work.
But since openconnect keeps running as a daemon, it would be OK to use
I_LINK/I_UNLINK. That would have the benefit of removing the tun driver's
stream automatically when openconnect terminates.
I know that openvpn uses I_PLINK now, though it WAS using I_LINK...
I'm sorry, but I'm not sure why it was changed, and I don't know what
the benefit of that is. There might be one, but...
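
In case it is useful, here is a minimal sketch of the difference at plumbing
time (the TUNNEWPPA/SIOCSLIFNAME setup is omitted and the function name is
just for illustration, so this is not the exact openconnect code):

#include <stropts.h>

/* Link the tun stream under the ip stream.  With I_LINK the link is
 * dissolved automatically when ip_fd is closed (e.g. when the daemon
 * exits); with I_PLINK it persists and must be removed explicitly with
 * I_PUNLINK using the returned muxid. */
static int link_tun(int ip_fd, int tun_fd, int persistent)
{
    int muxid;

    /* ... TUNNEWPPA / SIOCSLIFNAME setup of the tun stream omitted ... */

    muxid = ioctl(ip_fd, persistent ? I_PLINK : I_LINK, tun_fd);
    if (muxid < 0)
        return -1;      /* linking failed */

    return muxid;       /* caller keeps this for I_(P)UNLINK later */
}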
On the other hand, I have modified my tunctl command for Solaris.
http://www.whiteboard.ne.jp/~admin2/tuntap/#tunctl
It can now store the muxid in the ip module and also retrieve it again later.
So you can use this new tunctl command to remove a sticky tun interface
when your process has terminated abnormally.
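
As a rough sketch (simplified, not the exact tunctl code; the function name
and error handling are just illustrative), the recovery path asks the ip
module for the muxid it stored for the interface and then breaks the
persistent link:

#include <fcntl.h>
#include <stropts.h>
#include <string.h>
#include <unistd.h>
#include <sys/sockio.h>
#include <net/if.h>

/* Remove a stale, persistently linked tun interface (e.g. "tun0"). */
static int unplumb_stale_tun(const char *ifname)
{
    struct lifreq lifr;
    int ip_fd, ret = -1;

    if ((ip_fd = open("/dev/ip", O_RDWR)) < 0)
        return -1;

    memset(&lifr, 0, sizeof(lifr));
    strncpy(lifr.lifr_name, ifname, sizeof(lifr.lifr_name) - 1);

    /* Retrieve the muxid that the ip module recorded at I_PLINK time,
     * then dissolve the persistent link so the interface goes away. */
    if (ioctl(ip_fd, SIOCGLIFMUXID, &lifr) >= 0 &&
        ioctl(ip_fd, I_PUNLINK, lifr.lifr_ip_muxid) >= 0)
        ret = 0;

    close(ip_fd);
    return ret;
}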
> Unless there's some other reason why getmsg() itself is slower than
> read(), perhaps?
If we knew the boundary of each packet, we could use read(2) and retrieve
as many packets as possible in a single read(2) to reduce the number of
system calls. But, as you know, we can't. And if we read a single packet
at a time, I don't think there is any significant performance difference
between read(2) and getmsg(2). That is just my impression, though.
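
Just to illustrate, reading one packet with getmsg(2) looks roughly like
this (a simplified sketch, not the exact code from the patch; the function
name is illustrative):

#include <stropts.h>

/* Read one packet from the tun STREAMS device.  Returns the packet
 * length, or -1 on error (errno == EAGAIN when nothing is queued on a
 * non-blocking fd). */
static int tun_read_one(int tun_fd, char *buf, int buflen)
{
    struct strbuf data;
    int flags = 0;

    data.buf = buf;
    data.maxlen = buflen;
    data.len = 0;

    /* The tun driver puts exactly one packet in each STREAMS message,
     * so we never have to find packet boundaries ourselves.  A return
     * value with MOREDATA set would only mean our buffer was too small
     * for this particular message. */
    if (getmsg(tun_fd, NULL, &data, &flags) < 0)
        return -1;

    return data.len;
}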
Hope this helps.
Regards,
Kazuyoshi
2011/11/24 David Woodhouse <dwmw2 at infradead.org>:
> On Wed, 2011-11-23 at 19:46 +0900, Kazuyoshi Aizawa wrote:
>> It is guaranteed that getmsg would return a single message from the
>> kernel side buffer, aka the stream head, and we can expect a single
>> message to contain a single packet's data, as the tun driver treats
>> each packet as a single message. By using getmsg, we don't need to
>> care about the boundary of the packet data.
>
> Hm, looking at the read(2) manual page¹, shouldn't we be able to put the
> tun_fd into 'message-discard' mode, which would make it behave like the
> tun device does on other operating systems?
>
> I might have suggested that if I'd realised it yesterday, but now I'm
> inclined to stick with the version of your patch that I committed
> earlier. It's actually slightly more efficient using getmsg() because of
> the MOREDATA flag. With read() we have to keep reading from the tun_fd
> until we get EAGAIN, but with getmsg() we can avoid that final
> unnecessary system call, because we *know* there's nothing left when the
> MOREDATA bit isn't set.
>
> Unless there's some other reason why getmsg() itself is slower than
> read(), perhaps?
>
> --
> dwmw2
>
> ¹ http://pubs.opengroup.org/onlinepubs/007904975/functions/read.html
>