VPN via the TUN/TAP device

From OpenVZ Virtuozzo Containers Wiki

<translate>
This article describes how to use VPN via the TUN/TAP device inside a [[container]].
== Kernel TUN/TAP support ==
OpenVZ supports VPN inside a container via the kernel TUN/TAP module and device.

To allow container 101 to use the TUN/TAP device, the following should be done:
Make sure the '''tun''' module has already been loaded on the [[hardware node]]:

  lsmod | grep tun
If it is not there, use the following command to load the '''tun''' module:

  modprobe tun
To make sure the '''tun''' module is loaded automatically on every reboot, you can also add it to <code>/etc/modules.conf</code> (on RHEL, see the <code>/etc/sysconfig/modules/</code> directory).
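On RHEL-family systems, module autoloading is done via executable scripts in <code>/etc/sysconfig/modules/</code>. A minimal sketch (the file name <code>tun.modules</code> is our own choice, not mandated; any executable <code>*.modules</code> script in that directory is run at boot):

<pre>
#!/bin/sh
# /etc/sysconfig/modules/tun.modules
# Load the tun module at boot; errors are ignored in case it is built in.
/sbin/modprobe tun >/dev/null 2>&1
</pre>

Remember to make the script executable, e.g. <code>chmod +x /etc/sysconfig/modules/tun.modules</code>.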
== Granting a container access to TUN/TAP ==

Allow your container to use the TUN/TAP device by running the following commands on the host node:

  CTID=101
  vzctl set $CTID --devnodes net/tun:rw --capability net_admin:on --save
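A quick sanity check (a suggestion, not part of the original setup) is to list the device from the host:

  vzctl exec $CTID ls -l /dev/net/tun

It should show a character device with major number 10 and minor number 200.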
== Configuring VPN inside container ==
 
After the configuration steps above are done, it is possible to use VPN software working with TUN/TAP inside the container just like on a usual standalone Linux box.
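As an illustration, a minimal OpenVPN server configuration using the tun device might look like the sketch below; all addresses, ports and file names are placeholders, not values from this article:

<pre>
# /etc/openvpn/server.conf -- minimal tun-based server (sketch)
dev tun
proto udp
port 1194
server 10.8.0.0 255.255.255.0   # VPN subnet handed out to clients
ca ca.crt
cert server.crt
key server.key
dh dh.pem
keepalive 10 120
persist-tun
</pre>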
The following software can be used for VPN with TUN/TAP:

* Tinc (http://tinc-vpn.org)
* Virtual TUNnel (http://vtun.sourceforge.net)
== Reaching hosts behind VPN container ==
 
In order to reach hosts behind the VPN container, you must configure it to use a VETH interface instead of a VENET one, at least with an OpenVPN server.
  
With a VENET interface you will only reach the VPN container itself.
  
To use a VETH device, follow the [[Veth]] article.
  
If you insist on using a VENET interface and need to reach hosts behind the OpenVPN VE, you can use source NAT (SNAT): rewrite the source of the packets so that they appear to originate from the OpenVPN server VE.
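For example, assuming the OpenVPN subnet is 10.8.0.0/24 and the container's VENET address is 192.0.2.10 (both placeholders), a source NAT rule inside the VPN container could look like:

  iptables -t nat -A POSTROUTING -s 10.8.0.0/24 -o venet0 -j SNAT --to-source 192.0.2.10

Note that this requires the <code>nat</code> table to be available inside the container; see the Troubleshooting section below.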
== Tinc problems ==
  
Using the default venet0:0 interface in the container, tinc seems to have problems: it complains that port 655 is already in use on 0.0.0.0.
Netstat shows that port 655 is available:
<pre>
root@132 / [3]# netstat -l
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State      
tcp        0      0 localhost.localdom:8001 *:*                     LISTEN     
tcp        0      0 *:2223                  *:*                     LISTEN     
tcp6       0      0 [::]:2223               [::]:*                  LISTEN     
udp6       0      0 [::]:talk               [::]:*                             
udp6       0      0 [::]:ntalk              [::]:*                             
Active UNIX domain sockets (only servers)
Proto RefCnt Flags       Type       State         I-Node   Path
unix  2      [ ACC ]     STREAM     LISTENING     4831020  /var/run/uml-utilities/uml_switch.ctl
</pre>
Starting the tincd daemon, however, it complains that port 655 is not available:
<pre>
root@132 / [4]# tincd -n myvpn
root@132 / [5]# tail -f /var/log/syslog
Jul 26 14:08:01 132 /USR/SBIN/CRON[15159]: (root) CMD (   cd / && run-parts --report /etc/cron.hourly)
Jul 26 14:37:42 132 -- MARK --
Jul 26 14:57:42 132 -- MARK --
Jul 26 15:08:01 132 /USR/SBIN/CRON[15178]: (root) CMD (   cd / && run-parts --report /etc/cron.hourly)
Jul 26 15:11:23 132 tinc.myvpn[15139]: Got TERM signal
Jul 26 15:11:23 132 tinc.myvpn[15139]: Terminating
Jul 26 15:11:37 132 tinc.myvpn[15191]: tincd 1.0.8 (Aug 14 2007 13:51:23) starting, debug level 0
Jul 26 15:11:37 132 tinc.myvpn[15191]: /dev/net/tun is a Linux tun/tap device (tun mode)
Jul 26 15:11:37 132 tinc.myvpn[15191]: Can't bind to 0.0.0.0 port 655/tcp: Address already in use
Jul 26 15:11:37 132 tinc.myvpn[15191]: Ready
^C
root@132 / [6]# 
</pre>
  
An echo to <code>bindv6only</code> (see [http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=440150 discussion here]) seems to resolve the problem:
<pre>
root@132 / [12]# echo 1 > /proc/sys/net/ipv6/bindv6only
</pre>
  
Or put the following in your <code>/etc/sysctl.conf</code> file:
  
<pre>
net.ipv6.bindv6only = 1
</pre>
  
Then apply the changes with:
  
<pre>
root@132 / [14]# sysctl -p
</pre>
  
== The tunctl problem ==
 
Unfortunately, you are limited to [http://forum.openvz.org/index.php?t=msg&th=4280&goto=22066&#msg_22066 non-persistent tunnels inside the VEs]:
<pre>
# tunctl
enabling TUNSETPERSIST: Operation not permitted
</pre>
  
Get a patched tunctl [https://github.com/xl0/uml-utilities here] and run it with the <code>-n</code> option. It will create a non-persistent tun device and sleep instead of terminating, to keep the device from being deleted. To remove the tunnel, kill the tunctl process.
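A possible usage pattern with the patched tunctl (the device name <code>tap0</code> and the backgrounding are illustrative assumptions, not taken from the patch's documentation):

<pre>
# Create a non-persistent tap device; the patched tunctl sleeps to keep it alive.
tunctl -n -t tap0 &
TUNCTL_PID=$!
# ... configure and use tap0 here ...
# To remove the tunnel, kill the sleeping tunctl process:
kill $TUNCTL_PID
</pre>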
  
== Troubleshooting ==
 
If NAT is needed within the VE, this error will occur on attempts to use NAT:

  # iptables -t nat -A POSTROUTING -s 10.8.0.0/24 -o venet0 -j MASQUERADE
  iptables v1.4.3.2: can't initialize iptables table `nat': Table does not exist (do you need to insmod?)
  Perhaps iptables or your kernel needs to be upgraded.
  
Solution:
* use a recent kernel
* enable NAT inside the CT:
: <code>vzctl set $CTID --netfilter full --save</code>
  
== External links ==
 
* [http://vtun.sourceforge.net Virtual TUNnel]
* [http://openvpn.net OpenVPN]
* [http://tinc-vpn.org Tinc]
* [http://openvpn.net/index.php/access-server/howto-openvpn-as/186-how-to-run-access-server-on-a-vps-container.html How to run OpenVPN Access Server in OpenVZ]
* [http://kb.odin.com/en/696 Odin KB #696: Is VPN via the TUN/TAP device supported inside a Container?]
</translate>
 
[[Category: HOWTO]]

[[Category: Networking]]

Latest revision as of 09:28, 31 October 2017
