VPN via the TUN/TAP device
This article describes how to use VPN via the TUN/TAP device inside a container.
Kernel TUN/TAP support
OpenVZ supports VPN inside a container via the kernel TUN/TAP module and device. To allow container #101 to use the TUN/TAP device, the following steps should be performed:
Make sure the tun module is already loaded on the hardware node:
lsmod | grep tun
If it is not there, use the following command to load the tun module:
modprobe tun
To make sure the tun module is loaded automatically on every reboot, you can also add it to /etc/modules.conf (on RHEL, see the /etc/sysconfig/modules/ directory).
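On RHEL, any executable script in /etc/sysconfig/modules/ is run at boot, so a minimal sketch of such a script (the tun.modules file name is an assumption for illustration) could be:

#!/bin/sh
# /etc/sysconfig/modules/tun.modules (hypothetical): load the TUN/TAP driver at boot
/sbin/modprobe tun

Remember to make the script executable, e.g. chmod +x /etc/sysconfig/modules/tun.modules.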
Granting the container access to TUN/TAP
Allow your container to use the tun/tap device by running the following commands on the host node:
CTID=101
vzctl set $CTID --devnodes net/tun:rw --save
vzctl set $CTID --devices c:10:200:rw --save
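To check that the device node is now visible inside the container, a quick sanity check (reusing the $CTID variable from above) might be:

vzctl exec $CTID ls -l /dev/net/tun
# expected: a character device with major 10, minor 200, e.g.
# crw-rw-rw- 1 root root 10, 200 ... /dev/net/tun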
Configuring VPN inside container
After the configuration steps above are done, it is possible to use VPN software that works with TUN/TAP inside the container just like on a usual standalone Linux box.
The following software can be used for VPN with TUN/TAP:
- Tinc (http://tinc-vpn.org)
- OpenVPN (http://openvpn.net)
- Virtual TUNnel (http://vtun.sourceforge.net)
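For illustration, a minimal sketch of an OpenVPN server configuration using the tun device (the port, subnet, and file path are assumptions, not part of this article; keys and certificates are omitted):

# /etc/openvpn/server.conf (hypothetical minimal sketch)
dev tun
proto udp
port 1194
server 10.8.0.0 255.255.255.0
# certificate/key directives (ca, cert, key, dh) omitted for brevity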
Reaching hosts behind VPN container
In order to reach hosts behind a VPN container, you must configure it to use a VETH interface instead of a VENET one, at least with an OpenVPN server.
With a VENET interface you will only reach the VPN container itself.
To use a VETH device, follow the Veth article.
If you insist on using a VENET interface and need to reach hosts behind the OpenVPN VE, you can use source NAT: rewrite the source address of outgoing packets so that they appear to originate from the OpenVPN server VE, as sketched below.
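A minimal sketch of such a rule, run inside the VE (the 10.8.0.0/24 VPN subnet matches the Troubleshooting example below; the 192.168.0.101 VE address is an assumption for illustration):

iptables -t nat -A POSTROUTING -s 10.8.0.0/24 -o venet0 -j SNAT --to-source 192.168.0.101

Note that using the nat table inside a VE requires extra setup; see the Troubleshooting section below.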
Tinc problems
With the default venet0:0 interface in the container, tinc seems to have problems: it complains that port 655 on 0.0.0.0 is already in use.
netstat shows that port 655 is in fact available:
root@132 / [3]# netstat -l
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State
tcp        0      0 localhost.localdom:8001 *:*                     LISTEN
tcp        0      0 *:2223                  *:*                     LISTEN
tcp6       0      0 [::]:2223               [::]:*                  LISTEN
udp6       0      0 [::]:talk               [::]:*
udp6       0      0 [::]:ntalk              [::]:*
Active UNIX domain sockets (only servers)
Proto RefCnt Flags       Type       State         I-Node Path
unix  2      [ ACC ]     STREAM     LISTENING     4831020 /var/run/uml-utilities/uml_switch.ctl
Starting the tincd daemon nevertheless produces a complaint that port 655 is not available:
root@132 / [4]# tincd -n myvpn
root@132 / [5]# tail -f /var/log/syslog
Jul 26 14:08:01 132 /USR/SBIN/CRON[15159]: (root) CMD ( cd / && run-parts --report /etc/cron.hourly)
Jul 26 14:37:42 132 -- MARK --
Jul 26 14:57:42 132 -- MARK --
Jul 26 15:08:01 132 /USR/SBIN/CRON[15178]: (root) CMD ( cd / && run-parts --report /etc/cron.hourly)
Jul 26 15:11:23 132 tinc.myvpn[15139]: Got TERM signal
Jul 26 15:11:23 132 tinc.myvpn[15139]: Terminating
Jul 26 15:11:37 132 tinc.myvpn[15191]: tincd 1.0.8 (Aug 14 2007 13:51:23) starting, debug level 0
Jul 26 15:11:37 132 tinc.myvpn[15191]: /dev/net/tun is a Linux tun/tap device (tun mode)
Jul 26 15:11:37 132 tinc.myvpn[15191]: Can't bind to 0.0.0.0 port 655/tcp: Address already in use
Jul 26 15:11:37 132 tinc.myvpn[15191]: Ready
^C
root@132 / [6]#
Setting bindv6only via echo (see discussion here) seems to resolve the problem:
root@132 / [12]# echo 1 > /proc/sys/net/ipv6/bindv6only
Or put the following in your /etc/sysctl.conf file:
net.ipv6.bindv6only = 1
Then apply the changes with:
root@132 / [14]# sysctl -p
The tunctl problem
Unfortunately, you are limited to non-persistent tunnels inside the VEs:
# tunctl
enabling TUNSETPERSIST: Operation not permitted
Get a patched tunctl here and run it with the -n option. It creates a non-persistent tun device and then sleeps instead of terminating, which keeps the device from being deleted. To remove the tunnel, kill the tunctl process.
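A usage sketch (the -t flag for naming the device comes from the stock tunctl and is assumed to survive the patch; tap0 is an arbitrary name):

# create a non-persistent device; the patched tunctl sleeps in the background
./tunctl -n -t tap0 &
# ... use the tap0 device ...
# remove the tunnel by killing the sleeping tunctl process
kill %1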
Troubleshooting
If NAT is needed within the VE, the following error will occur on attempts to use it:
# iptables -t nat -A POSTROUTING -s 10.8.0.0/24 -o venet0 -j MASQUERADE
iptables v1.4.3.2: can't initialize iptables table `nat': Table does not exist (do you need to insmod?)
Perhaps iptables or your kernel needs to be upgraded.
The solution is given here:
http://kb.parallels.com/en/5228
Also see pages 69-70 of:
http://download.openvz.org/doc/OpenVZ-Users-Guide.pdf
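In short, the usual fix described there (a sketch only; verify the exact module list and option syntax against your vzctl version) is to load the NAT modules on the hardware node and grant them to the container:

# on the hardware node
modprobe iptable_nat
vzctl set $CTID --iptables "iptable_filter iptable_nat ipt_MASQUERADE" --save
vzctl restart $CTID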
Note that the above steps do not solve the problem if a Gentoo VE sits on a CentOS HN; that case is still an unsolved mystery.