<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://wiki.openvz.org/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Major</id>
	<title>OpenVZ Virtuozzo Containers Wiki - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://wiki.openvz.org/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Major"/>
	<link rel="alternate" type="text/html" href="https://wiki.openvz.org/Special:Contributions/Major"/>
	<updated>2026-05-13T18:54:48Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.31.1</generator>
	<entry>
		<id>https://wiki.openvz.org/index.php?title=Talk:Virtual_Ethernet_device&amp;diff=5379</id>
		<title>Talk:Virtual Ethernet device</title>
		<link rel="alternate" type="text/html" href="https://wiki.openvz.org/index.php?title=Talk:Virtual_Ethernet_device&amp;diff=5379"/>
		<updated>2008-03-27T12:58:18Z</updated>

		<summary type="html">&lt;p&gt;Major: /* Multiple persistent Veth interfaces */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Under Common Configurations -&amp;gt; Simple configuration -&amp;gt; Configure devices in VE0, the example shows&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[host-node]# echo 1 &amp;gt; /proc/sys/net/ipv4/conf/eth0/forwarding&lt;br /&gt;
[host-node]# echo 1 &amp;gt; /proc/sys/net/ipv4/conf/eth0/proxy_arp&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
being run on the host node, VE0.  I don't think this can be correct, because eth0 exists in VE 101, not VE0.&lt;br /&gt;
-- {{unsigned|Andrex}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
: This is correct: we need to enable forwarding on both network interfaces in VE0 (the host-node eth0 and veth101.0) so that network packets arriving on one interface (host-node eth0) can be forwarded to the other (veth101.0). The same goes for proxy ARP: we need to enable it on both the host-node eth0 and veth101.0 interfaces. --[[User:Major|Major]] 08:42, 17 July 2006 (EDT)&lt;br /&gt;
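&lt;br /&gt;
: For completeness, the matching commands for the veth side would look like this (a sketch; veth101.0 is the host-side device of VE 101):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[host-node]# echo 1 &amp;gt; /proc/sys/net/ipv4/conf/veth101.0/forwarding&lt;br /&gt;
[host-node]# echo 1 &amp;gt; /proc/sys/net/ipv4/conf/veth101.0/proxy_arp&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;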
&lt;br /&gt;
&lt;br /&gt;
==Multiple persistent Veth interfaces==&lt;br /&gt;
The script for adding a persistent veth device to a particular host bridge on VPS/VE startup doesn't seem to account for having multiple bridges. I have a bridge to a private LAN for administrative functions and a bridge to the public network for inbound connections to services. How might one ensure that the vethVEID.x devices are hooked up to the right bridges on the host every time the VE is started? --[[User:Btrotter|Btrotter]] 05:34, 17 March 2008 (EDT)&lt;br /&gt;
&lt;br /&gt;
: Please take a look at the articles [http://wiki.openvz.org/Using_private_IPs_for_Hardware_Nodes Using private IPs for Hardware Nodes] and [http://vireso.blogspot.com/2008/02/2-veth-with-2-brindges-on-openvz-at.html Bridged Networks for OpenVZ]. There you will find scripts which will help you manage 2 bridges, and you can easily extend them to use more than 2 bridges. --[[User:Major|Major]] 15:59, 27 March 2008 (MSK)&lt;/div&gt;</summary>
		<author><name>Major</name></author>
		
	</entry>
	<entry>
		<id>https://wiki.openvz.org/index.php?title=Talk:Virtual_Ethernet_device&amp;diff=5378</id>
		<title>Talk:Virtual Ethernet device</title>
		<link rel="alternate" type="text/html" href="https://wiki.openvz.org/index.php?title=Talk:Virtual_Ethernet_device&amp;diff=5378"/>
		<updated>2008-03-27T12:57:18Z</updated>

		<summary type="html">&lt;p&gt;Major: /* Answer: Multiple persistent Veth interfaces */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Under Common Configurations -&amp;gt; Simple configuration -&amp;gt; Configure devices in VE0, the example shows&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[host-node]# echo 1 &amp;gt; /proc/sys/net/ipv4/conf/eth0/forwarding&lt;br /&gt;
[host-node]# echo 1 &amp;gt; /proc/sys/net/ipv4/conf/eth0/proxy_arp&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
being run on the host node, VE0.  I don't think this can be correct, because eth0 exists in VE 101, not VE0.&lt;br /&gt;
-- {{unsigned|Andrex}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
: This is correct: we need to enable forwarding on both network interfaces in VE0 (the host-node eth0 and veth101.0) so that network packets arriving on one interface (host-node eth0) can be forwarded to the other (veth101.0). The same goes for proxy ARP: we need to enable it on both the host-node eth0 and veth101.0 interfaces. --[[User:Major|Major]] 08:42, 17 July 2006 (EDT)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Multiple persistent Veth interfaces==&lt;br /&gt;
The script for adding a persistent veth device to a particular host bridge on VPS/VE startup doesn't seem to account for having multiple bridges. I have a bridge to a private LAN for administrative functions and a bridge to the public network for inbound connections to services. How might one ensure that the vethVEID.x devices are hooked up to the right bridges on the host every time the VE is started? --[[User:Btrotter|Btrotter]] 05:34, 17 March 2008 (EDT)&lt;br /&gt;
&lt;br /&gt;
: Please take a look at the articles [http://wiki.openvz.org/Using_private_IPs_for_Hardware_Nodes Using private IPs for Hardware Nodes] and [http://vireso.blogspot.com/2008/02/2-veth-with-2-brindges-on-openvz-at.html Bridged Networks for OpenVZ]. There you will find scripts which will help you manage 2 bridges, and you can easily extend them to use more than 2 bridges. --[[User:Major|Major]]&lt;/div&gt;</summary>
		<author><name>Major</name></author>
		
	</entry>
	<entry>
		<id>https://wiki.openvz.org/index.php?title=VLAN&amp;diff=2723</id>
		<title>VLAN</title>
		<link rel="alternate" type="text/html" href="https://wiki.openvz.org/index.php?title=VLAN&amp;diff=2723"/>
		<updated>2007-02-05T09:53:35Z</updated>

		<summary type="html">&lt;p&gt;Major: VLAN&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;A virtual LAN, commonly known as a '''vLAN''' or as a '''VLAN''', is a method of creating independent logical networks within a physical network. Several VLANs can co-exist within such a network. This helps reduce the broadcast domain and administratively separate logical segments of a LAN (such as company departments) that should not exchange data directly over the LAN (they still can via routing).&lt;br /&gt;
&lt;br /&gt;
A VLAN consists of a network of computers that behave as if connected to the same wire - even though they may actually be physically connected to different segments of a LAN. Network administrators configure VLANs through software rather than hardware, which makes them extremely flexible. One of the biggest advantages of VLANs emerges when physically moving a computer to another location: it can stay on the same VLAN without the need for any hardware reconfiguration.&lt;br /&gt;
&lt;br /&gt;
VLAN 1 is the default VLAN; it can never be deleted. All untagged traffic falls into this VLAN by default.&lt;br /&gt;
&lt;br /&gt;
==Advantages of VLAN==&lt;br /&gt;
* Increases the number of '''broadcast domains''' but reduces the size of each '''broadcast domain''', which in turn reduces network traffic and increases network security (both of which are hampered in the case of a single large broadcast domain)&lt;br /&gt;
* Reduces management effort to create subnetworks&lt;br /&gt;
* Reduces hardware requirements, as networks can be separated logically instead of physically&lt;br /&gt;
* Increases control over multiple traffic types.&lt;br /&gt;
&lt;br /&gt;
== Common VLAN configurations for VE ==&lt;br /&gt;
VLANs can be used in the following ways:&lt;br /&gt;
* Create a VLAN device on the physical network interface (eth0) and move the VLAN device to the VE:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
host #  vconfig add eth0 &amp;lt;vlan_id&amp;gt;&lt;br /&gt;
host #  vzctl set &amp;lt;VEID&amp;gt; --netdev_add eth0.&amp;lt;vlan_id&amp;gt; --save&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
* Create a VLAN device inside the VE on top of a veth device:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ve #  vconfig add eth0 &amp;lt;vlan_id&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
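&lt;br /&gt;
For example, bringing the VLAN device up inside the VE might look like this (a sketch assuming a hypothetical VLAN ID of 10 and the address 192.168.10.2/24; substitute your own values):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ve #  vconfig add eth0 10&lt;br /&gt;
ve #  ifconfig eth0.10 192.168.10.2 netmask 255.255.255.0 up&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;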
&lt;br /&gt;
The second option is only available in kernels with virtualized VLAN support (since version 2.6.18-028test005).&lt;/div&gt;</summary>
		<author><name>Major</name></author>
		
	</entry>
	<entry>
		<id>https://wiki.openvz.org/index.php?title=Demo_scripts&amp;diff=2441</id>
		<title>Demo scripts</title>
		<link rel="alternate" type="text/html" href="https://wiki.openvz.org/index.php?title=Demo_scripts&amp;diff=2441"/>
		<updated>2006-10-30T16:59:33Z</updated>

		<summary type="html">&lt;p&gt;Major: /* Massive VE load */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The following demo scripts (scenarios) can be used to show the advantages of OpenVZ.&lt;br /&gt;
&lt;br /&gt;
== Full VE lifecycle ==&lt;br /&gt;
&lt;br /&gt;
Create a VE, set its IP, start it, add a user, enter, exec, show ps -axf output inside the VE, stop it, and destroy it. It should take about two minutes (&amp;quot;compare that to the time you need to deploy a new (non-virtual) server!&amp;quot;). During the demonstration, describe what's happening and why.&lt;br /&gt;
&lt;br /&gt;
Here are the example commands needed:&lt;br /&gt;
&lt;br /&gt;
 # VE=123&lt;br /&gt;
 # IP=10.1.1.123&lt;br /&gt;
 # sed -i &amp;quot;/$IP /d&amp;quot; ~/.ssh/&lt;br /&gt;
 # time vzctl create $VE --ostemplate fedora-core-5-i386-default&lt;br /&gt;
 # vzctl set $VE --ipadd $IP --hostname newVE --save&lt;br /&gt;
 # vzctl start $VE&lt;br /&gt;
 # vzctl exec $VE ps axf&lt;br /&gt;
 # vzctl set $VE --userpasswd guest:secret --save&lt;br /&gt;
 # ssh guest@$IP&lt;br /&gt;
 [newVE]# ps axf&lt;br /&gt;
 [newVE]# logout&lt;br /&gt;
 # vzctl stop $VE&lt;br /&gt;
 # vzctl destroy $VE&lt;br /&gt;
&lt;br /&gt;
== Massive VE creation ==&lt;br /&gt;
&lt;br /&gt;
Create/start 50 or 100 VEs in a shell loop. Shows fast deployment and high density.&lt;br /&gt;
&lt;br /&gt;
Here are the example commands needed:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# time for ((VE=200; VE&amp;lt;250; VE++)); do \&lt;br /&gt;
&amp;gt;  time vzctl create $VE --ostemplate fedora-core-5-i386-default; \&lt;br /&gt;
&amp;gt;  vzctl start $VE; \&lt;br /&gt;
&amp;gt; done&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Massive VE load ==&lt;br /&gt;
&lt;br /&gt;
Use the VEs from the previous item and load them with &amp;lt;code&amp;gt;ab&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;http_load&amp;lt;/code&amp;gt;. This demo shows that multiple VEs keep working just fine, with low response times etc.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# for ((VE=200; VE&amp;lt;250; VE++)); do \&lt;br /&gt;
&amp;gt;  vzctl set $VE --ipadd 10.1.1.$VE --save; \&lt;br /&gt;
&amp;gt; done&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
On another machine:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# rpm -ihv http_load&lt;br /&gt;
# &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
FIXME: http_load commands&lt;br /&gt;
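&lt;br /&gt;
A minimal sketch of what those commands could look like (assuming the VE IPs assigned above and a web server running in each VE; http_load reads its URLs from a file):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# for ((VE=200; VE&amp;lt;250; VE++)); do \&lt;br /&gt;
&amp;gt;  echo &amp;quot;http://10.1.1.$VE/&amp;quot; &amp;gt;&amp;gt; urls.txt; \&lt;br /&gt;
&amp;gt; done&lt;br /&gt;
# http_load -parallel 50 -seconds 30 urls.txt&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;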
&lt;br /&gt;
== Live migration ==&lt;br /&gt;
&lt;br /&gt;
If you have two boxes, do &amp;quot;&amp;lt;code&amp;gt;vzmigrate --online&amp;lt;/code&amp;gt;&amp;quot; from one box to another. You can use, say, &amp;lt;code&amp;gt;xvnc&amp;lt;/code&amp;gt; in a VE and &amp;lt;code&amp;gt;vncclient&amp;lt;/code&amp;gt; to connect to it, then run &amp;lt;code&amp;gt;xscreensaver-demo&amp;lt;/code&amp;gt; and, while the picture is moving, do a live migration. You'll see that &amp;lt;code&amp;gt;xscreensaver&amp;lt;/code&amp;gt; stalls for a few seconds but then keeps running — on another machine! That looks amazing, to say the least.&lt;br /&gt;
&lt;br /&gt;
FIXME: commands, setup, vnc template.&lt;br /&gt;
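&lt;br /&gt;
Pending those, the core command is simply (a sketch; the destination hostname is hypothetical, $VE as in the lifecycle demo above):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# vzmigrate --online dest.example.com $VE&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;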
&lt;br /&gt;
== Resource management ==&lt;br /&gt;
The scenarios below aim to show how OpenVZ resource management works.&lt;br /&gt;
&lt;br /&gt;
=== [[UBC]] protection ===&lt;br /&gt;
&lt;br /&gt;
==== fork() bomb ====&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# while [ true ]; do \&lt;br /&gt;
&amp;gt;     while [ true ]; do \&lt;br /&gt;
&amp;gt;         echo &amp;quot; &amp;quot; &amp;gt; /dev/null;&lt;br /&gt;
&amp;gt;     done &amp;amp;&lt;br /&gt;
&amp;gt; done&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
We can see that the number of processes inside the VE does not grow; we only see the &amp;lt;code&amp;gt;numproc&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;kmemsize&amp;lt;/code&amp;gt; fail counters in &amp;lt;code&amp;gt;/proc/user_beancounters&amp;lt;/code&amp;gt; increase.&lt;br /&gt;
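&lt;br /&gt;
For example, a quick way to watch those fail counters (a sketch; $VE is the VE running the loop) is:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# vzctl exec $VE grep -E 'numproc|kmemsize' /proc/user_beancounters&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;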
&lt;br /&gt;
==== dentry cache eat up ====&lt;br /&gt;
FIXME&lt;br /&gt;
&lt;br /&gt;
=== CPU scheduler ===&lt;br /&gt;
Create 3 VEs:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# vzctl create 101&lt;br /&gt;
# vzctl create 102&lt;br /&gt;
# vzctl create 103&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Set the VE weights:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# vzctl set 101 --cpuunits 1000 --save&lt;br /&gt;
# vzctl set 102 --cpuunits 2000 --save&lt;br /&gt;
# vzctl set 103 --cpuunits 3000 --save&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This sets the CPU sharing ratio to &amp;lt;code&amp;gt;VE101 : VE102 : VE103 = 1 : 2 : 3&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Run VEs:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# vzctl start 101&lt;br /&gt;
# vzctl start 102&lt;br /&gt;
# vzctl start 103&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Run busy loops in VEs:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# vzctl enter 101&lt;br /&gt;
[ve101]# while [ true ]; do true; done&lt;br /&gt;
# vzctl enter 102&lt;br /&gt;
[ve102]# while [ true ]; do true; done&lt;br /&gt;
# vzctl enter 103&lt;br /&gt;
[ve103]# while [ true ]; do true; done&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Check in top that sharing works:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# top&lt;br /&gt;
COMMAND    %CPU&lt;br /&gt;
bash       48.0&lt;br /&gt;
bash       34.0&lt;br /&gt;
bash       17.5&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
So we see that CPU time is given to the VEs in a proportion of roughly 1 : 2 : 3.&lt;br /&gt;
&lt;br /&gt;
=== Disk quota ===&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# vzctl set VEID --diskspace 1048576:1153434 --save&lt;br /&gt;
# vzctl start VEID&lt;br /&gt;
# vzctl enter VEID&lt;br /&gt;
[ve]# dd if=/dev/zero of=/tmp/tmp.file bs=1048576 count=1000&lt;br /&gt;
dd: writing `/tmp/tmp.file': Disk quota exceeded&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;/div&gt;</summary>
		<author><name>Major</name></author>
		
	</entry>
	<entry>
		<id>https://wiki.openvz.org/index.php?title=Demo_scripts&amp;diff=2440</id>
		<title>Demo scripts</title>
		<link rel="alternate" type="text/html" href="https://wiki.openvz.org/index.php?title=Demo_scripts&amp;diff=2440"/>
		<updated>2006-10-30T16:13:34Z</updated>

		<summary type="html">&lt;p&gt;Major: /* Massive VE creation */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The following demo scripts (scenarios) can be used to show the advantages of OpenVZ.&lt;br /&gt;
&lt;br /&gt;
== Full VE lifecycle ==&lt;br /&gt;
&lt;br /&gt;
Create a VE, set its IP, start it, add a user, enter, exec, show ps -axf output inside the VE, stop it, and destroy it. It should take about two minutes (&amp;quot;compare that to the time you need to deploy a new (non-virtual) server!&amp;quot;). During the demonstration, describe what's happening and why.&lt;br /&gt;
&lt;br /&gt;
Here are the example commands needed:&lt;br /&gt;
&lt;br /&gt;
 # VE=123&lt;br /&gt;
 # IP=10.1.1.123&lt;br /&gt;
 # sed -i &amp;quot;/$IP /d&amp;quot; ~/.ssh/&lt;br /&gt;
 # time vzctl create $VE --ostemplate fedora-core-5-i386-default&lt;br /&gt;
 # vzctl set $VE --ipadd $IP --hostname newVE --save&lt;br /&gt;
 # vzctl start $VE&lt;br /&gt;
 # vzctl exec $VE ps axf&lt;br /&gt;
 # vzctl set $VE --userpasswd guest:secret --save&lt;br /&gt;
 # ssh guest@$IP&lt;br /&gt;
 [newVE]# ps axf&lt;br /&gt;
 [newVE]# logout&lt;br /&gt;
 # vzctl stop $VE&lt;br /&gt;
 # vzctl destroy $VE&lt;br /&gt;
&lt;br /&gt;
== Massive VE creation ==&lt;br /&gt;
&lt;br /&gt;
Create/start 50 or 100 VEs in a shell loop. Shows fast deployment and high density.&lt;br /&gt;
&lt;br /&gt;
Here are the example commands needed:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# time for ((VE=200; VE&amp;lt;250; VE++)); do \&lt;br /&gt;
&amp;gt;  time vzctl create $VE --ostemplate fedora-core-5-i386-default; \&lt;br /&gt;
&amp;gt;  vzctl start $VE; \&lt;br /&gt;
&amp;gt; done&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Massive VE load ==&lt;br /&gt;
&lt;br /&gt;
Use the VEs from the previous item and load them with &amp;lt;code&amp;gt;ab&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;http_load&amp;lt;/code&amp;gt;. This demo shows that multiple VEs keep working just fine, with low response times etc.&lt;br /&gt;
&lt;br /&gt;
FIXME: commands, ab/http_load setup.&lt;br /&gt;
&lt;br /&gt;
== Live migration ==&lt;br /&gt;
&lt;br /&gt;
If you have two boxes, do &amp;quot;&amp;lt;code&amp;gt;vzmigrate --online&amp;lt;/code&amp;gt;&amp;quot; from one box to another. You can use, say, &amp;lt;code&amp;gt;xvnc&amp;lt;/code&amp;gt; in a VE and &amp;lt;code&amp;gt;vncclient&amp;lt;/code&amp;gt; to connect to it, then run &amp;lt;code&amp;gt;xscreensaver-demo&amp;lt;/code&amp;gt; and, while the picture is moving, do a live migration. You'll see that &amp;lt;code&amp;gt;xscreensaver&amp;lt;/code&amp;gt; stalls for a few seconds but then keeps running — on another machine! That looks amazing, to say the least.&lt;br /&gt;
&lt;br /&gt;
FIXME: commands, setup, vnc template.&lt;br /&gt;
&lt;br /&gt;
== Resource management ==&lt;br /&gt;
The scenarios below aim to show how OpenVZ resource management works.&lt;br /&gt;
&lt;br /&gt;
=== [[UBC]] protection ===&lt;br /&gt;
&lt;br /&gt;
==== fork() bomb ====&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# while [ true ]; do \&lt;br /&gt;
&amp;gt;     while [ true ]; do \&lt;br /&gt;
&amp;gt;         echo &amp;quot; &amp;quot; &amp;gt; /dev/null;&lt;br /&gt;
&amp;gt;     done &amp;amp;&lt;br /&gt;
&amp;gt; done&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
We can see that the number of processes inside the VE does not grow; we only see the &amp;lt;code&amp;gt;numproc&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;kmemsize&amp;lt;/code&amp;gt; fail counters in &amp;lt;code&amp;gt;/proc/user_beancounters&amp;lt;/code&amp;gt; increase.&lt;br /&gt;
&lt;br /&gt;
==== dentry cache eat up ====&lt;br /&gt;
FIXME&lt;br /&gt;
&lt;br /&gt;
=== CPU scheduler ===&lt;br /&gt;
Create 3 VEs:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# vzctl create 101&lt;br /&gt;
# vzctl create 102&lt;br /&gt;
# vzctl create 103&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Set the VE weights:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# vzctl set 101 --cpuunits 1000 --save&lt;br /&gt;
# vzctl set 102 --cpuunits 2000 --save&lt;br /&gt;
# vzctl set 103 --cpuunits 3000 --save&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This sets the CPU sharing ratio to &amp;lt;code&amp;gt;VE101 : VE102 : VE103 = 1 : 2 : 3&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Run VEs:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# vzctl start 101&lt;br /&gt;
# vzctl start 102&lt;br /&gt;
# vzctl start 103&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Run busy loops in VEs:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# vzctl enter 101&lt;br /&gt;
[ve101]# while [ true ]; do true; done&lt;br /&gt;
# vzctl enter 102&lt;br /&gt;
[ve102]# while [ true ]; do true; done&lt;br /&gt;
# vzctl enter 103&lt;br /&gt;
[ve103]# while [ true ]; do true; done&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Check in top that sharing works:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# top&lt;br /&gt;
COMMAND    %CPU&lt;br /&gt;
bash       48.0&lt;br /&gt;
bash       34.0&lt;br /&gt;
bash       17.5&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
So we see that CPU time is given to the VEs in a proportion of roughly 1 : 2 : 3.&lt;br /&gt;
&lt;br /&gt;
=== Disk quota ===&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# vzctl set VEID --diskspace 1048576:1153434 --save&lt;br /&gt;
# vzctl start VEID&lt;br /&gt;
# vzctl enter VEID&lt;br /&gt;
[ve]# dd if=/dev/zero of=/tmp/tmp.file bs=1048576 count=1000&lt;br /&gt;
dd: writing `/tmp/tmp.file': Disk quota exceeded&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;/div&gt;</summary>
		<author><name>Major</name></author>
		
	</entry>
	<entry>
		<id>https://wiki.openvz.org/index.php?title=Demo_scripts&amp;diff=2439</id>
		<title>Demo scripts</title>
		<link rel="alternate" type="text/html" href="https://wiki.openvz.org/index.php?title=Demo_scripts&amp;diff=2439"/>
		<updated>2006-10-30T16:08:06Z</updated>

		<summary type="html">&lt;p&gt;Major: /* fork() bomb */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The following demo scripts (scenarios) can be used to show the advantages of OpenVZ.&lt;br /&gt;
&lt;br /&gt;
== Full VE lifecycle ==&lt;br /&gt;
&lt;br /&gt;
Create a VE, set its IP, start it, add a user, enter, exec, show ps -axf output inside the VE, stop it, and destroy it. It should take about two minutes (&amp;quot;compare that to the time you need to deploy a new (non-virtual) server!&amp;quot;). During the demonstration, describe what's happening and why.&lt;br /&gt;
&lt;br /&gt;
Here are the example commands needed:&lt;br /&gt;
&lt;br /&gt;
 # VE=123&lt;br /&gt;
 # IP=10.1.1.123&lt;br /&gt;
 # sed -i &amp;quot;/$IP /d&amp;quot; ~/.ssh/&lt;br /&gt;
 # time vzctl create $VE --ostemplate fedora-core-5-i386-default&lt;br /&gt;
 # vzctl set $VE --ipadd $IP --hostname newVE --save&lt;br /&gt;
 # vzctl start $VE&lt;br /&gt;
 # vzctl exec $VE ps axf&lt;br /&gt;
 # vzctl set $VE --userpasswd guest:secret --save&lt;br /&gt;
 # ssh guest@$IP&lt;br /&gt;
 [newVE]# ps axf&lt;br /&gt;
 [newVE]# logout&lt;br /&gt;
 # vzctl stop $VE&lt;br /&gt;
 # vzctl destroy $VE&lt;br /&gt;
&lt;br /&gt;
== Massive VE creation ==&lt;br /&gt;
&lt;br /&gt;
Create/start 50 or 100 VEs in a shell loop. Shows fast deployment and high density.&lt;br /&gt;
&lt;br /&gt;
Here are the example commands needed:&lt;br /&gt;
&lt;br /&gt;
 # VE=200&lt;br /&gt;
 # time while [ $VE -lt 250 ]; do \&lt;br /&gt;
 &amp;gt;  time vzctl create $VE --ostemplate fedora-core-5-i386-default; \&lt;br /&gt;
 &amp;gt;  vzctl start $VE; \&lt;br /&gt;
 &amp;gt;  let VE++; \&lt;br /&gt;
 &amp;gt; done&lt;br /&gt;
&lt;br /&gt;
== Massive VE load ==&lt;br /&gt;
&lt;br /&gt;
Use the VEs from the previous item and load them with &amp;lt;code&amp;gt;ab&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;http_load&amp;lt;/code&amp;gt;. This demo shows that multiple VEs keep working just fine, with low response times etc.&lt;br /&gt;
&lt;br /&gt;
FIXME: commands, ab/http_load setup.&lt;br /&gt;
&lt;br /&gt;
== Live migration ==&lt;br /&gt;
&lt;br /&gt;
If you have two boxes, do &amp;quot;&amp;lt;code&amp;gt;vzmigrate --online&amp;lt;/code&amp;gt;&amp;quot; from one box to another. You can use, say, &amp;lt;code&amp;gt;xvnc&amp;lt;/code&amp;gt; in a VE and &amp;lt;code&amp;gt;vncclient&amp;lt;/code&amp;gt; to connect to it, then run &amp;lt;code&amp;gt;xscreensaver-demo&amp;lt;/code&amp;gt; and, while the picture is moving, do a live migration. You'll see that &amp;lt;code&amp;gt;xscreensaver&amp;lt;/code&amp;gt; stalls for a few seconds but then keeps running — on another machine! That looks amazing, to say the least.&lt;br /&gt;
&lt;br /&gt;
FIXME: commands, setup, vnc template.&lt;br /&gt;
&lt;br /&gt;
== Resource management ==&lt;br /&gt;
The scenarios below aim to show how OpenVZ resource management works.&lt;br /&gt;
&lt;br /&gt;
=== [[UBC]] protection ===&lt;br /&gt;
&lt;br /&gt;
==== fork() bomb ====&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# while [ true ]; do \&lt;br /&gt;
&amp;gt;     while [ true ]; do \&lt;br /&gt;
&amp;gt;         echo &amp;quot; &amp;quot; &amp;gt; /dev/null;&lt;br /&gt;
&amp;gt;     done &amp;amp;&lt;br /&gt;
&amp;gt; done&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
We can see that the number of processes inside the VE does not grow; we only see the &amp;lt;code&amp;gt;numproc&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;kmemsize&amp;lt;/code&amp;gt; fail counters in &amp;lt;code&amp;gt;/proc/user_beancounters&amp;lt;/code&amp;gt; increase.&lt;br /&gt;
&lt;br /&gt;
==== dentry cache eat up ====&lt;br /&gt;
FIXME&lt;br /&gt;
&lt;br /&gt;
=== CPU scheduler ===&lt;br /&gt;
Create 3 VEs:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# vzctl create 101&lt;br /&gt;
# vzctl create 102&lt;br /&gt;
# vzctl create 103&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Set the VE weights:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# vzctl set 101 --cpuunits 1000 --save&lt;br /&gt;
# vzctl set 102 --cpuunits 2000 --save&lt;br /&gt;
# vzctl set 103 --cpuunits 3000 --save&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This sets the CPU sharing ratio to &amp;lt;code&amp;gt;VE101 : VE102 : VE103 = 1 : 2 : 3&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Run VEs:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# vzctl start 101&lt;br /&gt;
# vzctl start 102&lt;br /&gt;
# vzctl start 103&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Run busy loops in VEs:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# vzctl enter 101&lt;br /&gt;
[ve101]# while [ true ]; do true; done&lt;br /&gt;
# vzctl enter 102&lt;br /&gt;
[ve102]# while [ true ]; do true; done&lt;br /&gt;
# vzctl enter 103&lt;br /&gt;
[ve103]# while [ true ]; do true; done&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Check in top that sharing works:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# top&lt;br /&gt;
COMMAND    %CPU&lt;br /&gt;
bash       48.0&lt;br /&gt;
bash       34.0&lt;br /&gt;
bash       17.5&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
So we see that CPU time is given to the VEs in a proportion of roughly 1 : 2 : 3.&lt;br /&gt;
&lt;br /&gt;
=== Disk quota ===&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# vzctl set VEID --diskspace 1048576:1153434 --save&lt;br /&gt;
# vzctl start VEID&lt;br /&gt;
# vzctl enter VEID&lt;br /&gt;
[ve]# dd if=/dev/zero of=/tmp/tmp.file bs=1048576 count=1000&lt;br /&gt;
dd: writing `/tmp/tmp.file': Disk quota exceeded&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;/div&gt;</summary>
		<author><name>Major</name></author>
		
	</entry>
	<entry>
		<id>https://wiki.openvz.org/index.php?title=Demo_scripts&amp;diff=2438</id>
		<title>Demo scripts</title>
		<link rel="alternate" type="text/html" href="https://wiki.openvz.org/index.php?title=Demo_scripts&amp;diff=2438"/>
		<updated>2006-10-30T14:30:52Z</updated>

		<summary type="html">&lt;p&gt;Major: /* CPU scheduler */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The following demo scripts (scenarios) can be used to show the advantages of OpenVZ.&lt;br /&gt;
&lt;br /&gt;
== Full VE lifecycle ==&lt;br /&gt;
&lt;br /&gt;
Create a VE, set its IP, start it, add a user, enter, exec, show ps -axf output inside the VE, stop it, and destroy it. It should take about two minutes (&amp;quot;compare that to the time you need to deploy a new (non-virtual) server!&amp;quot;). During the demonstration, describe what's happening and why.&lt;br /&gt;
&lt;br /&gt;
Here are the example commands needed:&lt;br /&gt;
&lt;br /&gt;
 # VE=123&lt;br /&gt;
 # IP=10.1.1.123&lt;br /&gt;
 # sed -i &amp;quot;/$IP /d&amp;quot; ~/.ssh/&lt;br /&gt;
 # time vzctl create $VE --ostemplate fedora-core-5-i386-default&lt;br /&gt;
 # vzctl set $VE --ipadd $IP --hostname newVE --save&lt;br /&gt;
 # vzctl start $VE&lt;br /&gt;
 # vzctl exec $VE ps axf&lt;br /&gt;
 # vzctl set $VE --userpasswd guest:secret --save&lt;br /&gt;
 # ssh guest@$IP&lt;br /&gt;
 [newVE]# ps axf&lt;br /&gt;
 [newVE]# logout&lt;br /&gt;
 # vzctl stop $VE&lt;br /&gt;
 # vzctl destroy $VE&lt;br /&gt;
&lt;br /&gt;
== Massive VE creation ==&lt;br /&gt;
&lt;br /&gt;
Create/start 50 or 100 VEs in a shell loop. Shows fast deployment and high density.&lt;br /&gt;
&lt;br /&gt;
Here are the example commands needed:&lt;br /&gt;
&lt;br /&gt;
 # VE=200&lt;br /&gt;
 # time while [ $VE -lt 250 ]; do \&lt;br /&gt;
 &amp;gt;  time vzctl create $VE --ostemplate fedora-core-5-i386-default; \&lt;br /&gt;
 &amp;gt;  vzctl start $VE; \&lt;br /&gt;
 &amp;gt;  let VE++; \&lt;br /&gt;
 &amp;gt; done&lt;br /&gt;
&lt;br /&gt;
== Massive VE load ==&lt;br /&gt;
&lt;br /&gt;
Use the VEs from the previous item and load them with &amp;lt;code&amp;gt;ab&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;http_load&amp;lt;/code&amp;gt;. This demo shows that multiple VEs keep working just fine, with low response times etc.&lt;br /&gt;
&lt;br /&gt;
FIXME: commands, ab/http_load setup.&lt;br /&gt;
&lt;br /&gt;
== Live migration ==&lt;br /&gt;
&lt;br /&gt;
If you have two boxes, do &amp;quot;&amp;lt;code&amp;gt;vzmigrate --online&amp;lt;/code&amp;gt;&amp;quot; from one box to another. You can use, say, &amp;lt;code&amp;gt;xvnc&amp;lt;/code&amp;gt; in a VE and &amp;lt;code&amp;gt;vncclient&amp;lt;/code&amp;gt; to connect to it, then run &amp;lt;code&amp;gt;xscreensaver-demo&amp;lt;/code&amp;gt; and, while the picture is moving, do a live migration. You'll see that &amp;lt;code&amp;gt;xscreensaver&amp;lt;/code&amp;gt; stalls for a few seconds but then keeps running — on another machine! That looks amazing, to say the least.&lt;br /&gt;
&lt;br /&gt;
FIXME: commands, setup, vnc template.&lt;br /&gt;
&lt;br /&gt;
== Resource management ==&lt;br /&gt;
The scenarios below aim to show how OpenVZ resource management works.&lt;br /&gt;
&lt;br /&gt;
=== [[UBC]] protection ===&lt;br /&gt;
&lt;br /&gt;
==== fork() bomb ====&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# while [ true ]; do \&lt;br /&gt;
&amp;gt;     while [ true ]; do \&lt;br /&gt;
&amp;gt;         echo &amp;quot; &amp;quot; &amp;gt; /dev/null;&lt;br /&gt;
&amp;gt;     done &amp;amp;&lt;br /&gt;
&amp;gt; done&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== dentry cache eat up ====&lt;br /&gt;
FIXME&lt;br /&gt;
&lt;br /&gt;
=== CPU scheduler ===&lt;br /&gt;
Create 3 VEs:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# vzctl create 101&lt;br /&gt;
# vzctl create 102&lt;br /&gt;
# vzctl create 103&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Set the VE weights:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# vzctl set 101 --cpuunits 1000 --save&lt;br /&gt;
# vzctl set 102 --cpuunits 2000 --save&lt;br /&gt;
# vzctl set 103 --cpuunits 3000 --save&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This sets the CPU sharing ratio to &amp;lt;code&amp;gt;VE101 : VE102 : VE103 = 1 : 2 : 3&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Run VEs:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# vzctl start 101&lt;br /&gt;
# vzctl start 102&lt;br /&gt;
# vzctl start 103&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Run busy loops in VEs:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# vzctl enter 101&lt;br /&gt;
[ve101]# while [ true ]; do true; done&lt;br /&gt;
# vzctl enter 102&lt;br /&gt;
[ve102]# while [ true ]; do true; done&lt;br /&gt;
# vzctl enter 103&lt;br /&gt;
[ve103]# while [ true ]; do true; done&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Check in top that sharing works:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# top&lt;br /&gt;
COMMAND    %CPU&lt;br /&gt;
bash       48.0&lt;br /&gt;
bash       34.0&lt;br /&gt;
bash       17.5&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
So we see that CPU time is given to the VEs in a proportion of roughly 1 : 2 : 3.&lt;br /&gt;
&lt;br /&gt;
=== Disk quota ===&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# vzctl set VEID --diskspace 1048576:1153434 --save&lt;br /&gt;
# vzctl start VEID&lt;br /&gt;
# vzctl enter VEID&lt;br /&gt;
[ve]# dd if=/dev/zero of=/tmp/tmp.file bs=1048576 count=1000&lt;br /&gt;
dd: writing `/tmp/tmp.file': Disk quota exceeded&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;/div&gt;</summary>
		<author><name>Major</name></author>
		
	</entry>
	<entry>
		<id>https://wiki.openvz.org/index.php?title=Demo_scripts&amp;diff=2434</id>
		<title>Demo scripts</title>
		<link rel="alternate" type="text/html" href="https://wiki.openvz.org/index.php?title=Demo_scripts&amp;diff=2434"/>
		<updated>2006-10-27T12:02:30Z</updated>

		<summary type="html">&lt;p&gt;Major: /* CPU scheduler */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The following demo scripts (scenarios) can be used to show the advantages of OpenVZ.&lt;br /&gt;
&lt;br /&gt;
== Full VE lifecycle ==&lt;br /&gt;
&lt;br /&gt;
Create a VE, set its IP, start it, add a user, enter, exec, show ps -axf output inside the VE, stop it, and destroy it. It should take about two minutes (&amp;quot;compare that to the time you need to deploy a new (non-virtual) server!&amp;quot;). During the demonstration, describe what's happening and why.&lt;br /&gt;
&lt;br /&gt;
Here are the example commands needed:&lt;br /&gt;
&lt;br /&gt;
 # VE=123&lt;br /&gt;
 # IP=10.1.1.123&lt;br /&gt;
 # sed -i &amp;quot;/$IP /d&amp;quot; ~/.ssh/&lt;br /&gt;
 # time vzctl create $VE --ostemplate fedora-core-5-i386-default&lt;br /&gt;
 # vzctl set $VE --ipadd $IP --hostname newVE --save&lt;br /&gt;
 # vzctl start $VE&lt;br /&gt;
 # vzctl exec $VE ps axf&lt;br /&gt;
 # vzctl set $VE --userpasswd guest:secret --save&lt;br /&gt;
 # ssh guest@$IP&lt;br /&gt;
 [newVE]# ps axf&lt;br /&gt;
 [newVE]# logout&lt;br /&gt;
 # vzctl stop $VE&lt;br /&gt;
 # vzctl destroy $VE&lt;br /&gt;
&lt;br /&gt;
== Massive VE creation ==&lt;br /&gt;
&lt;br /&gt;
Create/start 50 or 100 VEs in a shell loop. Shows fast deployment and high density.&lt;br /&gt;
&lt;br /&gt;
Here are the example commands needed:&lt;br /&gt;
&lt;br /&gt;
 # VE=200&lt;br /&gt;
 # time while [ $VE -lt 250 ]; do \&lt;br /&gt;
 &amp;gt;  time vzctl create $VE --ostemplate fedora-core-5-i386-default; \&lt;br /&gt;
 &amp;gt;  vzctl start $VE; \&lt;br /&gt;
 &amp;gt;  let VE++; \&lt;br /&gt;
 &amp;gt; done&lt;br /&gt;
&lt;br /&gt;
== Massive VE load ==&lt;br /&gt;
&lt;br /&gt;
Use the VEs from the previous item and load them with &amp;lt;code&amp;gt;ab&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;http_load&amp;lt;/code&amp;gt;. This demo shows that multiple VEs keep working just fine, with low response times etc.&lt;br /&gt;
&lt;br /&gt;
FIXME: commands, ab/http_load setup.&lt;br /&gt;
&lt;br /&gt;
== Live migration ==&lt;br /&gt;
&lt;br /&gt;
If you have two boxes, do &amp;quot;&amp;lt;code&amp;gt;vzmigrate --online&amp;lt;/code&amp;gt;&amp;quot; from one box to another. You can use, say, &amp;lt;code&amp;gt;xvnc&amp;lt;/code&amp;gt; in a VE and &amp;lt;code&amp;gt;vncclient&amp;lt;/code&amp;gt; to connect to it, then run &amp;lt;code&amp;gt;xscreensaver-demo&amp;lt;/code&amp;gt; and, while the picture is moving, do a live migration. You'll see that &amp;lt;code&amp;gt;xscreensaver&amp;lt;/code&amp;gt; stalls for a few seconds but then keeps running — on another machine! That looks amazing, to say the least.&lt;br /&gt;
&lt;br /&gt;
FIXME: commands, setup, vnc template.&lt;br /&gt;
&lt;br /&gt;
== Resource management ==&lt;br /&gt;
The scenarios below aim to show how OpenVZ resource management works.&lt;br /&gt;
&lt;br /&gt;
=== [[UBC]] protection ===&lt;br /&gt;
&lt;br /&gt;
==== fork() bomb ====&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# while [ true ]; do \&lt;br /&gt;
&amp;gt;     while [ true ]; do \&lt;br /&gt;
&amp;gt;         echo &amp;quot; &amp;quot; &amp;gt; /dev/null;&lt;br /&gt;
&amp;gt;     done &amp;amp;&lt;br /&gt;
&amp;gt; done&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== dentry cache eat up ====&lt;br /&gt;
FIXME&lt;br /&gt;
&lt;br /&gt;
=== CPU scheduler ===&lt;br /&gt;
Create 3 VPSes:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# vzctl create 101&lt;br /&gt;
# vzctl create 102&lt;br /&gt;
# vzctl create 103&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Set the VPS weights:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# vzctl set 101 --cpuunits 1000 --save&lt;br /&gt;
# vzctl set 102 --cpuunits 2000 --save&lt;br /&gt;
# vzctl set 103 --cpuunits 3000 --save&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This sets the CPU sharing ratio to &amp;lt;code&amp;gt;VPS101 : VPS102 : VPS103 = 1 : 2 : 3&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Run VPSes:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# vzctl start 101&lt;br /&gt;
# vzctl start 102&lt;br /&gt;
# vzctl start 103&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Run busy loops in VPSes:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# vzctl enter 101&lt;br /&gt;
[ve101]# while [ true ]; do true; done&lt;br /&gt;
# vzctl enter 102&lt;br /&gt;
[ve102]# while [ true ]; do true; done&lt;br /&gt;
# vzctl enter 103&lt;br /&gt;
[ve103]# while [ true ]; do true; done&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Check in top that sharing works:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# top&lt;br /&gt;
COMMAND    %CPU&lt;br /&gt;
bash       48.0&lt;br /&gt;
bash       34.0&lt;br /&gt;
bash       17.5&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Disk quota ===&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# vzctl set VEID --diskspace 1048576:1153434 --save&lt;br /&gt;
# vzctl start VEID&lt;br /&gt;
# vzctl enter VEID&lt;br /&gt;
[ve]# dd if=/dev/zero of=/tmp/tmp.file bs=1048576 count=1000&lt;br /&gt;
dd: writing `/tmp/tmp.file': Disk quota exceeded&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;/div&gt;</summary>
		<author><name>Major</name></author>
		
	</entry>
	<entry>
		<id>https://wiki.openvz.org/index.php?title=Demo_scripts&amp;diff=2433</id>
		<title>Demo scripts</title>
		<link rel="alternate" type="text/html" href="https://wiki.openvz.org/index.php?title=Demo_scripts&amp;diff=2433"/>
		<updated>2006-10-27T10:47:13Z</updated>

		<summary type="html">&lt;p&gt;Major: /* Disk quota */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The following demo scripts (scenarios) can be used to show the advantages of OpenVZ.&lt;br /&gt;
&lt;br /&gt;
== Full VE lifecycle ==&lt;br /&gt;
&lt;br /&gt;
Create a VE, set its IP, start it, add a user, enter, exec, show ps -axf output inside the VE, stop it, and destroy it. It should take about two minutes (&amp;quot;compare that to the time you need to deploy a new (non-virtual) server!&amp;quot;). During the demonstration, describe what's happening and why.&lt;br /&gt;
&lt;br /&gt;
Here are the example commands needed:&lt;br /&gt;
&lt;br /&gt;
 # VE=123&lt;br /&gt;
 # IP=10.1.1.123&lt;br /&gt;
 # sed -i &amp;quot;/$IP /d&amp;quot; ~/.ssh/&lt;br /&gt;
 # time vzctl create $VE --ostemplate fedora-core-5-i386-default&lt;br /&gt;
 # vzctl set $VE --ipadd $IP --hostname newVE --save&lt;br /&gt;
 # vzctl start $VE&lt;br /&gt;
 # vzctl exec $VE ps axf&lt;br /&gt;
 # vzctl set $VE --userpasswd guest:secret --save&lt;br /&gt;
 # ssh guest@$IP&lt;br /&gt;
 [newVE]# ps axf&lt;br /&gt;
 [newVE]# logout&lt;br /&gt;
 # vzctl stop $VE&lt;br /&gt;
 # vzctl destroy $VE&lt;br /&gt;
&lt;br /&gt;
== Massive VE creation ==&lt;br /&gt;
&lt;br /&gt;
Create/start 50 or 100 VEs in a shell loop. Shows fast deployment and high density.&lt;br /&gt;
&lt;br /&gt;
Here are the example commands needed:&lt;br /&gt;
&lt;br /&gt;
 # VE=200&lt;br /&gt;
 # time while [ $VE -lt 250 ]; do \&lt;br /&gt;
 &amp;gt;  time vzctl create $VE --ostemplate fedora-core-5-i386-default; \&lt;br /&gt;
 &amp;gt;  vzctl start $VE; \&lt;br /&gt;
 &amp;gt;  let VE++; \&lt;br /&gt;
 &amp;gt; done&lt;br /&gt;
&lt;br /&gt;
== Massive VE load ==&lt;br /&gt;
&lt;br /&gt;
Use the VEs from the previous item and load them with &amp;lt;code&amp;gt;ab&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;http_load&amp;lt;/code&amp;gt;. This demo shows that multiple VEs keep working just fine, with low response times etc.&lt;br /&gt;
&lt;br /&gt;
FIXME: commands, ab/http_load setup.&lt;br /&gt;
&lt;br /&gt;
== Live migration ==&lt;br /&gt;
&lt;br /&gt;
If you have two boxes, do &amp;quot;&amp;lt;code&amp;gt;vzmigrate --online&amp;lt;/code&amp;gt;&amp;quot; from one box to another. You can use, say, &amp;lt;code&amp;gt;xvnc&amp;lt;/code&amp;gt; in a VE and &amp;lt;code&amp;gt;vncclient&amp;lt;/code&amp;gt; to connect to it, then run &amp;lt;code&amp;gt;xscreensaver-demo&amp;lt;/code&amp;gt; and, while the picture is moving, do a live migration. You'll see that &amp;lt;code&amp;gt;xscreensaver&amp;lt;/code&amp;gt; stalls for a few seconds but then keeps running — on another machine! That looks amazing, to say the least.&lt;br /&gt;
&lt;br /&gt;
FIXME: commands, setup, vnc template.&lt;br /&gt;
&lt;br /&gt;
== Resource management ==&lt;br /&gt;
The scenarios below aim to show how OpenVZ resource management works.&lt;br /&gt;
&lt;br /&gt;
=== [[UBC]] protection ===&lt;br /&gt;
&lt;br /&gt;
==== fork() bomb ====&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# while [ true ]; do \&lt;br /&gt;
&amp;gt;     while [ true ]; do \&lt;br /&gt;
&amp;gt;         echo &amp;quot; &amp;quot; &amp;gt; /dev/null;&lt;br /&gt;
&amp;gt;     done &amp;amp;&lt;br /&gt;
&amp;gt; done&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== dentry cache eat up ====&lt;br /&gt;
FIXME&lt;br /&gt;
&lt;br /&gt;
=== CPU scheduler ===&lt;br /&gt;
FIXME&lt;br /&gt;
&lt;br /&gt;
=== Disk quota ===&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# vzctl set VEID --diskspace 1048576:1153434 --save&lt;br /&gt;
# vzctl start VEID&lt;br /&gt;
# vzctl enter VEID&lt;br /&gt;
[ve]# dd if=/dev/zero of=/tmp/tmp.file bs=1048576 count=1000&lt;br /&gt;
dd: writing `/tmp/tmp.file': Disk quota exceeded&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;/div&gt;</summary>
		<author><name>Major</name></author>
		
	</entry>
	<entry>
		<id>https://wiki.openvz.org/index.php?title=Demo_scripts&amp;diff=2431</id>
		<title>Demo scripts</title>
		<link rel="alternate" type="text/html" href="https://wiki.openvz.org/index.php?title=Demo_scripts&amp;diff=2431"/>
		<updated>2006-10-27T10:04:23Z</updated>

		<summary type="html">&lt;p&gt;Major: /* edit fork() bomb */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The following demo scripts (scenarios) can be used to show the advantages of OpenVZ.&lt;br /&gt;
&lt;br /&gt;
== Full VE lifecycle ==&lt;br /&gt;
&lt;br /&gt;
Create a VE, set its IP, start it, add a user, enter, exec, show ps -axf output inside the VE, stop it, and destroy it. It should take about two minutes (&amp;quot;compare that to the time you need to deploy a new (non-virtual) server!&amp;quot;). During the demonstration, describe what's happening and why.&lt;br /&gt;
&lt;br /&gt;
Here are the example commands needed:&lt;br /&gt;
&lt;br /&gt;
 # VE=123&lt;br /&gt;
 # IP=10.1.1.123&lt;br /&gt;
 # sed -i &amp;quot;/$IP /d&amp;quot; ~/.ssh/&lt;br /&gt;
 # time vzctl create $VE --ostemplate fedora-core-5-i386-default&lt;br /&gt;
 # vzctl set $VE --ipadd $IP --hostname newVE --save&lt;br /&gt;
 # vzctl start $VE&lt;br /&gt;
 # vzctl exec $VE ps axf&lt;br /&gt;
 # vzctl set $VE --userpasswd guest:secret --save&lt;br /&gt;
 # ssh guest@$IP&lt;br /&gt;
 [newVE]# ps axf&lt;br /&gt;
 [newVE]# logout&lt;br /&gt;
 # vzctl stop $VE&lt;br /&gt;
 # vzctl destroy $VE&lt;br /&gt;
&lt;br /&gt;
== Massive VE creation ==&lt;br /&gt;
&lt;br /&gt;
Create/start 50 or 100 VEs in a shell loop. Shows fast deployment and high density.&lt;br /&gt;
&lt;br /&gt;
Here are the example commands needed:&lt;br /&gt;
&lt;br /&gt;
 # VE=200&lt;br /&gt;
 # time while [ $VE -lt 250 ]; do \&lt;br /&gt;
 &amp;gt;  time vzctl create $VE --ostemplate fedora-core-5-i386-default; \&lt;br /&gt;
 &amp;gt;  vzctl start $VE; \&lt;br /&gt;
 &amp;gt;  let VE++; \&lt;br /&gt;
 &amp;gt; done&lt;br /&gt;
&lt;br /&gt;
== Massive VE load ==&lt;br /&gt;
&lt;br /&gt;
Use the VEs from the previous item and load them with &amp;lt;code&amp;gt;ab&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;http_load&amp;lt;/code&amp;gt;. This demo shows that multiple VEs keep working just fine, with low response times etc.&lt;br /&gt;
&lt;br /&gt;
FIXME: commands, ab/http_load setup.&lt;br /&gt;
&lt;br /&gt;
== Live migration ==&lt;br /&gt;
&lt;br /&gt;
If you have two boxes, do &amp;quot;&amp;lt;code&amp;gt;vzmigrate --online&amp;lt;/code&amp;gt;&amp;quot; from one box to another. You can use, say, &amp;lt;code&amp;gt;xvnc&amp;lt;/code&amp;gt; in a VE and &amp;lt;code&amp;gt;vncclient&amp;lt;/code&amp;gt; to connect to it, then run &amp;lt;code&amp;gt;xscreensaver-demo&amp;lt;/code&amp;gt; and, while the picture is moving, do a live migration. You'll see that &amp;lt;code&amp;gt;xscreensaver&amp;lt;/code&amp;gt; stalls for a few seconds but then keeps running — on another machine! That looks amazing, to say the least.&lt;br /&gt;
&lt;br /&gt;
FIXME: commands, setup, vnc template.&lt;br /&gt;
&lt;br /&gt;
== Resource management ==&lt;br /&gt;
The scenarios below aim to show how OpenVZ resource management works.&lt;br /&gt;
&lt;br /&gt;
=== [[UBC]] protection ===&lt;br /&gt;
&lt;br /&gt;
==== fork() bomb ====&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# while [ true ]; do \&lt;br /&gt;
&amp;gt;     while [ true ]; do \&lt;br /&gt;
&amp;gt;         echo &amp;quot; &amp;quot; &amp;gt; /dev/null;&lt;br /&gt;
&amp;gt;     done &amp;amp;&lt;br /&gt;
&amp;gt; done&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== dentry cache eat up ====&lt;br /&gt;
FIXME&lt;br /&gt;
&lt;br /&gt;
=== CPU scheduler ===&lt;br /&gt;
FIXME&lt;br /&gt;
&lt;br /&gt;
=== Disk quota ===&lt;br /&gt;
FIXME&lt;/div&gt;</summary>
		<author><name>Major</name></author>
		
	</entry>
	<entry>
		<id>https://wiki.openvz.org/index.php?title=Demo_scripts&amp;diff=2422</id>
		<title>Demo scripts</title>
		<link rel="alternate" type="text/html" href="https://wiki.openvz.org/index.php?title=Demo_scripts&amp;diff=2422"/>
		<updated>2006-10-25T13:36:55Z</updated>

		<summary type="html">&lt;p&gt;Major: Small corrections&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Demo scripts which can be used to show the advantages of OpenVZ:&lt;br /&gt;
&lt;br /&gt;
* Full VE lifecycle (create, set IP, start, add user, enter, exec, show ps -axf output inside VE, stop, destroy). It should take about two minutes (&amp;quot;compare that to the time you need to deploy a new (non-virtual) server!&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
* Massive VE creation. Create/start 50 or 100 VEs in a shell loop. Shows fast deployment and high density.&lt;br /&gt;
&lt;br /&gt;
* Use the VEs from the previous item — load them with &amp;lt;code&amp;gt;ab&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;http_load&amp;lt;/code&amp;gt; — this shows that many VEs keep working quite fine, with low response times etc.&lt;br /&gt;
&lt;br /&gt;
* If you have two boxes, do &amp;quot;&amp;lt;code&amp;gt;vzmigrate --online&amp;lt;/code&amp;gt;&amp;quot; from one box to another. You can use, say, &amp;lt;code&amp;gt;xvnc&amp;lt;/code&amp;gt; in a VE and &amp;lt;code&amp;gt;vncclient&amp;lt;/code&amp;gt; to connect to it, then run &amp;lt;code&amp;gt;xscreensaver-demo&amp;lt;/code&amp;gt; and, while the picture is moving, do a live migration. You'll see that &amp;lt;code&amp;gt;xscreensaver&amp;lt;/code&amp;gt; stalls for a few seconds but then keeps running — on another machine! That looks amazing, to say the least.&lt;/div&gt;</summary>
		<author><name>Major</name></author>
		
	</entry>
	<entry>
		<id>https://wiki.openvz.org/index.php?title=Demo_scripts&amp;diff=2421</id>
		<title>Demo scripts</title>
		<link rel="alternate" type="text/html" href="https://wiki.openvz.org/index.php?title=Demo_scripts&amp;diff=2421"/>
		<updated>2006-10-25T13:32:35Z</updated>

		<summary type="html">&lt;p&gt;Major: Demo scripts&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Demo scripts which can be used to show the advantages of OpenVZ:&lt;br /&gt;
&lt;br /&gt;
* Full VE lifecycle (create, set IP, start, add user, enter, exec, show ps -axf output inside VE, stop, destroy). It should take about two minutes (&amp;quot;compare that to the time you need to deploy a new (non-virtual) server!&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
* Massive VE creation. Create/start 50 or 100 VEs in a shell loop. Shows fast deployment and high density.&lt;br /&gt;
&lt;br /&gt;
* Use the VEs from the previous item -- load them with ab or http_load -- this shows that many VEs keep working quite fine, with low response times etc.&lt;br /&gt;
&lt;br /&gt;
* If you have two boxes, do vzmigrate --online from one box to another. You can use, say, xvnc in a VE and vncclient to connect to it, then run xscreensaver-demo and, while the picture is moving, do a live migration. You'll see that xscreensaver stalls for a few seconds but then keeps running -- on another machine! That looks amazing, to say the least.&lt;/div&gt;</summary>
		<author><name>Major</name></author>
		
	</entry>
	<entry>
		<id>https://wiki.openvz.org/index.php?title=Checkpointing_internals&amp;diff=2220</id>
		<title>Checkpointing internals</title>
		<link rel="alternate" type="text/html" href="https://wiki.openvz.org/index.php?title=Checkpointing_internals&amp;diff=2220"/>
		<updated>2006-09-06T13:51:55Z</updated>

		<summary type="html">&lt;p&gt;Major: Checkpointing internals&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Checkpointing internals =&lt;br /&gt;
&lt;br /&gt;
Process checkpoint/restore consists of two phases. The first phase is to save the running state of a process. This usually includes the register set, address space, allocated resources, and other process private data. The second phase is to re-construct the original running process from the saved image and resume execution at exactly the point where it was suspended.&lt;br /&gt;
&lt;br /&gt;
There are several problems with existing checkpoint/restore systems. First, except for some written-from-scratch process migration operating systems (such as Sprite), they cannot preserve open network connections. Second, general-purpose operating systems such as Unix were not designed to support process migration, so checkpoint/restore systems built on top of existing OSes usually support only a limited set of applications. Third, none of these systems can guarantee that processes will be restored on the other side, because of resource conflicts (e.g. there may already be a process with the same pid). OpenVZ gives a unique chance to solve all these problems and to implement a full-fledged universal checkpoint/restore system: its intrinsic capability to isolate and virtualize groups of processes makes it possible to define a self-consistent state for essentially any configuration of VEs, using all the kinds of resources available inside a VE.&lt;br /&gt;
&lt;br /&gt;
The primary contributions of OpenVZ checkpoint/restore system are:&lt;br /&gt;
* No run time overhead besides actual checkpoint/restore &lt;br /&gt;
* Network connection migration support &lt;br /&gt;
* Virtualization of pids&lt;br /&gt;
* Image size minimization&lt;br /&gt;
&lt;br /&gt;
== Modular Structure ==&lt;br /&gt;
&lt;br /&gt;
The main functionality (the checkpoint and restore functions) is implemented as two separate kernel modules:&lt;br /&gt;
&lt;br /&gt;
'''vzcpt''' - provides checkpoint functionality;&lt;br /&gt;
&lt;br /&gt;
'''vzrst''' - provides restore functionality.&lt;br /&gt;
&lt;br /&gt;
Checkpoint and restore are controlled via &amp;lt;code&amp;gt;ioctl()&amp;lt;/code&amp;gt; calls on regular pseudo-files &amp;lt;code&amp;gt;/proc/cpt&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;/proc/rst&amp;lt;/code&amp;gt; created in &amp;lt;code&amp;gt;procfs&amp;lt;/code&amp;gt;. Ioctl commands are listed in &amp;lt;code&amp;gt;&amp;lt;linux/cpt_ioctl.h&amp;gt;&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
== Checkpoint Module ==&lt;br /&gt;
&lt;br /&gt;
The checkpoint module (&amp;lt;code&amp;gt;vzcpt&amp;lt;/code&amp;gt;) provides the following general functionality via ioctls:&lt;br /&gt;
* &amp;lt;code&amp;gt;CPT_SUSPEND&amp;lt;/code&amp;gt; – moving processes to frozen state (VE suspending);&lt;br /&gt;
* &amp;lt;code&amp;gt;CPT_DUMP&amp;lt;/code&amp;gt; – collecting and saving all VE data to an image file (VE dumping);&lt;br /&gt;
* &amp;lt;code&amp;gt;CPT_KILL&amp;lt;/code&amp;gt; – killing VE;&lt;br /&gt;
* &amp;lt;code&amp;gt;CPT_RESUME&amp;lt;/code&amp;gt; – resuming processes from frozen state to running state.&lt;br /&gt;
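&lt;br /&gt;
In practice these ioctls are driven from user level by &amp;lt;code&amp;gt;vzctl&amp;lt;/code&amp;gt;; a rough shell-level correspondence (a sketch of which ioctls each command exercises, not the exact sequence) is:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# vzctl chkpnt 101 --dumpfile /var/tmp/Dump.101    # CPT_SUSPEND, CPT_DUMP, then CPT_KILL&lt;br /&gt;
# vzctl restore 101 --dumpfile /var/tmp/Dump.101   # CPT_UNDUMP, then CPT_RESUME&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;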
&lt;br /&gt;
Freezing all the processes before saving VE state is necessary because processes inside a VE can be connected via IPC, can send signals, and can share files, virtual memory, and other objects. To guarantee the self-consistency of the saved state, all the processes must be suspended and network connections must be stopped.&lt;br /&gt;
&lt;br /&gt;
== Restore Module ==&lt;br /&gt;
&lt;br /&gt;
The restore module (&amp;lt;code&amp;gt;vzrst&amp;lt;/code&amp;gt;) provides the following general functionality via ioctls:&lt;br /&gt;
* &amp;lt;code&amp;gt;CPT_UNDUMP&amp;lt;/code&amp;gt; – reconstructing processes and VE private data from image file (VE undumping);&lt;br /&gt;
* &amp;lt;code&amp;gt;CPT_KILL&amp;lt;/code&amp;gt; – killing VE;&lt;br /&gt;
* &amp;lt;code&amp;gt;CPT_RESUME&amp;lt;/code&amp;gt; – resuming processes from frozen state to running state.&lt;br /&gt;
&lt;br /&gt;
After all necessary kernel structures are reconstructed, processes are placed in an uninterruptible state, so that they cannot run before the reconstruction of the full VE is completed. Only after the whole VE is restored is it possible to resume network connectivity and wake up the processes. It is necessary to emphasize that it is impossible to reduce migration latency by waking up some processes before the whole VE is restored.&lt;br /&gt;
&lt;br /&gt;
== Virtualization of pids ==&lt;br /&gt;
&lt;br /&gt;
A process has an identifier (&amp;lt;code&amp;gt;PID&amp;lt;/code&amp;gt;), which is unaltered during the process lifecycle. So it is necessary to restore the pid after migration. But this is impossible if there is another process with the same pid. This problem was solved in the following way.&lt;br /&gt;
&lt;br /&gt;
Processes created inside a VE are assigned a pair of pids: one is the traditional pid, i.e. a global value which uniquely identifies the process in the host OS. The other is a virtual pid, which is unique only inside the VE but can be reused by several VEs. Processes inside a VE communicate using only their virtual pids, so that, provided virtual pids are preserved across checkpoint/restore, the whole VE can be transferred to another hardware node without causing pid conflicts. Migrated processes get different global pids, but these are invisible from inside the VE.&lt;br /&gt;
&lt;br /&gt;
The main drawback of this solution is that it is necessary to maintain a mapping of virtual pids to global pids, which introduces additional overhead for all the syscalls using a pid as an argument or a return value (e.g. &amp;lt;code&amp;gt;kill()&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;wait4()&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;fork()&amp;lt;/code&amp;gt;). The overhead (~0.3%) is visible in tests which essentially do nothing but fork and stop processes. This overhead appears only for online-migrated VEs; there is no overhead at all for VEs which have never been migrated.&lt;br /&gt;
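&lt;br /&gt;
One simple way to observe the two pid spaces (a sketch; VE 101 assumed to be running) is to compare the process list seen inside the VE with the host's view:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# vzctl exec 101 ps ax    # virtual pids, as seen inside the VE&lt;br /&gt;
# ps ax                   # global pids; after an online migration these differ from the virtual ones&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;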
&lt;br /&gt;
== Image size minimization ==&lt;br /&gt;
&lt;br /&gt;
CPT needs to save all the VE data to a file after the VE is suspended, and to transfer this file to the destination node before the VE can be restored. This means that migration latency is proportional to the total size of the image file. Though actual image sizes are surprisingly small for typical tasks&amp;lt;br&amp;gt;&lt;br /&gt;
2 Mb	– idle apache with 8 preforked children&amp;lt;br&amp;gt;&lt;br /&gt;
20 Mb	– screen + bash + cxoffice + winword + small document&amp;lt;br&amp;gt;&lt;br /&gt;
24 Mb	– screen + bash + mozilla + Java VM&amp;lt;br&amp;gt;&lt;br /&gt;
0.7 Mb	– screen + 1 bash&amp;lt;br&amp;gt;&lt;br /&gt;
3 Mb	– screen + 8 bashes in chain&amp;lt;br&amp;gt;&lt;br /&gt;
13 Mb	– screen + acroread on 7 Mb pdf file&amp;lt;br&amp;gt;&lt;br /&gt;
2 Mb	– running full Zeus-4.2r4&amp;lt;br&amp;gt;&lt;br /&gt;
0.9 Mb	– sshd with one forked child and bash&amp;lt;br&amp;gt;&lt;br /&gt;
3 Mb	– mysqld&amp;lt;br&amp;gt;&lt;br /&gt;
1 Mb	– postgresql server&amp;lt;br&amp;gt;&lt;br /&gt;
2.5 Mb	– CommuniGate Pro 4.0.6&amp;lt;br&amp;gt;&lt;br /&gt;
25 Mb	– phhttps with LinuxThreads. Doc set is /var/www/manual/*.en&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
they can be much larger when the VE to be migrated runs processes which use lots of virtual memory.&lt;br /&gt;
&lt;br /&gt;
== Limitations ==&lt;br /&gt;
&lt;br /&gt;
CPT implements migration of almost all kernel objects, but not all of them. When CPT sees that a process in a VE makes use of an unimplemented facility, it does not allow migration.&lt;br /&gt;
&lt;br /&gt;
Another kind of limitation arises when a process uses facilities which are not available on the target node. For example, a process can auto-detect the CPU at runtime and start using instructions specific to this CPU: SSE2, CMOV, etc. In this case migration is possible only when the destination node also supports those facilities.&lt;br /&gt;
&lt;br /&gt;
A third kind of limitation is caused by applications which use non-virtualized capabilities directly accessible at user level. For example, a process could use CPU timestamps to calculate timings, or it could use the SMP CPU ID to optimize memory accesses. In this case completely transparent migration would be possible only using the virtualization techniques provided by the latest Intel CPUs, and it is not even clear whether the unavoidable overhead introduced at that level of virtualization is worth it for maintaining such exotic applications.&lt;/div&gt;</summary>
		<author><name>Major</name></author>
		
	</entry>
	<entry>
		<id>https://wiki.openvz.org/index.php?title=Checkpointing_and_live_migration&amp;diff=2216</id>
		<title>Checkpointing and live migration</title>
		<link rel="alternate" type="text/html" href="https://wiki.openvz.org/index.php?title=Checkpointing_and_live_migration&amp;diff=2216"/>
		<updated>2006-09-06T12:42:23Z</updated>

		<summary type="html">&lt;p&gt;Major: Checkpointing and live migration&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;CPT is an extension to the OpenVZ kernel which allows saving the full state of a running VE and restoring it later, on the same or on a different host, in a way transparent for running applications and network connections. This technique has several applications, the most important being live (zero-downtime) migration of VEs and taking an instant snapshot of a running VE for later resume, i.e. CheckPoinTing.&lt;br /&gt;
&lt;br /&gt;
Before CPT, it was only possible to migrate a VE through shutdown and subsequent reboot. This procedure not only introduces quite a long downtime of network services, it is also not transparent for clients using the VE, making migration impossible when clients run tasks which are not tolerant to shutdowns.&lt;br /&gt;
&lt;br /&gt;
Compared to this old scheme, CPT allows migrating a VE in a way essentially invisible both to users of this VE and to external clients using network services located inside the VE. It still introduces a short delay in service, required for the actual checkpoint/restore of the processes, but this delay is indistinguishable from a short interruption of network connectivity.&lt;br /&gt;
&lt;br /&gt;
== Online migration ==&lt;br /&gt;
&lt;br /&gt;
The OpenVZ distribution includes a special utility, vzmigrate, intended to support VE migration. With its help one can perform live (zero-downtime) migration, i.e. during migration the VE pauses for a short while, and after migration it continues to work as though nothing had happened. Online migration can be performed with the&lt;br /&gt;
&amp;lt;pre&amp;gt;vzmigrate --online &amp;lt;host&amp;gt; VEID&amp;lt;/pre&amp;gt;&lt;br /&gt;
command. During online migration all VE private data is saved to an image file, which is transferred to the target host.&lt;br /&gt;
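&lt;br /&gt;
For example, to move VE 101 to a destination node reachable as dest-node (a placeholder host name):&lt;br /&gt;
&amp;lt;pre&amp;gt;vzmigrate --online dest-node 101&amp;lt;/pre&amp;gt;&lt;br /&gt;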
&lt;br /&gt;
== Manual Checkpoint and Restore Functions ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;vzmigrate&amp;lt;/code&amp;gt; is not strictly required to perform online migration. The &amp;lt;code&amp;gt;vzctl&amp;lt;/code&amp;gt; utility, accompanied by some file system backup tools, provides enough power to do all the tasks.&lt;br /&gt;
&lt;br /&gt;
A VE can be checkpointed with the command:&lt;br /&gt;
&amp;lt;pre&amp;gt;vzctl chkpnt VEID --dumpfile &amp;lt;path&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
This command saves the whole state of the running VE to a dump file and stops the VE. If the &amp;lt;code&amp;gt;--dumpfile&amp;lt;/code&amp;gt; option is not set, &amp;lt;code&amp;gt;vzctl&amp;lt;/code&amp;gt; uses the default path &amp;lt;code&amp;gt;/var/tmp/Dump.VEID&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
After this it is possible to restore the VE in exactly the same state by executing:&lt;br /&gt;
&amp;lt;pre&amp;gt;vzctl restore VEID --dumpfile &amp;lt;path&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
If the dump file and the file system are transferred to another hardware node, the same command restores the VE there with the same success.&lt;br /&gt;
&lt;br /&gt;
It is a critical requirement that the file system at the moment of restore be identical to the file system at the moment of checkpointing. If this requirement is not met then, depending on the severity of the changes, the restore process can be aborted, or the processes inside the VE can see the changes as external corruption of open files. When a VE is restored on the same node where it was checkpointed, it is enough not to touch the file system accessible by the VE. When a VE is transferred to another node, it is necessary to synchronize the VE file system before restore. &amp;lt;code&amp;gt;vzctl&amp;lt;/code&amp;gt; does not provide this functionality, so external tools (e.g. &amp;lt;code&amp;gt;rsync&amp;lt;/code&amp;gt;) are required.&lt;br /&gt;
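&lt;br /&gt;
A minimal synchronization sketch, assuming the conventional private area path &amp;lt;code&amp;gt;/vz/private/101&amp;lt;/code&amp;gt; and a destination host named dest-node (both are assumptions, not fixed by this article):&lt;br /&gt;
&amp;lt;pre&amp;gt;# on the source node: checkpoint, then copy the dump and the file system&lt;br /&gt;
vzctl chkpnt 101 --dumpfile /var/tmp/Dump.101&lt;br /&gt;
rsync -a /vz/private/101/ dest-node:/vz/private/101/&lt;br /&gt;
scp /var/tmp/Dump.101 dest-node:/var/tmp/Dump.101&lt;br /&gt;
# on the destination node: restore from the copied dump&lt;br /&gt;
vzctl restore 101 --dumpfile /var/tmp/Dump.101&amp;lt;/pre&amp;gt;&lt;br /&gt;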
&lt;br /&gt;
== Step-by-step Checkpoint and Restore ==&lt;br /&gt;
&lt;br /&gt;
Checkpointing can also be performed in stages. It consists of three steps.&lt;br /&gt;
&lt;br /&gt;
The first step is suspending the VE. At this stage CPT moves all the processes to a special state known beforehand and stops the VE network interfaces. This stage can be done with the&lt;br /&gt;
&amp;lt;pre&amp;gt;vzctl chkpnt VEID --suspend&amp;lt;/pre&amp;gt;&lt;br /&gt;
command. The second step is dumping the VE. At this phase CPT saves the state of the processes and the global state of the VE to an image file. All the process private data needs to be saved: address space, register set, opened files/pipes/sockets, System V IPC structures, current working directory, signal handlers, timers, terminal settings, user identities (uid, gid, etc.), process identities (pid, pgrp, sid, etc.), rlimits and other data. This stage can be done with the&lt;br /&gt;
&amp;lt;pre&amp;gt;vzctl chkpnt VEID --dump --dumpfile &amp;lt;path&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
command. The third step is killing or resuming the processes. If the migration succeeds, the VE can be stopped with the command:&lt;br /&gt;
&amp;lt;pre&amp;gt;vzctl chkpnt VEID --kill&amp;lt;/pre&amp;gt;&lt;br /&gt;
If the migration failed for some reason, or if the goal was taking a snapshot of the VE state for a later restore, CPT can resume the VE with:&lt;br /&gt;
&amp;lt;pre&amp;gt;vzctl chkpnt VEID --resume&amp;lt;/pre&amp;gt;&lt;br /&gt;
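&lt;br /&gt;
Putting the three stages together for one VE (a sketch; the VE id 101 and the dump path are assumptions):&lt;br /&gt;
&amp;lt;pre&amp;gt;vzctl chkpnt 101 --suspend&lt;br /&gt;
vzctl chkpnt 101 --dump --dumpfile /var/tmp/Dump.101&lt;br /&gt;
# transfer the dump file and the file system here, then either&lt;br /&gt;
vzctl chkpnt 101 --kill      # migration succeeded&lt;br /&gt;
# or&lt;br /&gt;
vzctl chkpnt 101 --resume    # migration failed, or snapshot only&amp;lt;/pre&amp;gt;&lt;br /&gt;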
&lt;br /&gt;
The process of restoring consists of two steps. The first step is to restore the processes and leave them in a special frozen state. After this step the processes are ready to continue execution; however, in some cases CPT has to perform some operations after a process is woken up, so CPT sets the process return point to a function in the CPT module. This stage can be done with the&lt;br /&gt;
&amp;lt;pre&amp;gt;vzctl restore VEID --undump --dumpfile &amp;lt;path&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
command. The second step is waking up the processes, or killing them if the restore failed. After CPT wakes up a process, it performs the necessary operations in that function and the process continues execution. This stage can be done with the&lt;br /&gt;
&amp;lt;pre&amp;gt;vzctl restore VEID --resume&lt;br /&gt;
vzctl restore VEID --kill&amp;lt;/pre&amp;gt;&lt;br /&gt;
commands.&lt;/div&gt;</summary>
		<author><name>Major</name></author>
		
	</entry>
	<entry>
		<id>https://wiki.openvz.org/index.php?title=Talk:Virtual_Ethernet_device&amp;diff=1885</id>
		<title>Talk:Virtual Ethernet device</title>
		<link rel="alternate" type="text/html" href="https://wiki.openvz.org/index.php?title=Talk:Virtual_Ethernet_device&amp;diff=1885"/>
		<updated>2006-07-17T12:42:37Z</updated>

		<summary type="html">&lt;p&gt;Major: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Under Common Configurations -&amp;gt; Simple configuration -&amp;gt; Configure devices in VE0, the example shows&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[host-node]# echo 1 &amp;gt; /proc/sys/net/ipv4/conf/eth0/forwarding&lt;br /&gt;
[host-node]# echo 1 &amp;gt; /proc/sys/net/ipv4/conf/eth0/proxy_arp&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
being run on the host node, VE0.  I don't think this can be correct, because eth0 exists in VE 101, not VE0.&lt;br /&gt;
-- {{unsigned|Andrex}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
: This is correct, because we need to enable forwarding on both network interfaces in VE0 (host-node eth0 and veth101.0) to allow the network packets on one network interface (host-node eth0) to be forwarded to another network interface (veth101.0). The same thing with proxy ARP, we need to enable it on host-node eth0 and veth101.0 network interfaces. --[[User:Major|Major]] 08:42, 17 July 2006 (EDT)&lt;/div&gt;</summary>
		<author><name>Major</name></author>
		
	</entry>
	<entry>
		<id>https://wiki.openvz.org/index.php?title=Virtual_Ethernet_device&amp;diff=1790</id>
		<title>Virtual Ethernet device</title>
		<link rel="alternate" type="text/html" href="https://wiki.openvz.org/index.php?title=Virtual_Ethernet_device&amp;diff=1790"/>
		<updated>2006-06-28T14:39:00Z</updated>

		<summary type="html">&lt;p&gt;Major: Differences between venet and veth moved to separate topic&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;A virtual ethernet device is an ethernet device which can be used inside a [[VE]]. Unlike the&lt;br /&gt;
venet network device, a veth device has a MAC address. Due to this, it can be used in configurations where veth is bridged to&lt;br /&gt;
ethX or another device and the VE user fully sets up his networking himself,&lt;br /&gt;
including IPs, gateways, etc.&lt;br /&gt;
&lt;br /&gt;
A virtual ethernet device consists of two ethernet devices: one in [[VE0]] and the other&lt;br /&gt;
in the VE. These devices are connected to each other, so if a packet goes into one&lt;br /&gt;
device it will come out of the other device.&lt;br /&gt;
&lt;br /&gt;
== Virtual ethernet device usage ==&lt;br /&gt;
&lt;br /&gt;
=== Adding veth to a VE ===&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
vzctl set &amp;lt;VEID&amp;gt; --veth_add &amp;lt;dev_name&amp;gt;,&amp;lt;dev_addr&amp;gt;,&amp;lt;ve_dev_name&amp;gt;,&amp;lt;ve_dev_addr&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Here &lt;br /&gt;
* &amp;lt;tt&amp;gt;dev_name&amp;lt;/tt&amp;gt; is the ethernet device name in the [[VE0|host system]]&lt;br /&gt;
* &amp;lt;tt&amp;gt;dev_addr&amp;lt;/tt&amp;gt; is its MAC address&lt;br /&gt;
* &amp;lt;tt&amp;gt;ve_dev_name&amp;lt;/tt&amp;gt; is the ethernet device name in the VE&lt;br /&gt;
* &amp;lt;tt&amp;gt;ve_dev_addr&amp;lt;/tt&amp;gt; is its MAC address&lt;br /&gt;
&lt;br /&gt;
MAC addresses must be entered in XX:XX:XX:XX:XX:XX format. Note that this option&lt;br /&gt;
is incremental, so devices are added to already existing ones.&lt;br /&gt;
&lt;br /&gt;
==== Examples ====&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
vzctl set 101 --veth_add veth101.0,00:12:34:56:78:9A,eth0,00:12:34:56:78:9B --save&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
After executing this command a &amp;lt;tt&amp;gt;veth&amp;lt;/tt&amp;gt; device will be created for VE 101 and the veth configuration will be saved to the VE configuration file.&lt;br /&gt;
The host-side ethernet device will have the name &amp;lt;tt&amp;gt;veth101.0&amp;lt;/tt&amp;gt; and the MAC address &amp;lt;tt&amp;gt;00:12:34:56:78:9A&amp;lt;/tt&amp;gt;.&lt;br /&gt;
The VE-side ethernet device will have the name &amp;lt;tt&amp;gt;eth0&amp;lt;/tt&amp;gt; and the MAC address &amp;lt;tt&amp;gt;00:12:34:56:78:9B&amp;lt;/tt&amp;gt;.&lt;br /&gt;
{{Note|Use random MAC addresses. Do not use MAC addresses of real eth devices, because this can lead to collisions.}}&lt;br /&gt;
&lt;br /&gt;
=== Removing veth from a VE ===&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
vzctl set &amp;lt;VEID&amp;gt; --veth_del &amp;lt;dev_name&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Here &amp;lt;tt&amp;gt;dev_name&amp;lt;/tt&amp;gt; is the ethernet device name in the [[VE0|host system]].&lt;br /&gt;
&lt;br /&gt;
==== Example ====&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
vzctl set 101 --veth_del veth101.0 --save&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
After executing this command the veth device with host-side ethernet name veth101.0 will be removed from VE 101 and the veth configuration will be updated in the VE config file.&lt;br /&gt;
&lt;br /&gt;
== Common configurations with virtual ethernet devices ==&lt;br /&gt;
Module &amp;lt;tt&amp;gt;vzethdev&amp;lt;/tt&amp;gt; must be loaded to operate with veth devices.&lt;br /&gt;
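It can be loaded and checked like this (a sketch):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[host-node]# modprobe vzethdev&lt;br /&gt;
[host-node]# lsmod | grep vzethdev&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;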
&lt;br /&gt;
=== Simple configuration with virtual ethernet device ===&lt;br /&gt;
&lt;br /&gt;
==== Start a VE ====&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[host-node]# vzctl start 101&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Add veth device to VE ====&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[host-node]# vzctl set 101 --veth_add veth101.0,00:12:34:56:78:9A,eth0,00:12:34:56:78:9B --save&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Configure devices in VE0 ====&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[host-node]# ifconfig veth101.0 0&lt;br /&gt;
[host-node]# echo 1 &amp;gt; /proc/sys/net/ipv4/conf/veth101.0/forwarding&lt;br /&gt;
[host-node]# echo 1 &amp;gt; /proc/sys/net/ipv4/conf/veth101.0/proxy_arp&lt;br /&gt;
[host-node]# echo 1 &amp;gt; /proc/sys/net/ipv4/conf/eth0/forwarding&lt;br /&gt;
[host-node]# echo 1 &amp;gt; /proc/sys/net/ipv4/conf/eth0/proxy_arp&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Configure device in VE ====&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[host-node]# vzctl enter 101&lt;br /&gt;
[ve-101]# /sbin/ifconfig eth0 0&lt;br /&gt;
[ve-101]# /sbin/ip addr add 192.168.0.101 dev eth0&lt;br /&gt;
[ve-101]# /sbin/ip route add default dev eth0&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Add route in [[VE0]] ====&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[host-node]# ip route add 192.168.0.101 dev veth101.0&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
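&lt;br /&gt;
At this point the VE should be reachable from the host node; a quick check, using the example address above:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[host-node]# ping -c 3 192.168.0.101&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;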
&lt;br /&gt;
=== Virtual ethernet device with IPv6 ===&lt;br /&gt;
&lt;br /&gt;
==== Start [[VE]] ====&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[host-node]# vzctl start 101&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Add veth device to [[VE]] ====&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[host-node]# vzctl set 101 --veth_add veth101.0,00:12:34:56:78:9A,eth0,00:12:34:56:78:9B --save&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Configure devices in [[VE0]] ====&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[host-node]# ifconfig veth101.0 0&lt;br /&gt;
[host-node]# echo 1 &amp;gt; /proc/sys/net/ipv6/conf/veth101.0/forwarding&lt;br /&gt;
[host-node]# echo 1 &amp;gt; /proc/sys/net/ipv6/conf/eth0/forwarding&lt;br /&gt;
[host-node]# echo 1 &amp;gt; /proc/sys/net/ipv6/conf/all/forwarding&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Configure device in [[VE]] ====&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[host-node]# vzctl enter 101&lt;br /&gt;
[ve-101]# /sbin/ifconfig eth0 0&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Start router advertisement daemon (radvd) for IPv6 in VE0 ====&lt;br /&gt;
First you need to edit the radvd configuration file. Here is a simple example of &amp;lt;tt&amp;gt;/etc/radvd.conf&amp;lt;/tt&amp;gt;:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
interface veth101.0&lt;br /&gt;
{&lt;br /&gt;
        AdvSendAdvert on;&lt;br /&gt;
        MinRtrAdvInterval 3;&lt;br /&gt;
        MaxRtrAdvInterval 10;&lt;br /&gt;
        AdvHomeAgentFlag off;&lt;br /&gt;
&lt;br /&gt;
        prefix 3ffe:2400:0:0::/64&lt;br /&gt;
        {&lt;br /&gt;
                AdvOnLink on;&lt;br /&gt;
                AdvAutonomous on;&lt;br /&gt;
                AdvRouterAddr off;&lt;br /&gt;
        };&lt;br /&gt;
};&lt;br /&gt;
&lt;br /&gt;
interface eth0&lt;br /&gt;
{&lt;br /&gt;
        AdvSendAdvert on;&lt;br /&gt;
        MinRtrAdvInterval 3;&lt;br /&gt;
        MaxRtrAdvInterval 10;&lt;br /&gt;
        AdvHomeAgentFlag off;&lt;br /&gt;
&lt;br /&gt;
        prefix 3ffe:0302:0011:0002::/64&lt;br /&gt;
        {&lt;br /&gt;
                AdvOnLink on;&lt;br /&gt;
                AdvAutonomous on;&lt;br /&gt;
                AdvRouterAddr off;&lt;br /&gt;
        };&lt;br /&gt;
};&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Then, start radvd:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[host-node]# /etc/init.d/radvd start&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Add IPv6 addresses to devices in [[VE0]] ====&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[host-node]# ip addr add dev veth101.0 3ffe:2400::212:34ff:fe56:789a/64&lt;br /&gt;
[host-node]# ip addr add dev eth0 3ffe:0302:0011:0002:211:22ff:fe33:4455/64&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Virtual ethernet devices can be joined in one bridge ===&lt;br /&gt;
Perform steps 1-4 from the Simple configuration chapter for several VEs and/or veth devices.&lt;br /&gt;
&lt;br /&gt;
==== Create bridge device ====&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[host-node]# brctl addbr vzbr0&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Add veth devices to bridge ====&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[host-node]# brctl addif vzbr0 veth101.0&lt;br /&gt;
...&lt;br /&gt;
[host-node]# brctl addif vzbr0 veth101.n&lt;br /&gt;
[host-node]# brctl addif vzbr0 veth102.0&lt;br /&gt;
...&lt;br /&gt;
...&lt;br /&gt;
[host-node]# brctl addif vzbr0 vethXXX.N&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Configure bridge device ====&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[host-node]# ifconfig vzbr0 0&lt;br /&gt;
[host-node]# echo 1 &amp;gt; /proc/sys/net/ipv4/conf/vzbr0/forwarding&lt;br /&gt;
[host-node]# echo 1 &amp;gt; /proc/sys/net/ipv4/conf/vzbr0/proxy_arp&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Add routes in [[VE0]] ====&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[host-node]# ip route add 192.168.101.1 dev vzbr0&lt;br /&gt;
...&lt;br /&gt;
[host-node]# ip route add 192.168.101.n dev vzbr0&lt;br /&gt;
[host-node]# ip route add 192.168.102.1 dev vzbr0&lt;br /&gt;
...&lt;br /&gt;
...&lt;br /&gt;
[host-node]# ip route add 192.168.XXX.N dev vzbr0&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This way you'll have a more convenient configuration, i.e. all routes to VEs go through this bridge, and VEs can communicate with each other even without these routes.&lt;br /&gt;
&lt;br /&gt;
=== Virtual ethernet devices + VLAN ===&lt;br /&gt;
This configuration can be set up by adding a VLAN device to the previous configuration, as sketched below.&lt;br /&gt;
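A minimal sketch, assuming the 8021q module is loaded; the VLAN id 10 and the bridge name vzbr10 are arbitrary values chosen for illustration:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[host-node]# vconfig add eth0 10&lt;br /&gt;
[host-node]# ifconfig eth0.10 up&lt;br /&gt;
[host-node]# brctl addbr vzbr10&lt;br /&gt;
[host-node]# brctl addif vzbr10 eth0.10&lt;br /&gt;
[host-node]# brctl addif vzbr10 veth101.0&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;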
&lt;br /&gt;
== External links ==&lt;br /&gt;
* [http://www.tldp.org/HOWTO/Linux+IPv6-HOWTO/hints-daemons-radvd.html Linux IPv6 HOWTO, a chapter about radvd]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category: Networking]]&lt;br /&gt;
[[Category: HOWTO]]&lt;/div&gt;</summary>
		<author><name>Major</name></author>
		
	</entry>
	<entry>
		<id>https://wiki.openvz.org/index.php?title=Differences_between_venet_and_veth&amp;diff=1789</id>
		<title>Differences between venet and veth</title>
		<link rel="alternate" type="text/html" href="https://wiki.openvz.org/index.php?title=Differences_between_venet_and_veth&amp;diff=1789"/>
		<updated>2006-06-28T14:38:06Z</updated>

		<summary type="html">&lt;p&gt;Major: Category changed&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Differences between venet and veth =&lt;br /&gt;
* veth allows broadcasts in the VE, so you can even run a dhcp server inside the VE, a samba server with domain broadcasts, and other such things.&lt;br /&gt;
* veth has some security implications, so it is not recommended in untrusted environments like HSP. This is due to broadcasts, traffic sniffing, possible IP collisions, etc.; i.e. a VE user can actually ruin your ethernet network with such direct access to the ethernet layer.&lt;br /&gt;
* With a venet device, only the node administrator can assign an IP to a VE. With a veth device, network settings can be done fully on the VE side: the VE should set up the correct gateway, IP/mask, etc., and the node admin then only chooses where the traffic goes.&lt;br /&gt;
* veth devices can be bridged together and/or with other devices. For example, in the host system the admin can bridge veth devices from 2 VEs with some VLAN eth0.X. In this case, these 2 VEs will be connected to this VLAN.&lt;br /&gt;
* A venet device is a bit faster and more efficient.&lt;br /&gt;
* With veth devices, IPv6 auto-generates an address from the MAC.&lt;br /&gt;
&lt;br /&gt;
The brief summary:&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;text-align: center;&amp;quot;&lt;br /&gt;
|+ '''Differences between veth and venet'''&lt;br /&gt;
! Feature !! veth !! venet&lt;br /&gt;
|-&lt;br /&gt;
! MAC address&lt;br /&gt;
| {{yes}} || {{no}}&lt;br /&gt;
|-&lt;br /&gt;
! Broadcasts inside VE&lt;br /&gt;
| {{yes}} || {{no}}&lt;br /&gt;
|-&lt;br /&gt;
! Traffic sniffing&lt;br /&gt;
| {{yes}} || {{no}}&lt;br /&gt;
|-&lt;br /&gt;
! Network security&lt;br /&gt;
| low &amp;lt;ref&amp;gt;Due to broadcasts, sniffing and possible IP collisions etc.&amp;lt;/ref&amp;gt; || high&lt;br /&gt;
|-                         &lt;br /&gt;
! Can be used in bridges&lt;br /&gt;
| {{yes}} || {{no}}&lt;br /&gt;
|-&lt;br /&gt;
! Performance&lt;br /&gt;
| fast || fastest&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;references/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category: Networking]]&lt;/div&gt;</summary>
		<author><name>Major</name></author>
		
	</entry>
	<entry>
		<id>https://wiki.openvz.org/index.php?title=Virtual_network_device&amp;diff=1788</id>
		<title>Virtual network device</title>
		<link rel="alternate" type="text/html" href="https://wiki.openvz.org/index.php?title=Virtual_network_device&amp;diff=1788"/>
		<updated>2006-06-28T14:37:33Z</updated>

		<summary type="html">&lt;p&gt;Major: Category changed&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The virtual network device (venet) is the default network device for a [[VE]]. This network device looks like a ppp connection between the [[VE]] and the [[VE0|host system]]. It does packet switching based on the IP header.&lt;br /&gt;
&lt;br /&gt;
venet is created automatically on [[VE]] start. vzctl scripts set up the appropriate IP and settings on venet inside the VPS.&lt;br /&gt;
&lt;br /&gt;
=  Virtual network device usage =&lt;br /&gt;
&lt;br /&gt;
== Adding IP address to a VE ==&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
vzctl set &amp;lt;VEID&amp;gt; --ipadd &amp;lt;IP1&amp;gt;[,&amp;lt;IP2&amp;gt;,...] [--save]&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
{{Note|This option is incremental, so IP addresses are added to already existing ones.}}&lt;br /&gt;
&lt;br /&gt;
=== Example ===&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
vzctl set 101 --ipadd 10.0.0.1 --save&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
After executing this command the IP address 10.0.0.1 will be added to VE 101 and the IP configuration will be saved to the VE configuration file.&lt;br /&gt;
&lt;br /&gt;
== Removing IP address from a VE ==&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
vzctl set &amp;lt;VEID&amp;gt; --ipdel &amp;lt;IP1&amp;gt;[,&amp;lt;IP2&amp;gt;,...] [--save]&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Example ===&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
vzctl set 101 --ipdel 10.0.0.1&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
After executing this command the IP address 10.0.0.1 will be removed from VE 101, but the IP configuration will not be changed in the VE config file, so after a VE reboot the IP address 10.0.0.1 will be assigned to this VE again.&lt;br /&gt;
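To remove the address permanently, add the &amp;lt;tt&amp;gt;--save&amp;lt;/tt&amp;gt; option, mirroring the add example above:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
vzctl set 101 --ipdel 10.0.0.1 --save&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;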
&lt;br /&gt;
&lt;br /&gt;
[[Category: Networking]]&lt;br /&gt;
[[Category: HOWTO]]&lt;/div&gt;</summary>
		<author><name>Major</name></author>
		
	</entry>
	<entry>
		<id>https://wiki.openvz.org/index.php?title=Differences_between_venet_and_veth&amp;diff=1787</id>
		<title>Differences between venet and veth</title>
		<link rel="alternate" type="text/html" href="https://wiki.openvz.org/index.php?title=Differences_between_venet_and_veth&amp;diff=1787"/>
		<updated>2006-06-28T14:34:27Z</updated>

		<summary type="html">&lt;p&gt;Major: Differences between venet and veth&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Differences between venet and veth =&lt;br /&gt;
* veth allows broadcasts in the VE, so you can even run a dhcp server inside the VE, a samba server with domain broadcasts, and other such things.&lt;br /&gt;
* veth has some security implications, so it is not recommended in untrusted environments like HSP. This is due to broadcasts, traffic sniffing, possible IP collisions, etc.; i.e. a VE user can actually ruin your ethernet network with such direct access to the ethernet layer.&lt;br /&gt;
* With a venet device, only the node administrator can assign an IP to a VE. With a veth device, network settings can be done fully on the VE side: the VE should set up the correct gateway, IP/mask, etc., and the node admin then only chooses where the traffic goes.&lt;br /&gt;
* veth devices can be bridged together and/or with other devices. For example, in the host system the admin can bridge veth devices from 2 VEs with some VLAN eth0.X. In this case, these 2 VEs will be connected to this VLAN.&lt;br /&gt;
* A venet device is a bit faster and more efficient.&lt;br /&gt;
* With veth devices, IPv6 auto-generates an address from the MAC.&lt;br /&gt;
&lt;br /&gt;
The brief summary:&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;text-align: center;&amp;quot;&lt;br /&gt;
|+ '''Differences between veth and venet'''&lt;br /&gt;
! Feature !! veth !! venet&lt;br /&gt;
|-&lt;br /&gt;
! MAC address&lt;br /&gt;
| {{yes}} || {{no}}&lt;br /&gt;
|-&lt;br /&gt;
! Broadcasts inside VE&lt;br /&gt;
| {{yes}} || {{no}}&lt;br /&gt;
|-&lt;br /&gt;
! Traffic sniffing&lt;br /&gt;
| {{yes}} || {{no}}&lt;br /&gt;
|-&lt;br /&gt;
! Network security&lt;br /&gt;
| low &amp;lt;ref&amp;gt;Due to broadcasts, sniffing and possible IP collisions etc.&amp;lt;/ref&amp;gt; || high&lt;br /&gt;
|-                         &lt;br /&gt;
! Can be used in bridges&lt;br /&gt;
| {{yes}} || {{no}}&lt;br /&gt;
|-&lt;br /&gt;
! Performance&lt;br /&gt;
| fast || fastest&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;references/&amp;gt;&lt;/div&gt;</summary>
		<author><name>Major</name></author>
		
	</entry>
	<entry>
		<id>https://wiki.openvz.org/index.php?title=Virtual_network_device&amp;diff=1786</id>
		<title>Virtual network device</title>
		<link rel="alternate" type="text/html" href="https://wiki.openvz.org/index.php?title=Virtual_network_device&amp;diff=1786"/>
		<updated>2006-06-28T14:32:25Z</updated>

		<summary type="html">&lt;p&gt;Major: Virtual network device page added&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The virtual network device (venet) is the default network device for a [[VE]]. This network device looks like a ppp connection between the [[VE]] and the [[VE0|host system]]. It does packet switching based on the IP header.&lt;br /&gt;
&lt;br /&gt;
venet is created automatically on [[VE]] start. vzctl scripts set up the appropriate IP and settings on venet inside the VPS.&lt;br /&gt;
&lt;br /&gt;
=  Virtual network device usage =&lt;br /&gt;
&lt;br /&gt;
== Adding IP address to a VE ==&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
vzctl set &amp;lt;VEID&amp;gt; --ipadd &amp;lt;IP1&amp;gt;[,&amp;lt;IP2&amp;gt;,...] [--save]&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
{{Note|This option is incremental, so IP addresses are added to already existing ones.}}&lt;br /&gt;
&lt;br /&gt;
=== Example ===&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
vzctl set 101 --ipadd 10.0.0.1 --save&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
After executing this command the IP address 10.0.0.1 will be added to VE 101 and the IP configuration will be saved to the VE configuration file.&lt;br /&gt;
&lt;br /&gt;
== Removing IP address from a VE ==&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
vzctl set &amp;lt;VEID&amp;gt; --ipdel &amp;lt;IP1&amp;gt;[,&amp;lt;IP2&amp;gt;,...] [--save]&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Example ===&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
vzctl set 101 --ipdel 10.0.0.1&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
After executing this command the IP address 10.0.0.1 will be removed from VE 101, but the IP configuration will not be changed in the VE config file, so after a VE reboot the IP address 10.0.0.1 will be assigned to this VE again.&lt;/div&gt;</summary>
		<author><name>Major</name></author>
		
	</entry>
	<entry>
		<id>https://wiki.openvz.org/index.php?title=Virtual_Ethernet_device&amp;diff=1548</id>
		<title>Virtual Ethernet device</title>
		<link rel="alternate" type="text/html" href="https://wiki.openvz.org/index.php?title=Virtual_Ethernet_device&amp;diff=1548"/>
		<updated>2006-06-09T07:56:26Z</updated>

		<summary type="html">&lt;p&gt;Major: General info updated&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;A virtual ethernet device is an ethernet device which can be used inside a [[VE]]. Unlike the&lt;br /&gt;
venet network device, a veth device has a MAC address. Due to this, it can be used in configurations where veth is bridged to&lt;br /&gt;
ethX or another device and the VPS user fully sets up his networking himself,&lt;br /&gt;
including IPs, gateways, etc.&lt;br /&gt;
&lt;br /&gt;
A virtual ethernet device consists of two ethernet devices: one in [[VE0]] and the other&lt;br /&gt;
in the VE. These devices are connected to each other, so if a packet goes into one&lt;br /&gt;
device it will come out of the other device.&lt;br /&gt;
&lt;br /&gt;
List of differences from the venet device:&lt;br /&gt;
* veth allows broadcasts in the VPS, so you can even run a dhcp server inside the VPS, a samba server with domain broadcasts, and other such things.&lt;br /&gt;
* veth has some security implications, so it is not recommended in untrusted environments like HSP. This is due to broadcasts, traffic sniffing, possible IP collisions, etc.; i.e. a VPS user can actually ruin your ethernet network with such direct access to the ethernet layer.&lt;br /&gt;
* With a venet device, only the node administrator can assign an IP to a VPS. With a veth device, network settings can be done fully on the VPS side: the VPS should set up the correct gateway, IP/mask, etc., and the node admin then only chooses where the traffic goes.&lt;br /&gt;
* veth devices can be bridged together and/or with other devices. For example, in the host system the admin can bridge veth devices from 2 VPSs with some VLAN eth0.X. In this case, these 2 VPSs will be connected to this VLAN.&lt;br /&gt;
* A venet device is a bit faster and more efficient.&lt;br /&gt;
* With veth devices, IPv6 auto-generates an address from the MAC.&lt;br /&gt;
&lt;br /&gt;
The brief summary:&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;text-align: center;&amp;quot;&lt;br /&gt;
|+ '''Differences between veth and venet'''&lt;br /&gt;
! Feature !! veth !! venet&lt;br /&gt;
|-&lt;br /&gt;
! MAC address&lt;br /&gt;
| + || -&lt;br /&gt;
|-&lt;br /&gt;
! Broadcasts inside VPS&lt;br /&gt;
| + || -&lt;br /&gt;
|-&lt;br /&gt;
! Traffic sniffing&lt;br /&gt;
| + || -&lt;br /&gt;
|-&lt;br /&gt;
! Network security&lt;br /&gt;
| low &amp;lt;ref&amp;gt;Due to broadcasts, sniffing and possible IP collisions etc.&amp;lt;/ref&amp;gt; || high&lt;br /&gt;
|-&lt;br /&gt;
! Can be used in bridges&lt;br /&gt;
| + || -&lt;br /&gt;
|-&lt;br /&gt;
! Performance&lt;br /&gt;
| fast || fastest&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;references/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Virtual ethernet device usage ==&lt;br /&gt;
&lt;br /&gt;
=== Adding veth to a VE ===&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
vzctl set &amp;lt;VEID&amp;gt; --veth_add &amp;lt;dev_name&amp;gt;,&amp;lt;dev_addr&amp;gt;,&amp;lt;ve_dev_name&amp;gt;,&amp;lt;ve_dev_addr&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Here &lt;br /&gt;
* &amp;lt;tt&amp;gt;dev_name&amp;lt;/tt&amp;gt; is the ethernet device name in the [[VE0|host system]]&lt;br /&gt;
* &amp;lt;tt&amp;gt;dev_addr&amp;lt;/tt&amp;gt; is its MAC address&lt;br /&gt;
* &amp;lt;tt&amp;gt;ve_dev_name&amp;lt;/tt&amp;gt; is the ethernet device name in the VE&lt;br /&gt;
* &amp;lt;tt&amp;gt;ve_dev_addr&amp;lt;/tt&amp;gt; is its MAC address&lt;br /&gt;
&lt;br /&gt;
MAC addresses must be entered in XX:XX:XX:XX:XX:XX format. Note that this option&lt;br /&gt;
is incremental, so devices are added to already existing ones.&lt;br /&gt;
&lt;br /&gt;
==== Examples ====&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
vzctl set 101 --veth_add veth101.0,00:12:34:56:78:9A,eth0,00:12:34:56:78:9B --save&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
After executing this command a &amp;lt;tt&amp;gt;veth&amp;lt;/tt&amp;gt; device will be created for VE 101 and the veth configuration will be saved to the VE configuration file.&lt;br /&gt;
The host-side ethernet device will have the name &amp;lt;tt&amp;gt;veth101.0&amp;lt;/tt&amp;gt; and the MAC address &amp;lt;tt&amp;gt;00:12:34:56:78:9A&amp;lt;/tt&amp;gt;.&lt;br /&gt;
The VE-side ethernet device will have the name &amp;lt;tt&amp;gt;eth0&amp;lt;/tt&amp;gt; and the MAC address &amp;lt;tt&amp;gt;00:12:34:56:78:9B&amp;lt;/tt&amp;gt;.&lt;br /&gt;
{{Note|Use random MAC addresses. Do not use MAC addresses of real eth devices, because this can lead to collisions.}}&lt;br /&gt;
&lt;br /&gt;
=== Removing veth from a VE ===&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
vzctl set &amp;lt;VEID&amp;gt; --veth_del &amp;lt;dev_name&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Here &amp;lt;tt&amp;gt;dev_name&amp;lt;/tt&amp;gt; is the ethernet device name in the [[VE0|host system]].&lt;br /&gt;
&lt;br /&gt;
==== Example ====&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
vzctl set 101 --veth_del veth101.0 --save&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
After executing this command the veth device with host-side ethernet name veth101.0 will be removed from VE 101 and the veth configuration will be updated in the VE config file.&lt;br /&gt;
&lt;br /&gt;
== Common configurations with virtual ethernet devices ==&lt;br /&gt;
Module &amp;lt;tt&amp;gt;vzethdev&amp;lt;/tt&amp;gt; must be loaded to operate with veth devices.&lt;br /&gt;
&lt;br /&gt;
=== Simple configuration with virtual ethernet device ===&lt;br /&gt;
&lt;br /&gt;
==== Start a VE ====&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[host-node]# vzctl start 101&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Add veth device to VE ====&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[host-node]# vzctl set 101 --veth_add veth101.0,00:12:34:56:78:9A,eth0,00:12:34:56:78:9B --save&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Configure devices in VE0 ====&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[host-node]# ifconfig veth101.0 up&lt;br /&gt;
[host-node]# echo 1 &amp;gt; /proc/sys/net/ipv4/conf/veth101.0/forwarding&lt;br /&gt;
[host-node]# echo 1 &amp;gt; /proc/sys/net/ipv4/conf/veth101.0/proxy_arp&lt;br /&gt;
[host-node]# echo 1 &amp;gt; /proc/sys/net/ipv4/conf/eth0/forwarding&lt;br /&gt;
[host-node]# echo 1 &amp;gt; /proc/sys/net/ipv4/conf/eth0/proxy_arp&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Configure device in VE ====&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[host-node]# vzctl enter 101&lt;br /&gt;
[ve-101]# /sbin/ifconfig eth0 up&lt;br /&gt;
[ve-101]# /sbin/ip addr add 192.168.0.101 dev eth0&lt;br /&gt;
[ve-101]# /sbin/ip route add default dev eth0&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Add route in [[VE0]] ====&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[host-node]# ip route add 192.168.0.101 dev veth101.0&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Virtual ethernet device with IPv6 ===&lt;br /&gt;
&lt;br /&gt;
==== Start [[VE]] ====&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[host-node]# vzctl start 101&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Add veth device to [[VE]] ====&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[host-node]# vzctl set 101 --veth_add veth101.0,00:12:34:56:78:9A,eth0,00:12:34:56:78:9B --save&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Configure devices in [[VE0]] ====&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[host-node]# ifconfig veth101.0 up&lt;br /&gt;
[host-node]# echo 1 &amp;gt; /proc/sys/net/ipv6/conf/veth101.0/forwarding&lt;br /&gt;
[host-node]# echo 1 &amp;gt; /proc/sys/net/ipv6/conf/eth0/forwarding&lt;br /&gt;
[host-node]# echo 1 &amp;gt; /proc/sys/net/ipv6/conf/all/forwarding&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Configure device in [[VE]] ====&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[host-node]# vzctl enter 101&lt;br /&gt;
[ve-101]# /sbin/ifconfig eth0 up&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Start router advertisement daemon (radvd) for IPv6 in VE0 ====&lt;br /&gt;
First you need to edit the radvd configuration file. Here is a simple example of &amp;lt;tt&amp;gt;/etc/radvd.conf&amp;lt;/tt&amp;gt;:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
interface veth101.0&lt;br /&gt;
{&lt;br /&gt;
        AdvSendAdvert on;&lt;br /&gt;
        MinRtrAdvInterval 3;&lt;br /&gt;
        MaxRtrAdvInterval 10;&lt;br /&gt;
        AdvHomeAgentFlag off;&lt;br /&gt;
&lt;br /&gt;
        prefix 3ffe:2400:0:0::/64&lt;br /&gt;
        {&lt;br /&gt;
                AdvOnLink on;&lt;br /&gt;
                AdvAutonomous on;&lt;br /&gt;
                AdvRouterAddr off;&lt;br /&gt;
        };&lt;br /&gt;
};&lt;br /&gt;
&lt;br /&gt;
interface eth0&lt;br /&gt;
{&lt;br /&gt;
        AdvSendAdvert on;&lt;br /&gt;
        MinRtrAdvInterval 3;&lt;br /&gt;
        MaxRtrAdvInterval 10;&lt;br /&gt;
        AdvHomeAgentFlag off;&lt;br /&gt;
&lt;br /&gt;
        prefix 3ffe:0302:0011:0002::/64&lt;br /&gt;
        {&lt;br /&gt;
                AdvOnLink on;&lt;br /&gt;
                AdvAutonomous on;&lt;br /&gt;
                AdvRouterAddr off;&lt;br /&gt;
        };&lt;br /&gt;
};&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Then, start radvd:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[host-node]# /etc/init.d/radvd start&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Add IPv6 addresses to devices in [[VE0]] ====&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[host-node]# ip addr add dev veth101.0 3ffe:2400::212:34ff:fe56:789a/64&lt;br /&gt;
[host-node]# ip addr add dev eth0 3ffe:0302:0011:0002:211:22ff:fe33:4455/64&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Virtual ethernet devices can be joined in one bridge ===&lt;br /&gt;
Perform steps 1-4 from the Simple configuration chapter for several VEs and/or veth devices.&lt;br /&gt;
&lt;br /&gt;
==== Create bridge device ====&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[host-node]# brctl addbr vzbr0&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Add veth devices to bridge ====&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[host-node]# brctl addif vzbr0 veth101.0&lt;br /&gt;
...&lt;br /&gt;
[host-node]# brctl addif vzbr0 veth101.n&lt;br /&gt;
[host-node]# brctl addif vzbr0 veth102.0&lt;br /&gt;
...&lt;br /&gt;
...&lt;br /&gt;
[host-node]# brctl addif vzbr0 vethXXX.N&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Configure bridge device ====&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[host-node]# ifconfig vzbr0 up&lt;br /&gt;
[host-node]# echo 1 &amp;gt; /proc/sys/net/ipv4/conf/vzbr0/forwarding&lt;br /&gt;
[host-node]# echo 1 &amp;gt; /proc/sys/net/ipv4/conf/vzbr0/proxy_arp&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Add routes in [[VE0]] ====&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[host-node]# ip route add 192.168.101.1 dev vzbr0&lt;br /&gt;
...&lt;br /&gt;
[host-node]# ip route add 192.168.101.n dev vzbr0&lt;br /&gt;
[host-node]# ip route add 192.168.102.1 dev vzbr0&lt;br /&gt;
...&lt;br /&gt;
...&lt;br /&gt;
[host-node]# ip route add 192.168.XXX.N dev vzbr0&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This way you'll have a more convenient configuration, i.e. all routes to VEs go through this bridge, and VEs can communicate with each other even without these routes.&lt;br /&gt;
&lt;br /&gt;
=== Virtual ethernet devices + VLAN ===&lt;br /&gt;
This configuration can be done by adding a VLAN device to the previous configuration.&lt;br /&gt;
&lt;br /&gt;
== External links ==&lt;br /&gt;
* [http://www.tldp.org/HOWTO/Linux+IPv6-HOWTO/hints-daemons-radvd.html Linux IPv6 HOWTO, a chapter about radvd]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category: Networking]]&lt;br /&gt;
[[Category: HOWTO]]&lt;/div&gt;</summary>
		<author><name>Major</name></author>
		
	</entry>
	<entry>
		<id>https://wiki.openvz.org/index.php?title=Virtual_Ethernet_device&amp;diff=1518</id>
		<title>Virtual Ethernet device</title>
		<link rel="alternate" type="text/html" href="https://wiki.openvz.org/index.php?title=Virtual_Ethernet_device&amp;diff=1518"/>
		<updated>2006-06-07T14:03:17Z</updated>

		<summary type="html">&lt;p&gt;Major: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;A virtual ethernet device is an ethernet device which can be used inside a [[VE]]. Unlike the&lt;br /&gt;
venet network device, a veth device has a MAC address.&lt;br /&gt;
&lt;br /&gt;
A virtual ethernet device consists of two ethernet devices: one in [[VE0]] and the other&lt;br /&gt;
in the VE. These devices are connected to each other, so if a packet goes into one&lt;br /&gt;
device it will come out of the other device.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Virtual ethernet device usage ==&lt;br /&gt;
&lt;br /&gt;
=== Adding veth to a VE ===&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
vzctl set &amp;lt;VEID&amp;gt; --veth_add &amp;lt;dev_name&amp;gt;,&amp;lt;dev_addr&amp;gt;,&amp;lt;ve_dev_name&amp;gt;,&amp;lt;ve_dev_addr&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Here &lt;br /&gt;
* &amp;lt;tt&amp;gt;dev_name&amp;lt;/tt&amp;gt; is the ethernet device name in the [[VE0|host system]]&lt;br /&gt;
* &amp;lt;tt&amp;gt;dev_addr&amp;lt;/tt&amp;gt; is its MAC address&lt;br /&gt;
* &amp;lt;tt&amp;gt;ve_dev_name&amp;lt;/tt&amp;gt; is the ethernet device name in the VE&lt;br /&gt;
* &amp;lt;tt&amp;gt;ve_dev_addr&amp;lt;/tt&amp;gt; is its MAC address&lt;br /&gt;
&lt;br /&gt;
MAC addresses must be entered in XX:XX:XX:XX:XX:XX format. Note that this option&lt;br /&gt;
is incremental, so devices are added to already existing ones.&lt;br /&gt;
&lt;br /&gt;
=== Removing veth from a VE ===&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
vzctl set &amp;lt;VEID&amp;gt; --veth_del &amp;lt;dev_name&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Here &amp;lt;tt&amp;gt;dev_name&amp;lt;/tt&amp;gt; is the ethernet device name in the [[VE0|host system]].&lt;br /&gt;
&lt;br /&gt;
== Common configurations with virtual ethernet devices ==&lt;br /&gt;
&lt;br /&gt;
=== Virtual ethernet device can be used with IPv6 ===&lt;br /&gt;
You'll need to set up an IPv6 address on the ethernet device inside the VE, add a default route inside the VE,&lt;br /&gt;
and add a route to this address via the host-side veth device in the host system.&lt;br /&gt;
Do not forget to enable forwarding and proxy_arp on the host-side veth device.&lt;br /&gt;
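&lt;br /&gt;
A sketch of these steps; the prefix 3ffe:2400::/64, the VE id 101 and the device names are placeholders:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[ve-101]# ip -6 addr add 3ffe:2400::101/64 dev eth0&lt;br /&gt;
[ve-101]# ip -6 route add default dev eth0&lt;br /&gt;
[host-node]# ip -6 route add 3ffe:2400::101/128 dev veth101.0&lt;br /&gt;
[host-node]# echo 1 &amp;gt; /proc/sys/net/ipv6/conf/veth101.0/forwarding&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;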
&lt;br /&gt;
=== Virtual ethernet devices can be joined in one bridge ===&lt;br /&gt;
This way you'll have a more convenient configuration, i.e. all routes to VEs will go&lt;br /&gt;
through this bridge, and VEs can communicate with each other even without these routes.&lt;br /&gt;
&lt;br /&gt;
=== Virtual ethernet devices + VLAN ===&lt;br /&gt;
This configuration can be done by adding a VLAN device to the previous configuration.&lt;br /&gt;
&lt;br /&gt;
[[Category: Networking]]&lt;br /&gt;
[[Category: HOWTO]]&lt;/div&gt;</summary>
		<author><name>Major</name></author>
		
	</entry>
</feed>