<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://wiki.openvz.org/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Finist</id>
	<title>OpenVZ Virtuozzo Containers Wiki - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://wiki.openvz.org/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Finist"/>
	<link rel="alternate" type="text/html" href="https://wiki.openvz.org/Special:Contributions/Finist"/>
	<updated>2026-05-12T19:49:55Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.31.1</generator>
	<entry>
		<id>https://wiki.openvz.org/index.php?title=Tcache&amp;diff=23346</id>
		<title>Tcache</title>
		<link rel="alternate" type="text/html" href="https://wiki.openvz.org/index.php?title=Tcache&amp;diff=23346"/>
		<updated>2020-07-23T17:06:18Z</updated>

		<summary type="html">&lt;p&gt;Finist: tswap mentions removed&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Warning|This article describes Virtuozzo/OpenVZ version 7.}}&lt;br /&gt;
&lt;br /&gt;
== Brief tech explanation ==&lt;br /&gt;
&lt;br /&gt;
Transcendent file cache (tcache) is a driver for [https://www.kernel.org/doc/html/v4.18/vm/cleancache.html cleancache], which stores reclaimed pages in memory unmodified.&amp;lt;br&amp;gt;&lt;br /&gt;
Its purpose is to adopt pages evicted from a memory cgroup on '''local''' pressure (inside a Container), so that they can be fetched back later without costly disk accesses.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Detailed user-level explanation ==&lt;br /&gt;
&lt;br /&gt;
Tcache is intended to increase overall Hardware Node performance only&lt;br /&gt;
on '''undercommitted''' Nodes, i.e. Nodes where the sum of the memory limits of all Containers placed on the Node&lt;br /&gt;
is less than the Hardware Node RAM size.&lt;br /&gt;
&lt;br /&gt;
=== Example use case ===&lt;br /&gt;
You have a Node with 1 TB of RAM and run 500 Containers on it, each limited to 1 GB of memory (no swap, for simplicity).&amp;lt;br&amp;gt;&lt;br /&gt;
Let's consider the Containers to be more or less identical: similar load, similar activity inside.&amp;lt;br&amp;gt;&lt;br /&gt;
=&amp;gt; normally those Containers should use at most 500 GB of physical RAM, and 500 GB will simply sit free on the Node.&lt;br /&gt;
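The back-of-the-envelope numbers above can be checked with a few lines of shell (a sketch; the figures are the article's, with 1 TB rounded to 1000 GB as the article does):&lt;br /&gt;

```shell
# Undercommit check for the example Node: 1 TB of RAM, 500 Containers
# limited to 1 GB each (figures taken from the article).
node_ram_gb=1000
committed_gb=$((500 * 1))
headroom_gb=$((node_ram_gb - committed_gb))
echo "$headroom_gb"   # prints 500: half the Node's RAM stays free
```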
&lt;br /&gt;
&lt;br /&gt;
It looks like a simple situation: the Node is underloaded, so let's put more Containers on it. But that's not always true -&amp;lt;br&amp;gt;&lt;br /&gt;
it depends on what the bottleneck on the Node is, which in turn depends on the real workload of the Containers running there.&amp;lt;br&amp;gt;&lt;br /&gt;
Most often, in '''real life''', the '''disk''' becomes the '''bottleneck''' first - not the RAM, not the CPU.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Let's assume all those Containers run, say, cPanel, which by default collects some stats every&amp;lt;br&amp;gt;&lt;br /&gt;
15 minutes or so - the stat collection process is run via crontab.&lt;br /&gt;
&lt;br /&gt;
{{Note|Randomizing the times of crontab jobs is a good idea, but who usually does this for Containers?&lt;br /&gt;
We did it for the application templates we shipped in Virtuozzo, but a lot of software is simply installed and configured inside Containers, where we cannot do this.&amp;lt;br&amp;gt;&lt;br /&gt;
And Hosting Providers are often not allowed to touch data inside Containers - so most often cron jobs are not randomized.}}&lt;br /&gt;
&lt;br /&gt;
OK, it does not matter how, but let's assume we get such a workload: every 15 minutes or so (the important point is that the data is accessed quite rarely),&amp;lt;br&amp;gt;&lt;br /&gt;
each Container accesses many small files. Let it be&lt;br /&gt;
&lt;br /&gt;
* just 100 small files to gather stats and save them somewhere.&lt;br /&gt;
* In 500 Containers. Simultaneously.&lt;br /&gt;
* In parallel with other regular i/o workload.&lt;br /&gt;
* On HDDs.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
It's a nightmare for the disk subsystem: 500 Containers reading 100 files each means 50000 random reads, so if an HDD provides 100 IOPS, it will take 50000/100/60 = 8.(3) minutes(!) to handle.&amp;lt;br&amp;gt;&lt;br /&gt;
OK, there could be a RAID; say it can handle 300 IOPS - that still results in 2.(7) minutes, and we have forgotten about the other regular i/o.&amp;lt;br&amp;gt;&lt;br /&gt;
So every 15 minutes the Node becomes almost unresponsive for several minutes until it handles all the random i/o generated by stats collection.&lt;br /&gt;
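The stall estimate above is plain arithmetic; a tiny shell sketch of it (all IOPS figures are the article's assumptions):&lt;br /&gt;

```shell
# 500 Containers x 100 small files = 50000 random reads.
total_ios=$((500 * 100))
hdd_seconds=$((total_ios / 100))    # single HDD at 100 IOPS: 500 s (~8.3 min)
raid_seconds=$((total_ios / 300))   # RAID at 300 IOPS: ~166 s (~2.8 min)
echo "$hdd_seconds $raid_seconds"
```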
&lt;br /&gt;
&lt;br /&gt;
But why '''every''' 15 minutes? The file read is performed once, and the file content then resides in the Container pagecache!&amp;lt;br&amp;gt;&lt;br /&gt;
That's true, but here the '''''15-minute''''' period comes into play - and the longer the period, the worse.&amp;lt;br&amp;gt;&lt;br /&gt;
If a Container is active enough, it just keeps reading more and more files: website data, pictures, video clips, fileserver files, etc.&amp;lt;br&amp;gt;&lt;br /&gt;
The thing is, within 15 minutes a Container may well read more than its RAM limit (remember: only 1 GB in our case!), so all the old pagecache is dropped and replaced with fresh data.&amp;lt;br&amp;gt;&lt;br /&gt;
And thus in 15 minutes you may well have to read all those 100 files in each Container from disk again.&lt;br /&gt;
&lt;br /&gt;
=== tcache saves our lives ===&lt;br /&gt;
And here tcache comes to the rescue: let's not completely drop pagecache that is reclaimed from a Container (on '''local''' reclaim),&amp;lt;br&amp;gt;&lt;br /&gt;
but instead save it in a special cache (tcache) on the Host, as long as the Host has free RAM.&lt;br /&gt;
&lt;br /&gt;
And in 15 minutes, when all the Containers start accessing lots of small files again, that file data gets back into the Container pagecache without any reads from the physical disk -&amp;lt;br&amp;gt;&lt;br /&gt;
voila, tcache saves IOPS, and the Node no longer gets stuck.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Q/A section ==&lt;br /&gt;
&lt;br /&gt;
 '''Q:''' Can a Container be so active (i.e. read so much from disk) that this &amp;quot;useful&amp;quot; pagecache is dropped even from tcache?&lt;br /&gt;
 '''A:''' Yes, but tcache extends the &amp;quot;safe&amp;quot; period.&lt;br /&gt;
&lt;br /&gt;
 '''Q:''' Is it in mainstream? LXC/Proxmox?&lt;br /&gt;
 '''A:''' No, it's Virtuozzo/OpenVZ specific.&lt;br /&gt;
    &amp;quot;cleancache&amp;quot; - the base for tcache - is in mainstream and is used for Xen,&lt;br /&gt;
    but we (VZ) wrote a driver for it and use it for Containers as well.&lt;br /&gt;
&lt;br /&gt;
 '''Q:''' I use SSD, not HDD, does tcache help me?&lt;br /&gt;
 '''A:''' An SSD provides far more IOPS, so the Node performance gain from tcache is less significant; still, reading from RAM (tcache lives in RAM) is faster than reading from an SSD.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Managing tcache ==&lt;br /&gt;
Tcache is enabled for all Containers on a Node by default.&lt;br /&gt;
&lt;br /&gt;
=== Boot option ===&lt;br /&gt;
To disable tcache at boot time, add the '''&amp;quot;tcache.enabled=0&amp;quot;''' kernel boot option.&lt;br /&gt;
&lt;br /&gt;
=== Global on-the-fly switch ===&lt;br /&gt;
tcache can be disabled/enabled on the fly using the following commands:&lt;br /&gt;
&lt;br /&gt;
 echo 'N' &amp;gt; /sys/module/tcache/parameters/active&lt;br /&gt;
 echo 'Y' &amp;gt; /sys/module/tcache/parameters/active&lt;br /&gt;
&lt;br /&gt;
=== Per Container switch ===&lt;br /&gt;
To disable tcache on the fly per Container:&lt;br /&gt;
&lt;br /&gt;
 echo 1 &amp;gt; /sys/fs/cgroup/memory/machine.slice/$CTID/memory.disable_cleancache&lt;br /&gt;
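If tcache needs to be turned off for every Container at once, the per-Container knob above can be looped over; a hypothetical sketch (the machine.slice path follows the example above, and the helper name is made up):&lt;br /&gt;

```shell
# Write 1 into memory.disable_cleancache for each Container found under
# the given cgroup root (hypothetical helper, not a stock VZ tool).
disable_tcache_all() {
    local knob
    for knob in "$1"/*/memory.disable_cleancache; do
        [ -e "$knob" ] || continue
        echo 1 | tee "$knob"
    done
}
# Real usage on a VZ7 node would be:
#   disable_tcache_all /sys/fs/cgroup/memory/machine.slice
```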
&lt;br /&gt;
&lt;br /&gt;
[[Category: HOWTO]]&lt;br /&gt;
[[Category: Kernel]]&lt;/div&gt;</summary>
		<author><name>Finist</name></author>
		
	</entry>
	<entry>
		<id>https://wiki.openvz.org/index.php?title=Tcache&amp;diff=23345</id>
		<title>Tcache</title>
		<link rel="alternate" type="text/html" href="https://wiki.openvz.org/index.php?title=Tcache&amp;diff=23345"/>
		<updated>2020-07-23T16:40:02Z</updated>

		<summary type="html">&lt;p&gt;Finist: initial version&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Warning|This article describes Virtuozzo/OpenVZ version 7.}}&lt;br /&gt;
&lt;br /&gt;
== Brief tech explanation ==&lt;br /&gt;
&lt;br /&gt;
Transcendent file cache (tcache) is a driver for [https://www.kernel.org/doc/html/v4.18/vm/cleancache.html cleancache], which stores reclaimed pages in memory unmodified.&amp;lt;br&amp;gt;&lt;br /&gt;
Its purpose is to adopt pages evicted from a memory cgroup on '''local''' pressure (inside a Container), so that they can be fetched back later without costly disk accesses.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Detailed user-level explanation ==&lt;br /&gt;
&lt;br /&gt;
Tcache is intended to increase overall Hardware Node performance only&lt;br /&gt;
on '''undercommitted''' Nodes, i.e. Nodes where the sum of the memory limits of all Containers placed on the Node&lt;br /&gt;
is less than the Hardware Node RAM size.&lt;br /&gt;
&lt;br /&gt;
=== Example use case ===&lt;br /&gt;
You have a Node with 1 TB of RAM and run 500 Containers on it, each limited to 1 GB of memory (no swap, for simplicity).&amp;lt;br&amp;gt;&lt;br /&gt;
Let's consider the Containers to be more or less identical: similar load, similar activity inside.&amp;lt;br&amp;gt;&lt;br /&gt;
=&amp;gt; normally those Containers should use at most 500 GB of physical RAM, and 500 GB will simply sit free on the Node.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
It looks like a simple situation: the Node is underloaded, so let's put more Containers on it. But that's not always true -&amp;lt;br&amp;gt;&lt;br /&gt;
it depends on what the bottleneck on the Node is, which in turn depends on the real workload of the Containers running there.&amp;lt;br&amp;gt;&lt;br /&gt;
Most often, in '''real life''', the '''disk''' becomes the '''bottleneck''' first - not the RAM, not the CPU.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Let's assume all those Containers run, say, cPanel, which by default collects some stats every&amp;lt;br&amp;gt;&lt;br /&gt;
15 minutes or so - the stat collection process is run via crontab.&lt;br /&gt;
&lt;br /&gt;
{{Note|Randomizing the times of crontab jobs is a good idea, but who usually does this for Containers?&lt;br /&gt;
We did it for the application templates we shipped in Virtuozzo, but a lot of software is simply installed and configured inside Containers, where we cannot do this.&amp;lt;br&amp;gt;&lt;br /&gt;
And Hosting Providers are often not allowed to touch data inside Containers - so most often cron jobs are not randomized.}}&lt;br /&gt;
&lt;br /&gt;
OK, it does not matter how, but let's assume we get such a workload: every 15 minutes or so (the important point is that the data is accessed quite rarely),&amp;lt;br&amp;gt;&lt;br /&gt;
each Container accesses many small files. Let it be&lt;br /&gt;
&lt;br /&gt;
* just 100 small files to gather stats and save them somewhere.&lt;br /&gt;
* In 500 Containers. Simultaneously.&lt;br /&gt;
* In parallel with other regular i/o workload.&lt;br /&gt;
* On HDDs.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
It's a nightmare for the disk subsystem: 500 Containers reading 100 files each means 50000 random reads, so if an HDD provides 100 IOPS, it will take 50000/100/60 = 8.(3) minutes(!) to handle.&amp;lt;br&amp;gt;&lt;br /&gt;
OK, there could be a RAID; say it can handle 300 IOPS - that still results in 2.(7) minutes, and we have forgotten about the other regular i/o.&amp;lt;br&amp;gt;&lt;br /&gt;
So every 15 minutes the Node becomes almost unresponsive for several minutes until it handles all the random i/o generated by stats collection.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
But why '''every''' 15 minutes? The file read is performed once, and the file content then resides in the Container pagecache!&amp;lt;br&amp;gt;&lt;br /&gt;
That's true, but here the '''''15-minute''''' period comes into play - and the longer the period, the worse.&amp;lt;br&amp;gt;&lt;br /&gt;
If a Container is active enough, it just keeps reading more and more files: website data, pictures, video clips, fileserver files, etc.&amp;lt;br&amp;gt;&lt;br /&gt;
The thing is, within 15 minutes a Container may well read more than its RAM limit (remember: only 1 GB in our case!), so all the old pagecache is dropped and replaced with fresh data.&amp;lt;br&amp;gt;&lt;br /&gt;
And thus in 15 minutes you may well have to read all those 100 files in each Container from disk again.&lt;br /&gt;
&lt;br /&gt;
=== tcache saves our lives ===&lt;br /&gt;
And here tcache comes to the rescue: let's not completely drop pagecache that is reclaimed from a Container (on '''local''' reclaim),&amp;lt;br&amp;gt;&lt;br /&gt;
but instead save it in a special cache (tcache) on the Host, as long as the Host has free RAM.&lt;br /&gt;
&lt;br /&gt;
And in 15 minutes, when all the Containers start accessing lots of small files again, that file data gets back into the Container pagecache without any reads from the physical disk -&amp;lt;br&amp;gt;&lt;br /&gt;
voila, tcache saves IOPS, and the Node no longer gets stuck.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Q/A section ==&lt;br /&gt;
&lt;br /&gt;
 '''Q:''' Can a Container be so active (i.e. read so much from disk) that this &amp;quot;useful&amp;quot; pagecache is dropped even from tcache?&lt;br /&gt;
 '''A:''' Yes, but tcache extends the &amp;quot;safe&amp;quot; period.&lt;br /&gt;
&lt;br /&gt;
 '''Q:''' Is it in mainstream? LXC/Proxmox?&lt;br /&gt;
 '''A:''' No, it's Virtuozzo/OpenVZ specific.&lt;br /&gt;
    &amp;quot;cleancache&amp;quot; - the base for tcache - is in mainstream and is used for Xen,&lt;br /&gt;
    but we (VZ) wrote a driver for it and use it for Containers as well.&lt;br /&gt;
&lt;br /&gt;
 '''Q:''' I use SSD, not HDD, does tcache help me?&lt;br /&gt;
 '''A:''' SSD can provide much more IOPS, thus the Node's performance increase caused by tcache is less significant, but still reading from RAM (tcache is in RAM) is faster than reading from SSD.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Managing tcache ==&lt;br /&gt;
Tcache is enabled for all Containers on a Node by default.&lt;br /&gt;
&lt;br /&gt;
=== Boot option ===&lt;br /&gt;
To disable tcache at boot time, add the '''&amp;quot;tcache.enabled=0&amp;quot;''' kernel boot option.&lt;br /&gt;
&lt;br /&gt;
=== Global on-the-fly switch ===&lt;br /&gt;
tcache can be disabled/enabled on the fly using the following commands:&lt;br /&gt;
&lt;br /&gt;
 echo 'N' &amp;gt; /sys/module/{tcache,tswap}/parameters/active&lt;br /&gt;
 echo 'Y' &amp;gt; /sys/module/{tcache,tswap}/parameters/active&lt;br /&gt;
&lt;br /&gt;
=== Per Container switch ===&lt;br /&gt;
To disable tcache on the fly per Container:&lt;br /&gt;
&lt;br /&gt;
 echo 1 &amp;gt; /sys/fs/cgroup/memory/machine.slice/$CTID/memory.disable_cleancache&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category: HOWTO]]&lt;br /&gt;
[[Category: Kernel]]&lt;/div&gt;</summary>
		<author><name>Finist</name></author>
		
	</entry>
	<entry>
		<id>https://wiki.openvz.org/index.php?title=Page_cache_isolation&amp;diff=23344</id>
		<title>Page cache isolation</title>
		<link rel="alternate" type="text/html" href="https://wiki.openvz.org/index.php?title=Page_cache_isolation&amp;diff=23344"/>
		<updated>2020-07-23T15:16:01Z</updated>

		<summary type="html">&lt;p&gt;Finist: note about legacy OpenVZ version described&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Warning|This article describes a legacy version of OpenVZ (version 6).&lt;br /&gt;
This feature is absent in OpenVZ 7; the similar feature in OpenVZ 7 is called '''tcache'''.}}&lt;br /&gt;
&lt;br /&gt;
This page describes a new '''strict page cache isolation''' feature, which appeared in the kernel 2.6.32-042stab068.8.&lt;br /&gt;
&lt;br /&gt;
The feature disables bouncing page cache pages between the host and containers on physpages shortage in a container. The internal reclaimer will drop cached data if a container exceeds its physpages limit and pagecache isolation is turned on.&lt;br /&gt;
&lt;br /&gt;
Current state can be obtained by reading the &amp;lt;code&amp;gt;/proc/bc/&amp;lt;id&amp;gt;/debug:pagecache_isolation&amp;lt;/code&amp;gt; file. It is disabled by default.&lt;br /&gt;
&lt;br /&gt;
The following sysctls are available:&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;ubc.pagecache_isolation&amp;lt;/code&amp;gt; = &amp;lt;code&amp;gt;0&amp;lt;/code&amp;gt;|&amp;lt;code&amp;gt;1&amp;lt;/code&amp;gt;&lt;br /&gt;
: To turn on or off isolation for all containers&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;ubc.pagecache_isolation_on&amp;lt;/code&amp;gt; = &amp;lt;&amp;lt;code&amp;gt;id&amp;lt;/code&amp;gt;&amp;gt;&lt;br /&gt;
: To turn on for container &amp;lt;id&amp;gt; (write only)&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;ubc.pagecache_isolation_off&amp;lt;/code&amp;gt; = &amp;lt;&amp;lt;code&amp;gt;id&amp;lt;/code&amp;gt;&amp;gt;&lt;br /&gt;
: To turn off for container &amp;lt;id&amp;gt; (write only)&lt;br /&gt;
&lt;br /&gt;
[[Category: HOWTO]]&lt;br /&gt;
[[Category: Kernel]]&lt;/div&gt;</summary>
		<author><name>Finist</name></author>
		
	</entry>
	<entry>
		<id>https://wiki.openvz.org/index.php?title=VSwap&amp;diff=23338</id>
		<title>VSwap</title>
		<link rel="alternate" type="text/html" href="https://wiki.openvz.org/index.php?title=VSwap&amp;diff=23338"/>
		<updated>2020-07-17T15:57:27Z</updated>

		<summary type="html">&lt;p&gt;Finist: vswap in vz7 details added&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;'''New [[Download/kernel/rhel6|RHEL6-based OpenVZ]] kernel''' has a new memory management model, which supersedes [[UBC|User beancounters]]. It is called '''VSwap'''.&lt;br /&gt;
&lt;br /&gt;
== Primary parameters ==&lt;br /&gt;
&lt;br /&gt;
With VSwap, there are two required parameters: &amp;lt;code&amp;gt;ram&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;swap&amp;lt;/code&amp;gt; (a.k.a. &amp;lt;code&amp;gt;physpages&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;swappages&amp;lt;/code&amp;gt;). All the other beancounters become optional.&lt;br /&gt;
&lt;br /&gt;
* '''physpages'''&lt;br /&gt;
: This parameter sets the amount of fast physical memory (RAM) available to processes inside a container, in memory pages. Currently (as of 042stab042 kernel) the user memory, the kernel memory and the page cache are accounted into &amp;lt;code&amp;gt;physpages&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
: The &amp;lt;code&amp;gt;barrier&amp;lt;/code&amp;gt; is ignored and should be set to 0, and the &amp;lt;code&amp;gt;limit&amp;lt;/code&amp;gt; sets the limit.&lt;br /&gt;
&lt;br /&gt;
* '''ram'''&lt;br /&gt;
: is an easy shortcut for physpages.limit; it is measured in bytes.&lt;br /&gt;
&lt;br /&gt;
* '''swappages'''&lt;br /&gt;
: This parameter sets the amount of &amp;quot;slower memory&amp;quot; (vswap) available to processes inside a container, in memory pages.&lt;br /&gt;
&lt;br /&gt;
: The &amp;lt;code&amp;gt;barrier&amp;lt;/code&amp;gt; is ignored and should be set to 0, and the &amp;lt;code&amp;gt;limit&amp;lt;/code&amp;gt; sets the limit.&lt;br /&gt;
&lt;br /&gt;
* '''swap'''&lt;br /&gt;
: is an easy shortcut for swappages.limit; it is measured in bytes.&lt;br /&gt;
&lt;br /&gt;
The sum of &amp;lt;code&amp;gt;physpages.limit&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;swappages.limit&amp;lt;/code&amp;gt; limits the maximum amount&lt;br /&gt;
of memory which can be used by a container. When physpages limit&lt;br /&gt;
is reached, memory pages belonging to the container are pushed out to&lt;br /&gt;
so called virtual swap (''vswap''). The difference between normal swap&lt;br /&gt;
and vswap is that with vswap no actual disk I/O usually occurs. Instead,&lt;br /&gt;
a container is artificially slowed down, to emulate the effect of the real&lt;br /&gt;
swapping. Actual swap out occurs only if there is a global memory shortage&lt;br /&gt;
on the system.&lt;br /&gt;
&lt;br /&gt;
{{Note|swap used by a container can exceed &amp;lt;code&amp;gt;swappages.limit&amp;lt;/code&amp;gt;, but is always within sum of &amp;lt;code&amp;gt;physpages.limit&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;swappages.limit&amp;lt;/code&amp;gt;.}}&lt;br /&gt;
&lt;br /&gt;
=== Implicit UBC parameters ===&lt;br /&gt;
&lt;br /&gt;
Since vzctl 4.6, if some optional beancounters are not set, vzctl sets them implicitly,&lt;br /&gt;
based on '''ram''' and '''swap''' settings.&lt;br /&gt;
&lt;br /&gt;
The following formulae are used:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;lockedpages_{bar} = oomguarpages_{bar} = ram&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;lockedpages_{lim} = oomguarpages_{lim} = \infty&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;vmguarpages_{bar} = vmguarpages_{lim} = ram + swap&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== VM overcommit and privvmpages ====&lt;br /&gt;
&lt;br /&gt;
vzctl 4.6 adds a new parameter, &amp;lt;code&amp;gt;--vm_overcommit&amp;lt;/code&amp;gt;.&lt;br /&gt;
Its only purpose is to be used in privvmpages calculation,&lt;br /&gt;
in case VSwap is used and there is no explicit setting&lt;br /&gt;
for privvmpages.&lt;br /&gt;
&lt;br /&gt;
If &amp;lt;math&amp;gt;vm\_overcommit&amp;lt;/math&amp;gt; is set:&lt;br /&gt;
&lt;br /&gt;
: &amp;lt;math&amp;gt;privvmpages_{bar} = privvmpages_{lim} = (ram + swap) \times vm\_overcommit&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If it is not set:&lt;br /&gt;
&lt;br /&gt;
: &amp;lt;math&amp;gt;privvmpages_{bar} = privvmpages_{lim} = \infty&amp;lt;/math&amp;gt;&lt;br /&gt;
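The privvmpages formula above can be made concrete with a worked example (the values are invented: 512 MiB ram, 1 GiB swap, vm_overcommit of 2, 4096-byte pages):&lt;br /&gt;

```shell
# privvmpages_bar = privvmpages_lim = (ram + swap) * vm_overcommit,
# converted from bytes to 4 KiB pages, since the beancounter is page-based.
ram=$((512 * 1024 * 1024))          # 512 MiB in bytes
swap=$((1024 * 1024 * 1024))        # 1 GiB in bytes
vm_overcommit=2
privvm_pages=$(( (ram + swap) * vm_overcommit / 4096 ))
echo "$privvm_pages"                # prints 786432
```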
&lt;br /&gt;
== Setting ==&lt;br /&gt;
&lt;br /&gt;
{{Note|for VSwap, you need vswap-enabled kernel, ie [[Download/kernel/rhel6|RHEL6-based OpenVZ]] kernel.}}&lt;br /&gt;
&lt;br /&gt;
Since vzctl 3.0.30, you can use &amp;lt;code&amp;gt;--ram&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;--swap&amp;lt;/code&amp;gt; parameters, like this:&lt;br /&gt;
 &lt;br /&gt;
 vzctl set 777 --ram 512M --swap 1G --save&lt;br /&gt;
&lt;br /&gt;
== Convert non-VSwap CT to VSwap ==&lt;br /&gt;
&lt;br /&gt;
If you have an existing container with usual UBC parameters set, and you want to convert this one into VSwap enabled config, here's what you need to do.&lt;br /&gt;
&lt;br /&gt;
# Decide how much RAM and swap you want this CT to have. Generally, the sum of your new RAM+swap should be more or less equal to the sum of the old PRIVVMPAGES and KMEMSIZE.&lt;br /&gt;
# Manually remove all UBC parameters from the config. '''This is optional'''; you can still have UBC limits applied if you want.&lt;br /&gt;
# Add PHYSPAGES and SWAPPAGES parameters to the config. The easiest way is to use &amp;lt;code&amp;gt;vzctl set $CTID --ram N --swap M --save&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now your config is vswap enabled, and when you (re)start the CT (or use &amp;lt;code&amp;gt;--reset_ub&amp;lt;/code&amp;gt;), the vswap mechanism will be used by the kernel for it.&lt;br /&gt;
&lt;br /&gt;
Here is an example of the above steps:&lt;br /&gt;
&lt;br /&gt;
 CTID=123&lt;br /&gt;
 RAM=1G&lt;br /&gt;
 SWAP=2G&lt;br /&gt;
 CFG=/etc/vz/conf/${CTID}.conf&lt;br /&gt;
 cp $CFG $CFG.pre-vswap&lt;br /&gt;
 grep -Ev '^(KMEMSIZE|LOCKEDPAGES|PRIVVMPAGES|SHMPAGES|NUMPROC|PHYSPAGES|VMGUARPAGES|OOMGUARPAGES|NUMTCPSOCK|NUMFLOCK|NUMPTY|NUMSIGINFO|TCPSNDBUF|TCPRCVBUF|OTHERSOCKBUF|DGRAMRCVBUF|NUMOTHERSOCK|DCACHESIZE|NUMFILE|AVNUMPROC|NUMIPTENT|ORIGIN_SAMPLE|SWAPPAGES)=' &amp;gt; $CFG &amp;lt;  $CFG.pre-vswap&lt;br /&gt;
 vzctl set $CTID --ram $RAM --swap $SWAP --save&lt;br /&gt;
 vzctl set $CTID --reset_ub&lt;br /&gt;
&lt;br /&gt;
== How to distinguish between vswap and non-vswap configs? ==&lt;br /&gt;
&lt;br /&gt;
Both &amp;lt;code&amp;gt;vzctl&amp;lt;/code&amp;gt; and the kernel treat a configuration file as a vswap one if the PHYSPAGES limit is '''not''' set to &amp;lt;code&amp;gt;unlimited&amp;lt;/code&amp;gt; (a.k.a. [[LONG_MAX]]). You can also use the following command:&lt;br /&gt;
&lt;br /&gt;
 # vzlist -o vswap $CTID&lt;br /&gt;
&lt;br /&gt;
In addition, vzctl checks whether the kernel supports vswap, and refuses to start a vswap-enabled container on a non-vswap-capable kernel. The check is the presence of the &amp;lt;code&amp;gt;/proc/vz/vswap&amp;lt;/code&amp;gt; file.&lt;br /&gt;
&lt;br /&gt;
== Details about vSwap implementation in Virtuozzo 7 ==&lt;br /&gt;
&lt;br /&gt;
The Container swapping process is similar to that on a standalone computer.&amp;lt;br&amp;gt;&lt;br /&gt;
This means, in particular, that some pages may end up in swap even if the Container reports some free memory.&amp;lt;br&amp;gt;&lt;br /&gt;
This may legitimately happen when the kernel memory management system detects anonymous memory that Container processes have not touched for a long time and decides that it is more&lt;br /&gt;
efficient to put these anonymous pages into swap and use more caches in the Container instead.&lt;br /&gt;
 &lt;br /&gt;
The Container swap space resides in the physical node's swap file.&amp;lt;br&amp;gt;&lt;br /&gt;
When swap-out for a Container starts, an appropriate number of pages is allocated in the physical swap on the host. Then:&lt;br /&gt;
# if there is no free memory on the host, a real swap-out of the Container's memory to physical swap happens&lt;br /&gt;
# if there is free memory on the host, the Container's memory is saved in a special swap cache in the host's RAM, and no real write to the host's physical swap occurs&lt;br /&gt;
 &lt;br /&gt;
{{Note|The physical swap space is allocated in both cases anyway; this guarantees that all of the host's swap cache holding Container memory can be written to the physical swap on the host if the host runs short of RAM.}}&lt;br /&gt;
 &lt;br /&gt;
* '''Consequence 1''': without a configured node swap file, the Container's `SWAPPAGES` parameter will be ignored.&lt;br /&gt;
 &lt;br /&gt;
* '''Consequence 2''': if the node's swap size is less than the sum of all Containers' swap sizes on the node, the Containers won't be able to use 100% of their swap simultaneously - similar to the RAM settings.&lt;br /&gt;
&lt;br /&gt;
== See also ==&lt;br /&gt;
* [http://openvz.livejournal.com/39644.html On vSwap and 042stab04x kernel improvements]&lt;br /&gt;
* [http://openvz.livejournal.com/39765.html Recent improvements in vzctl]&lt;br /&gt;
&lt;br /&gt;
[[Category: UBC]]&lt;/div&gt;</summary>
		<author><name>Finist</name></author>
		
	</entry>
	<entry>
		<id>https://wiki.openvz.org/index.php?title=VPN_using_Wireguard&amp;diff=23125</id>
		<title>VPN using Wireguard</title>
		<link rel="alternate" type="text/html" href="https://wiki.openvz.org/index.php?title=VPN_using_Wireguard&amp;diff=23125"/>
		<updated>2019-07-11T09:07:01Z</updated>

		<summary type="html">&lt;p&gt;Finist: Warning about WireGuard update procedure.&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This article describes how to use VPN via [https://www.wireguard.com WireGuard] inside a Virtuozzo 7 / OpenVZ 7 Container.&lt;br /&gt;
&lt;br /&gt;
{{warning|&amp;lt;br&amp;gt;&lt;br /&gt;
This article describes a WireGuard configuration in an OpenVZ Container which '''does not survive a WireGuard package update'''.&amp;lt;br&amp;gt;&lt;br /&gt;
After a WireGuard package update you have to repeat the described steps to make WireGuard work again.&amp;lt;br&amp;gt;&lt;br /&gt;
If you wish to have a persistent configuration which survives WireGuard updates, please contact [https://www.virtuozzo.com/support/virtuozzo-professional-services.html '''Virtuozzo Professional Services''']}}&lt;br /&gt;
&lt;br /&gt;
== Install WireGuard on the Host Node ==&lt;br /&gt;
=== Install vzkernel-devel package ===&lt;br /&gt;
Install the vzkernel-devel package for the running kernel on the Host Node.&amp;lt;br&amp;gt;&lt;br /&gt;
It is required for building third-party kernel modules.&lt;br /&gt;
 # yum install -y vzkernel-devel&lt;br /&gt;
&lt;br /&gt;
=== Install WireGuard packages ===&lt;br /&gt;
Virtuozzo 7 is a derivative of RHEL7/CentOS7, so use the corresponding part of the [https://www.wireguard.com/install WireGuard installation instructions].&lt;br /&gt;
&lt;br /&gt;
 # curl -Lo /etc/yum.repos.d/wireguard.repo https://copr.fedorainfracloud.org/coprs/jdoss/wireguard/repo/epel-7/jdoss-wireguard-epel-7.repo&lt;br /&gt;
 # yum install epel-release&lt;br /&gt;
 # yum install wireguard-dkms wireguard-tools&lt;br /&gt;
&lt;br /&gt;
== Allow WireGuard network interfaces inside a Container ==&lt;br /&gt;
Next, we need to patch the wireguard kernel module to allow wireguard network interfaces to be created inside Containers&amp;lt;br&amp;gt;&lt;br /&gt;
(change the path to the wireguard sources if needed):&lt;br /&gt;
 # patch /usr/src/wireguard-0.0.20190601/device.c diff-wireguard-allow-to-run-in-Containers&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
--- ./device.c.orig     2019-07-02 16:05:42.162373405 +0300&lt;br /&gt;
+++ ./device.c  2019-06-10 17:21:27.956413409 +0300&lt;br /&gt;
@@ -281,7 +281,7 @@ static void wg_setup(struct net_device *&lt;br /&gt;
 #else&lt;br /&gt;
        dev-&amp;gt;tx_queue_len = 0;&lt;br /&gt;
 #endif&lt;br /&gt;
-       dev-&amp;gt;features |= NETIF_F_LLTX;&lt;br /&gt;
+       dev-&amp;gt;features |= NETIF_F_LLTX | NETIF_F_VIRTUAL;&lt;br /&gt;
        dev-&amp;gt;features |= WG_NETDEV_FEATURES;&lt;br /&gt;
        dev-&amp;gt;hw_features |= WG_NETDEV_FEATURES;&lt;br /&gt;
        dev-&amp;gt;hw_enc_features |= WG_NETDEV_FEATURES;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
{{Note|Why is this required?&lt;br /&gt;
As Virtuozzo is very keen on security and stability, we do not allow creation of arbitrary unverified network interfaces inside Containers.&amp;lt;br&amp;gt;&lt;br /&gt;
Only those which are safe (verified and considered properly virtualized) are allowed.}}&lt;br /&gt;
&lt;br /&gt;
== Rebuild patched wireguard kernel module ==&lt;br /&gt;
Now we need to rebuild the patched wireguard kernel module.&amp;lt;br&amp;gt;&lt;br /&gt;
dkms has no single command to rebuild a module, so we have to remove and re-add it:&lt;br /&gt;
&lt;br /&gt;
 # dkms remove -m wireguard -v 0.0.20190601 --all&lt;br /&gt;
 # dkms add -m wireguard -v 0.0.20190601&lt;br /&gt;
 # dkms build -m wireguard -v 0.0.20190601&lt;br /&gt;
 # dkms install -m wireguard -v 0.0.20190601&lt;br /&gt;
&lt;br /&gt;
== Load the wireguard kernel module ==&lt;br /&gt;
Now load the wireguard kernel module on the Host Node;&amp;lt;br&amp;gt;&lt;br /&gt;
it will not be loaded automatically upon request from inside a Container.&lt;br /&gt;
 # modprobe wireguard&lt;br /&gt;
&lt;br /&gt;
== Create a Container ==&lt;br /&gt;
Create a Container with a veth network (venet won't work here).&lt;br /&gt;
&lt;br /&gt;
 # vzctl create 200 --ostemplate centos7-x86_64&lt;br /&gt;
 # prlctl set 200 --device-add net --network Bridged --dhcp yes&lt;br /&gt;
 # vzctl start 200&lt;br /&gt;
 # vzctl enter 200&lt;br /&gt;
 // The Container should have an IP assigned now&lt;br /&gt;
&lt;br /&gt;
== Install WireGuard inside the Container ==&lt;br /&gt;
The procedure is the same as installing WireGuard on the Host:&lt;br /&gt;
&lt;br /&gt;
 [CT]# curl -Lo /etc/yum.repos.d/wireguard.repo https://copr.fedorainfracloud.org/coprs/jdoss/wireguard/repo/epel-7/jdoss-wireguard-epel-7.repo&lt;br /&gt;
 [CT]# yum install epel-release&lt;br /&gt;
 [CT]# yum install wireguard-dkms wireguard-tools&lt;br /&gt;
 // it may be enough to install only the &amp;quot;wireguard-tools&amp;quot; package; we did not check&lt;br /&gt;
&lt;br /&gt;
Now configure wireguard inside the Container using the instructions from the [https://www.wireguard.com/quickstart WireGuard quickstart].&lt;br /&gt;
&lt;br /&gt;
== Allow WireGuard port(s) in firewall ==&lt;br /&gt;
Don't forget to open the WireGuard UDP port on each end Node/Container.&amp;lt;br&amp;gt;&lt;br /&gt;
WireGuard supports only UDP at the moment.&amp;lt;br&amp;gt;&lt;br /&gt;
The port number can be checked with:&lt;br /&gt;
 [CT]# wg | grep listening&lt;br /&gt;
   listening port: 35849&lt;br /&gt;
&lt;br /&gt;
 [CT]# firewall-cmd --permanent --zone=public --add-port=35849/udp     &lt;br /&gt;
 success&lt;br /&gt;
 [CT]# firewall-cmd --reload&lt;br /&gt;
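The port lookup and the firewall-cmd call above can be glued together in a script; a sketch that parses the sample wg output line (the port is just the example value from this page):&lt;br /&gt;

```shell
# Extract the third field of the "listening port: NNNNN" line.
# The sample line is hard-coded here; on a real system you would use:
#   wg_line=$(wg | grep listening)
wg_line='  listening port: 35849'
port=$(printf '%s\n' "$wg_line" | awk '{print $3}')
echo "$port"                        # prints 35849
# Then open it:
#   firewall-cmd --permanent --zone=public --add-port="${port}/udp"
```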
&lt;br /&gt;
Do the same on the other Node/Container and voila!&lt;br /&gt;
&lt;br /&gt;
[[Category: HOWTO]]&lt;br /&gt;
[[Category: Networking]]&lt;/div&gt;</summary>
		<author><name>Finist</name></author>
		
	</entry>
	<entry>
		<id>https://wiki.openvz.org/index.php?title=VPN_using_Wireguard&amp;diff=23123</id>
		<title>VPN using Wireguard</title>
		<link rel="alternate" type="text/html" href="https://wiki.openvz.org/index.php?title=VPN_using_Wireguard&amp;diff=23123"/>
		<updated>2019-07-04T12:26:30Z</updated>

		<summary type="html">&lt;p&gt;Finist: insert patch content right into the page&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This article describes how to use VPN via [https://www.wireguard.com WireGuard] inside a Virtuozzo 7 / OpenVZ 7 Container.&lt;br /&gt;
&lt;br /&gt;
== Install WireGuard on the Host Node ==&lt;br /&gt;
=== Install vzkernel-devel package ===&lt;br /&gt;
Install the vzkernel-devel package matching the running kernel on the Host Node.&amp;lt;br&amp;gt;&lt;br /&gt;
It's required for building third-party kernel modules.&lt;br /&gt;
 # yum install -y vzkernel-devel&lt;br /&gt;
&lt;br /&gt;
=== Install WireGuard packages ===&lt;br /&gt;
Virtuozzo 7 is a derivative of RHEL7/CentOS7, so use the corresponding part of the [https://www.wireguard.com/install WireGuard installation instructions].&lt;br /&gt;
&lt;br /&gt;
 # curl -Lo /etc/yum.repos.d/wireguard.repo https://copr.fedorainfracloud.org/coprs/jdoss/wireguard/repo/epel-7/jdoss-wireguard-epel-7.repo&lt;br /&gt;
 # yum install epel-release&lt;br /&gt;
 # yum install wireguard-dkms wireguard-tools&lt;br /&gt;
&lt;br /&gt;
== Allow WireGuard network interfaces inside a Container ==&lt;br /&gt;
Next, we need to patch the wireguard kernel module to allow wireguard network interfaces to be created inside Containers:&amp;lt;br&amp;gt;&lt;br /&gt;
(adjust the path to the wireguard sources if needed)&lt;br /&gt;
 # patch /usr/src/wireguard-0.0.20190601/device.c diff-wireguard-allow-to-run-in-Containers&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
--- ./device.c.orig     2019-07-02 16:05:42.162373405 +0300&lt;br /&gt;
+++ ./device.c  2019-06-10 17:21:27.956413409 +0300&lt;br /&gt;
@@ -281,7 +281,7 @@ static void wg_setup(struct net_device *&lt;br /&gt;
 #else&lt;br /&gt;
        dev-&amp;gt;tx_queue_len = 0;&lt;br /&gt;
 #endif&lt;br /&gt;
-       dev-&amp;gt;features |= NETIF_F_LLTX;&lt;br /&gt;
+       dev-&amp;gt;features |= NETIF_F_LLTX | NETIF_F_VIRTUAL;&lt;br /&gt;
        dev-&amp;gt;features |= WG_NETDEV_FEATURES;&lt;br /&gt;
        dev-&amp;gt;hw_features |= WG_NETDEV_FEATURES;&lt;br /&gt;
        dev-&amp;gt;hw_enc_features |= WG_NETDEV_FEATURES;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
{{Note|Why is this required?&lt;br /&gt;
As Virtuozzo is very keen on security and stability, we don't allow creation of any unverified network interface inside Containers.&amp;lt;br&amp;gt;&lt;br /&gt;
Only those which are safe (verified and considered properly virtualized) are allowed.}}&lt;br /&gt;
&lt;br /&gt;
== Rebuild patched wireguard kernel module ==&lt;br /&gt;
Now we need to rebuild the patched wireguard kernel module.&amp;lt;br&amp;gt;&lt;br /&gt;
dkms does not have a command to rebuild a module, so we have to remove and re-add it.&lt;br /&gt;
&lt;br /&gt;
 # dkms remove -m wireguard -v 0.0.20190601 --all&lt;br /&gt;
 # dkms add -m wireguard -v 0.0.20190601&lt;br /&gt;
 # dkms build -m wireguard -v 0.0.20190601&lt;br /&gt;
 # dkms install -m wireguard -v 0.0.20190601&lt;br /&gt;
&lt;br /&gt;
== Load the wireguard kernel module ==&lt;br /&gt;
Now load the wireguard kernel module on the Host Node,&amp;lt;br&amp;gt;&lt;br /&gt;
as it won't be loaded automatically upon request from inside a Container.&lt;br /&gt;
 # modprobe wireguard&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Create a Container ==&lt;br /&gt;
Create a Container with a veth-based network (venet won't work here).&lt;br /&gt;
&lt;br /&gt;
 # vzctl create 200 --ostemplate centos7-x86_64&lt;br /&gt;
 # prlctl set 200 --device-add net --network Bridged --dhcp yes&lt;br /&gt;
 # vzctl start 200&lt;br /&gt;
 # vzctl enter 200&lt;br /&gt;
 // The Container should have an IP assigned now&lt;br /&gt;
&lt;br /&gt;
== Install WireGuard inside the Container ==&lt;br /&gt;
The procedure is the same as installing wireguard on the Host:&lt;br /&gt;
&lt;br /&gt;
 [CT]# curl -Lo /etc/yum.repos.d/wireguard.repo https://copr.fedorainfracloud.org/coprs/jdoss/wireguard/repo/epel-7/jdoss-wireguard-epel-7.repo&lt;br /&gt;
 [CT]# yum install epel-release&lt;br /&gt;
 [CT]# yum install wireguard-dkms wireguard-tools&lt;br /&gt;
 // it may be enough to install only the &amp;quot;wireguard-tools&amp;quot; package; this has not been verified&lt;br /&gt;
&lt;br /&gt;
Now configure wireguard inside the Container using instructions from [https://www.wireguard.com/quickstart WireGuard quickstart]&lt;br /&gt;
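For orientation, a minimal point-to-point tunnel configuration (/etc/wireguard/wg0.conf) might look like the sketch below; all keys, addresses and the endpoint are placeholders rather than values from this article, with real keys generated via wg genkey and wg pubkey.&lt;br /&gt;

```ini
[Interface]
; placeholder values - substitute your own generated keys and addressing
PrivateKey = CONTAINER_PRIVATE_KEY
Address = 10.0.0.2/24
ListenPort = 51820

[Peer]
PublicKey = REMOTE_PEER_PUBLIC_KEY
Endpoint = peer.example.com:51820
AllowedIPs = 10.0.0.0/24
```

Bring the tunnel up with wg-quick up wg0 and check for a handshake with wg show.&lt;br /&gt;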
&lt;br /&gt;
== Allow WireGuard port(s) in firewall ==&lt;br /&gt;
Don't forget to open the wireguard UDP port on each end Node/Container.&amp;lt;br&amp;gt;&lt;br /&gt;
WireGuard currently supports UDP only.&amp;lt;br&amp;gt;&lt;br /&gt;
The port number can be checked via:&lt;br /&gt;
 [CT]# wg | grep listening&lt;br /&gt;
   listening port: 35849&lt;br /&gt;
&lt;br /&gt;
 [CT]# firewall-cmd --permanent --zone=public --add-port=35849/udp     &lt;br /&gt;
 success&lt;br /&gt;
 [CT]# firewall-cmd --reload&lt;br /&gt;
&lt;br /&gt;
Do the same on another Node/Container and voila!&lt;br /&gt;
&lt;br /&gt;
[[Category: HOWTO]]&lt;br /&gt;
[[Category: Networking]]&lt;/div&gt;</summary>
		<author><name>Finist</name></author>
		
	</entry>
	<entry>
		<id>https://wiki.openvz.org/index.php?title=VPN_using_Wireguard&amp;diff=23122</id>
		<title>VPN using Wireguard</title>
		<link rel="alternate" type="text/html" href="https://wiki.openvz.org/index.php?title=VPN_using_Wireguard&amp;diff=23122"/>
		<updated>2019-07-04T12:13:01Z</updated>

		<summary type="html">&lt;p&gt;Finist: initial commit&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This article describes how to use VPN via [https://www.wireguard.com WireGuard] inside a Virtuozzo 7 / OpenVZ 7 Container.&lt;br /&gt;
&lt;br /&gt;
== Install WireGuard on the Host Node ==&lt;br /&gt;
=== Install vzkernel-devel package ===&lt;br /&gt;
Install the vzkernel-devel package matching the running kernel on the Host Node.&amp;lt;br&amp;gt;&lt;br /&gt;
It's required for building third-party kernel modules.&lt;br /&gt;
 # yum install -y vzkernel-devel&lt;br /&gt;
&lt;br /&gt;
=== Install WireGuard packages ===&lt;br /&gt;
Virtuozzo 7 is a derivative of RHEL7/CentOS7, so use the corresponding part of the [https://www.wireguard.com/install WireGuard installation instructions].&lt;br /&gt;
&lt;br /&gt;
 # curl -Lo /etc/yum.repos.d/wireguard.repo https://copr.fedorainfracloud.org/coprs/jdoss/wireguard/repo/epel-7/jdoss-wireguard-epel-7.repo&lt;br /&gt;
 # yum install epel-release&lt;br /&gt;
 # yum install wireguard-dkms wireguard-tools&lt;br /&gt;
&lt;br /&gt;
== Allow WireGuard network interfaces inside a Container ==&lt;br /&gt;
Next, we need to patch the wireguard kernel module to allow wireguard network interfaces to be created inside Containers:&amp;lt;br&amp;gt;&lt;br /&gt;
(adjust the path to the wireguard sources if needed)&lt;br /&gt;
 # patch /usr/src/wireguard-0.0.20190601/device.c diff-wireguard-allow-to-run-in-Containers&lt;br /&gt;
&lt;br /&gt;
{{Note|Why is this required?&lt;br /&gt;
As Virtuozzo is very keen on security and stability, we don't allow creation of any unverified network interface inside Containers.&amp;lt;br&amp;gt;&lt;br /&gt;
Only those which are safe (verified and considered properly virtualized) are allowed.}}&lt;br /&gt;
&lt;br /&gt;
== Rebuild patched wireguard kernel module ==&lt;br /&gt;
Now we need to rebuild the patched wireguard kernel module.&amp;lt;br&amp;gt;&lt;br /&gt;
dkms does not have a command to rebuild a module, so we have to remove and re-add it.&lt;br /&gt;
&lt;br /&gt;
 # dkms remove -m wireguard -v 0.0.20190601 --all&lt;br /&gt;
 # dkms add -m wireguard -v 0.0.20190601&lt;br /&gt;
 # dkms build -m wireguard -v 0.0.20190601&lt;br /&gt;
 # dkms install -m wireguard -v 0.0.20190601&lt;br /&gt;
&lt;br /&gt;
== Load the wireguard kernel module ==&lt;br /&gt;
Now load the wireguard kernel module on the Host Node,&amp;lt;br&amp;gt;&lt;br /&gt;
as it won't be loaded automatically upon request from inside a Container.&lt;br /&gt;
 # modprobe wireguard&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Create a Container ==&lt;br /&gt;
Create a Container with a veth-based network (venet won't work here).&lt;br /&gt;
&lt;br /&gt;
 # vzctl create 200 --ostemplate centos7-x86_64&lt;br /&gt;
 # prlctl set 200 --device-add net --network Bridged --dhcp yes&lt;br /&gt;
 # vzctl start 200&lt;br /&gt;
 # vzctl enter 200&lt;br /&gt;
 // The Container should have an IP assigned now&lt;br /&gt;
&lt;br /&gt;
== Install WireGuard inside the Container ==&lt;br /&gt;
The procedure is the same as installing wireguard on the Host:&lt;br /&gt;
&lt;br /&gt;
 [CT]# curl -Lo /etc/yum.repos.d/wireguard.repo https://copr.fedorainfracloud.org/coprs/jdoss/wireguard/repo/epel-7/jdoss-wireguard-epel-7.repo&lt;br /&gt;
 [CT]# yum install epel-release&lt;br /&gt;
 [CT]# yum install wireguard-dkms wireguard-tools&lt;br /&gt;
 // it may be enough to install only the &amp;quot;wireguard-tools&amp;quot; package; this has not been verified&lt;br /&gt;
&lt;br /&gt;
Now configure wireguard inside the Container using instructions from [https://www.wireguard.com/quickstart WireGuard quickstart]&lt;br /&gt;
&lt;br /&gt;
== Allow WireGuard port(s) in firewall ==&lt;br /&gt;
Don't forget to open the wireguard UDP port on each end Node/Container.&amp;lt;br&amp;gt;&lt;br /&gt;
WireGuard currently supports UDP only.&amp;lt;br&amp;gt;&lt;br /&gt;
The port number can be checked via:&lt;br /&gt;
 [CT]# wg | grep listening&lt;br /&gt;
   listening port: 35849&lt;br /&gt;
&lt;br /&gt;
 [CT]# firewall-cmd --permanent --zone=public --add-port=35849/udp     &lt;br /&gt;
 success&lt;br /&gt;
 [CT]# firewall-cmd --reload&lt;br /&gt;
&lt;br /&gt;
Do the same on another Node/Container and voila!&lt;br /&gt;
&lt;br /&gt;
[[Category: HOWTO]]&lt;br /&gt;
[[Category: Networking]]&lt;/div&gt;</summary>
		<author><name>Finist</name></author>
		
	</entry>
	<entry>
		<id>https://wiki.openvz.org/index.php?title=Docker_inside_CT&amp;diff=22636</id>
		<title>Docker inside CT</title>
		<link rel="alternate" type="text/html" href="https://wiki.openvz.org/index.php?title=Docker_inside_CT&amp;diff=22636"/>
		<updated>2017-05-22T06:55:30Z</updated>

		<summary type="html">&lt;p&gt;Finist: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Since OpenVZ kernel [[Download/kernel/rhel6-testing/042stab105.4|042stab105.4]] it is possible to run Docker inside containers. This article describes how.&lt;br /&gt;
&amp;lt;br&amp;gt;'''This page is applicable for OpenVZ 6''' (for Virtuozzo 7 see [[Docker inside CT vz7| '''here''']]).&lt;br /&gt;
&lt;br /&gt;
== Prerequisites ==&lt;br /&gt;
&lt;br /&gt;
* Kernel 042stab105.4 or later version&lt;br /&gt;
* Kernel modules '''tun''', '''veth''' and '''bridge''' loaded on the host (not required since vzctl 4.9, which loads them automatically)&lt;br /&gt;
&lt;br /&gt;
== Container creation and tuning ==&lt;br /&gt;
&lt;br /&gt;
* Create CentOS 7 container with enough disk space:&lt;br /&gt;
 vzctl create $veid --ostemplate centos-7-x86_64 --diskspace 20G&lt;br /&gt;
* Turn on bridge feature to allow docker creating bridged network:&lt;br /&gt;
 vzctl set $veid --features bridge:on --save&lt;br /&gt;
* Setup Container veth-based network:&lt;br /&gt;
 vzctl set $veid --netif_add eth0 --save&lt;br /&gt;
* Allow all iptables modules to be used in containers:&lt;br /&gt;
 vzctl set $veid --netfilter full --save&lt;br /&gt;
* Enable tun device access for container:&lt;br /&gt;
 vzctl set $veid --devnodes net/tun:rw --save&lt;br /&gt;
* Configure custom cgroups in systemd:&lt;br /&gt;
: &amp;lt;small&amp;gt;''systemd reads /proc/cgroups and mounts every cgroup enabled there; it does not know about the restriction that inside a container only the combined freezer,devices and cpuacct,cpu,cpuset hierarchies can be mounted, not freezer, cpu etc. separately''&amp;lt;/small&amp;gt;&lt;br /&gt;
 vzctl mount $veid&lt;br /&gt;
 echo &amp;quot;JoinControllers=cpu,cpuacct,cpuset freezer,devices&amp;quot; &amp;gt;&amp;gt; /vz/root/$veid/etc/systemd/system.conf&lt;br /&gt;
* Start the container:&lt;br /&gt;
 vzctl start $veid&lt;br /&gt;
* If you use Debian Wheezy for your CT which does not support systemd, you can run:&lt;br /&gt;
 mount -t tmpfs tmpfs /sys/fs/cgroup&lt;br /&gt;
 mkdir /sys/fs/cgroup/freezer,devices&lt;br /&gt;
 mount -t cgroup cgroup /sys/fs/cgroup/freezer,devices -o freezer,devices&lt;br /&gt;
 mkdir /sys/fs/cgroup/cpu,cpuacct,cpuset&lt;br /&gt;
 mount -t cgroup cgroup /sys/fs/cgroup/cpu,cpuacct,cpuset/ -o cpu,cpuacct,cpuset&lt;br /&gt;
&lt;br /&gt;
== Prepare Docker in container == &lt;br /&gt;
&lt;br /&gt;
These steps are to be performed inside the container.&lt;br /&gt;
&lt;br /&gt;
* Install Docker:&lt;br /&gt;
 yum -y install docker-io&lt;br /&gt;
* Start docker daemon&lt;br /&gt;
 dockerd -s vfs&lt;br /&gt;
or change line in /etc/sysconfig/docker to:&lt;br /&gt;
 OPTIONS='--selinux-enabled -s vfs'&lt;br /&gt;
and&lt;br /&gt;
 service docker start&lt;br /&gt;
&lt;br /&gt;
== Example usage ==&lt;br /&gt;
&lt;br /&gt;
=== Wordpress ===&lt;br /&gt;
&lt;br /&gt;
Use Docker to start Wordpress (official, standard way).&lt;br /&gt;
&lt;br /&gt;
* Start mysql docker:&lt;br /&gt;
 docker run --name test-mysql -e MYSQL_ROOT_PASSWORD=123 -d mysql&lt;br /&gt;
* Start wordpress:&lt;br /&gt;
 docker run --name test-wordpress --link test-mysql:mysql -p 8080:80 -d wordpress&lt;br /&gt;
* Access wordpress server by container IP and port 8080: &amp;lt;pre&amp;gt;&amp;lt;nowiki&amp;gt;http://container_ip:8080&amp;lt;/nowiki&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Limitations ==&lt;br /&gt;
&lt;br /&gt;
* Only the &amp;quot;vfs&amp;quot; Docker graph driver is currently supported&lt;br /&gt;
* [[Checkpointing and live migration]] of a container with Docker containers inside is not supported&lt;br /&gt;
* Bridges cannot be created inside Docker containers running inside an OpenVZ container&lt;br /&gt;
* Only works with Docker versions 1.10 or older. Newer versions return an error: &amp;quot;Your Linux kernel version 2.6.32-042stab123.2 is not supported for running docker. Please upgrade your kernel to 3.10.0 or newer.&amp;quot; (i.e. switch to [[Quick_installation|Virtuozzo 7]] or later)&lt;br /&gt;
&lt;br /&gt;
== See also ==&lt;br /&gt;
* [http://www.youtube.com/watch?v=rh4oPpLtdYc Docker inside CT demo video].&lt;br /&gt;
&lt;br /&gt;
[[Category:HOWTO]]&lt;br /&gt;
[[Category: TRD]]&lt;/div&gt;</summary>
		<author><name>Finist</name></author>
		
	</entry>
	<entry>
		<id>https://wiki.openvz.org/index.php?title=Quick_installation&amp;diff=22635</id>
		<title>Quick installation</title>
		<link rel="alternate" type="text/html" href="https://wiki.openvz.org/index.php?title=Quick_installation&amp;diff=22635"/>
		<updated>2017-05-22T06:52:51Z</updated>

		<summary type="html">&lt;p&gt;Finist: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Virtuozzo}}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;translate&amp;gt;&lt;br /&gt;
&amp;lt;!--T:1--&amp;gt;&lt;br /&gt;
{{Note|See [[Quick installation (legacy)]] if you are looking to install the legacy version of OpenVZ.}}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--T:2--&amp;gt;&lt;br /&gt;
This document briefly describes the steps needed to install Virtuozzo 7 on your machine.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--T:3--&amp;gt;&lt;br /&gt;
There are a few ways to install Virtuozzo:&lt;br /&gt;
&lt;br /&gt;
== Bare-metal installation == &amp;lt;!--T:4--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--T:5--&amp;gt;&lt;br /&gt;
The OpenVZ project builds its own Linux distribution with both hypervisor and container virtualization.&lt;br /&gt;
It ships with [[Download/kernel/rhel7-testing|our custom kernel]], the OpenVZ management utilities, [[QEMU]] and the Virtuozzo installer. It is highly recommended to run OpenVZ containers and virtual machines from this Virtuozzo installation image. See [[Virtuozzo]].&lt;br /&gt;
[https://download.openvz.org/virtuozzo/releases/7.0/x86_64/iso/ Download] installation ISO image.&lt;br /&gt;
&lt;br /&gt;
== Using Virtuozzo in the Vagrant box == &amp;lt;!--T:6--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--T:7--&amp;gt;&lt;br /&gt;
[https://www.vagrantup.com/ Vagrant] is a tool for creating reproducible and portable development environments.&lt;br /&gt;
It is easy to bring up a Virtuozzo environment using Vagrant:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--T:8--&amp;gt;&lt;br /&gt;
* Download and [https://docs.vagrantup.com/v2/installation/ install Vagrant]&lt;br /&gt;
* Download and install [https://www.virtualbox.org/wiki/Downloads Virtualbox], Parallels Desktop, VMware Fusion or VMware Workstation. Please note that you need to enable nested virtualization support in your hypervisor to run virtual machines on Virtuozzo 7. VirtualBox currently does not officially support nested virtualization.&lt;br /&gt;
* Download [https://atlas.hashicorp.com/OpenVZ/boxes/Virtuozzo-7.0 Virtuozzo box]:&lt;br /&gt;
&lt;br /&gt;
   &amp;lt;!--T:9--&amp;gt;&lt;br /&gt;
$ vagrant init OpenVZ/Virtuozzo-7.0&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--T:10--&amp;gt;&lt;br /&gt;
* Run box:&lt;br /&gt;
&lt;br /&gt;
   &amp;lt;!--T:11--&amp;gt;&lt;br /&gt;
$ vagrant up --provider virtualbox&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--T:12--&amp;gt;&lt;br /&gt;
and in case of VMware hypervisor:&lt;br /&gt;
&lt;br /&gt;
   &amp;lt;!--T:13--&amp;gt;&lt;br /&gt;
$ vagrant up --provider vmware_desktop&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--T:14--&amp;gt;&lt;br /&gt;
and in case of Parallels hypervisor:&lt;br /&gt;
&lt;br /&gt;
   &amp;lt;!--T:15--&amp;gt;&lt;br /&gt;
$ vagrant up --provider parallels&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--T:16--&amp;gt;&lt;br /&gt;
* Attach to console:&lt;br /&gt;
&lt;br /&gt;
   &amp;lt;!--T:17--&amp;gt;&lt;br /&gt;
$ vagrant ssh&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--T:18--&amp;gt;&lt;br /&gt;
* Use ''vagrant/vagrant'' to log in to the box&lt;br /&gt;
&lt;br /&gt;
== Using Virtuozzo in the Amazon EC2 == &amp;lt;!--T:19--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--T:20--&amp;gt;&lt;br /&gt;
Follow steps in [[Using Virtuozzo in the Amazon EC2]].&lt;br /&gt;
&lt;br /&gt;
== Using Virtuozzo == &amp;lt;!--T:34--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--T:35--&amp;gt;&lt;br /&gt;
Page with [[screencasts]] shows demo with a few Virtuozzo commands. Feel free to add more.&lt;br /&gt;
&lt;br /&gt;
== See also == &amp;lt;!--T:36--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--T:37--&amp;gt;&lt;br /&gt;
* [https://docs.openvz.org/ Official Virtuozzo documentation]&lt;br /&gt;
&amp;lt;/translate&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Category: Installation]]&lt;br /&gt;
[[Category: HOWTO]]&lt;/div&gt;</summary>
		<author><name>Finist</name></author>
		
	</entry>
	<entry>
		<id>https://wiki.openvz.org/index.php?title=IO_statistics&amp;diff=22628</id>
		<title>IO statistics</title>
		<link rel="alternate" type="text/html" href="https://wiki.openvz.org/index.php?title=IO_statistics&amp;diff=22628"/>
		<updated>2017-05-12T10:52:21Z</updated>

		<summary type="html">&lt;p&gt;Finist: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page describes the IO statistics collected at the IO-scheduler level, which reflect the container's real work with the disks. This is different from what is shown by [[IO accounting]].&lt;br /&gt;
&lt;br /&gt;
== Kernel interface  ==&lt;br /&gt;
&lt;br /&gt;
The stats are reported via proc files and are available in kernels starting from 028stab069.1.&lt;br /&gt;
&lt;br /&gt;
=== Files ===&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;/proc/bc/$id/iostat&amp;lt;/code&amp;gt;&lt;br /&gt;
: statistics for beancounter $id&lt;br /&gt;
* &amp;lt;code&amp;gt;/proc/bc/iostat&amp;lt;/code&amp;gt; &lt;br /&gt;
: statistics for all beancounters&lt;br /&gt;
&lt;br /&gt;
=== Format ===&lt;br /&gt;
&lt;br /&gt;
Each file contains one row per disk-beancounter pair.&lt;br /&gt;
&lt;br /&gt;
Columns are:&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! N	!! name	!! type	!! description&lt;br /&gt;
|-&lt;br /&gt;
| 1	|| disk	|| string	|| Disk device name, e.g. sda or hda, or a special queue (like fuse or flush)&lt;br /&gt;
|-&lt;br /&gt;
| 2	|| ub id || integer	|| Beancounter id&lt;br /&gt;
|-&lt;br /&gt;
| 3	|| state || char	|| currently unused (always '.')&lt;br /&gt;
|-&lt;br /&gt;
| 4	|| busy queues	|| integer	|| The number of queues with requests (see below)&lt;br /&gt;
|-&lt;br /&gt;
| 5	|| on dispatch	|| integer	|| currently unused (always '0')&lt;br /&gt;
|-&lt;br /&gt;
| 6	|| activations count	|| integer	|| currently unused (always '0')&lt;br /&gt;
|-&lt;br /&gt;
| 7	|| wait time	 || integer	 || Total time in waiting state in milliseconds&lt;br /&gt;
|-&lt;br /&gt;
| 8	|| used time	 || integer	 || Total time in active state in milliseconds.&lt;br /&gt;
|-&lt;br /&gt;
| 9	|| requests completed	|| integer	|| The number of completed requests&lt;br /&gt;
|-&lt;br /&gt;
| 10	|| sectors transferred	 || integer	|| The number of 512-byte sectors transferred (includes both reads and writes)&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
New columns might be added at the end of a row in the future.&lt;br /&gt;
&lt;br /&gt;
Separate rows exist for fuse and flush, which only report the request and sector stats (the other columns are always 0).&lt;br /&gt;
&lt;br /&gt;
Example of parsing code: parse_proc_iostat() function in [https://src.openvz.org/projects/OVZ/repos/vzstat/browse/src/vzstat.c vzstat.c]&lt;br /&gt;
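As a quick illustration of the column layout, the relevant fields of a row can be pulled out with standard tools; the row below is a made-up sample, not real output.&lt;br /&gt;

```shell
# Columns 7-10 of an iostat row: wait time (ms), used time (ms),
# completed requests, transferred 512-byte sectors.
line='sda 100 . 4 0 0 9000 1843380 245216 55845488'
out=$(echo "$line" | awk '{ printf "wait=%sms used=%sms requests=%s sectors=%s", $7, $8, $9, $10 }')
echo "$out"
```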
&lt;br /&gt;
=== I/O schedulers ===&lt;br /&gt;
Check available/active I/O schedulers for block device &amp;quot;sda&amp;quot;:&lt;br /&gt;
 # cat /sys/block/sda/queue/scheduler&lt;br /&gt;
 noop deadline [cfq]&lt;br /&gt;
&lt;br /&gt;
* for &amp;quot;cfq&amp;quot; I/O scheduler: a separate block device line is added in iostat proc file&lt;br /&gt;
 # cat /proc/bc/100/iostat&lt;br /&gt;
 flush 100 . 0 0 0 0 0 7389 1893968 0 0&lt;br /&gt;
 fuse 100 . 0 0 0 0 0 0 0 0 0&lt;br /&gt;
 sda 100 . 0 0 0 9000 1843380 245216 55845488 245028 188&lt;br /&gt;
&lt;br /&gt;
* for &amp;quot;deadline&amp;quot; I/O scheduler: no additional per-device line is added, iops counters for such devices are added to &amp;quot;flush&amp;quot; line counters (iops limit works)&lt;br /&gt;
&lt;br /&gt;
* for &amp;quot;noop&amp;quot; I/O scheduler: iops are not counted (iops limit does not work)&lt;br /&gt;
&lt;br /&gt;
* for devices with no I/O scheduler (like logical devices, ceph rbd devices, etc): iops are not counted (iops limit does not work)&lt;br /&gt;
 # cat /sys/block/dm-0/queue/scheduler&lt;br /&gt;
 none&lt;br /&gt;
&lt;br /&gt;
 # cat /sys/block/rbd0/queue/scheduler&lt;br /&gt;
 none&lt;br /&gt;
&lt;br /&gt;
=== Queues ===&lt;br /&gt;
&lt;br /&gt;
Each beancounter may have many queues with requests. Typically there's one queue per task for synchronous requests (e.g. reads) and a fixed number of queues per beancounter for asynchronous requests (e.g. cached writes).&lt;br /&gt;
&lt;br /&gt;
== Interpretation ==&lt;br /&gt;
&lt;br /&gt;
=== Disk usage times ===&lt;br /&gt;
&lt;br /&gt;
The disk usage should be reported in a top-like style. Consider the following code:&lt;br /&gt;
&lt;br /&gt;
 read_iostat(&amp;amp;a);&lt;br /&gt;
 sleep(interval);&lt;br /&gt;
 read_iostat(&amp;amp;b);&lt;br /&gt;
&lt;br /&gt;
Now the following numbers should be calculated and shown.&lt;br /&gt;
&lt;br /&gt;
 active  = sum(b.used_time - a.used_time) * 100 / interval;&lt;br /&gt;
 waiting = sum(b.wait_time - a.wait_time) * 100 / interval;&lt;br /&gt;
 idle    = 100 - (active + waiting);&lt;br /&gt;
&lt;br /&gt;
The &amp;lt;code&amp;gt;sum&amp;lt;/code&amp;gt; function sums up the times over all disks for the beancounter.&lt;br /&gt;
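The same calculation as a shell sketch, using made-up counter values sampled 10 seconds apart (times and the interval are all in milliseconds, so the results are percentages).&lt;br /&gt;

```shell
a_used=1000; a_wait=200   # first read: used/wait time in ms
b_used=1500; b_wait=700   # second read, taken 10 s later
interval=10000            # sampling interval in ms

active=$(( (b_used - a_used) * 100 / interval ))
waiting=$(( (b_wait - a_wait) * 100 / interval ))
idle=$(( 100 - active - waiting ))
echo "active=${active}% waiting=${waiting}% idle=${idle}%"
```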
&lt;br /&gt;
Additionally, two more values should be shown for each beancounter.&lt;br /&gt;
&lt;br /&gt;
=== IO speed ===&lt;br /&gt;
&lt;br /&gt;
The value&lt;br /&gt;
&lt;br /&gt;
 sum(b.transfered_sectors - a.transfered_sectors) * 512 / interval&lt;br /&gt;
&lt;br /&gt;
denotes the speed of the IO performed by the beancounter.&lt;br /&gt;
&lt;br /&gt;
=== Average request size ===&lt;br /&gt;
&lt;br /&gt;
The value&lt;br /&gt;
&lt;br /&gt;
 (b.transfered_sectors - a.transfered_sectors)/(b.requests_completed - a.requests_completed)&lt;br /&gt;
&lt;br /&gt;
denotes the average request size for a beancounter to a particular disk.&lt;br /&gt;
&lt;br /&gt;
== See also ==&lt;br /&gt;
* [[IO accounting]]&lt;br /&gt;
* [[I/O priorities]]&lt;br /&gt;
* [[I/O limits]]&lt;/div&gt;</summary>
		<author><name>Finist</name></author>
		
	</entry>
	<entry>
		<id>https://wiki.openvz.org/index.php?title=IO_statistics&amp;diff=22627</id>
		<title>IO statistics</title>
		<link rel="alternate" type="text/html" href="https://wiki.openvz.org/index.php?title=IO_statistics&amp;diff=22627"/>
		<updated>2017-05-12T10:32:31Z</updated>

		<summary type="html">&lt;p&gt;Finist: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page describes the IO statistics collected at the IO-scheduler level, which reflect the container's real work with the disks. This is different from what is shown by [[IO accounting]].&lt;br /&gt;
&lt;br /&gt;
== Kernel interface  ==&lt;br /&gt;
&lt;br /&gt;
The stats are reported via proc files and are available in kernels starting from 028stab069.1.&lt;br /&gt;
&lt;br /&gt;
=== Files ===&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;/proc/bc/$id/iostat&amp;lt;/code&amp;gt;&lt;br /&gt;
: statistics for beancounter $id&lt;br /&gt;
* &amp;lt;code&amp;gt;/proc/bc/iostat&amp;lt;/code&amp;gt; &lt;br /&gt;
: statistics for all beancounters&lt;br /&gt;
&lt;br /&gt;
=== Format ===&lt;br /&gt;
&lt;br /&gt;
Each file contains one row per disk-beancounter pair.&lt;br /&gt;
&lt;br /&gt;
Columns are:&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! N	!! name	!! type	!! description&lt;br /&gt;
|-&lt;br /&gt;
| 1	|| disk	|| string	|| Disk device name, e.g. sda or hda, or a special queue (like fuse or flush)&lt;br /&gt;
|-&lt;br /&gt;
| 2	|| ub id || integer	|| Beancounter id&lt;br /&gt;
|-&lt;br /&gt;
| 3	|| state || char	|| currently unused (always '.')&lt;br /&gt;
|-&lt;br /&gt;
| 4	|| busy queues	|| integer	|| The number of queues with requests (see below)&lt;br /&gt;
|-&lt;br /&gt;
| 5	|| on dispatch	|| integer	|| currently unused (always '0')&lt;br /&gt;
|-&lt;br /&gt;
| 6	|| activations count	|| integer	|| currently unused (always '0')&lt;br /&gt;
|-&lt;br /&gt;
| 7	|| wait time	 || integer	 || Total time in waiting state in milliseconds&lt;br /&gt;
|-&lt;br /&gt;
| 8	|| used time	 || integer	 || Total time in active state in milliseconds.&lt;br /&gt;
|-&lt;br /&gt;
| 9	|| requests completed	|| integer	|| The number of completed requests&lt;br /&gt;
|-&lt;br /&gt;
| 10	|| sectors transferred	 || integer	|| The number of 512-byte sectors transferred (includes both reads and writes)&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
New columns might be added at the end of a row in the future.&lt;br /&gt;
&lt;br /&gt;
Separate rows exist for fuse and flush, which only report the request and sector stats (the other columns are always 0).&lt;br /&gt;
&lt;br /&gt;
Example of parsing code: parse_proc_iostat() function in [https://src.openvz.org/projects/OVZ/repos/vzstat/browse/src/vzstat.c vzstat.c]&lt;br /&gt;
&lt;br /&gt;
=== Queues ===&lt;br /&gt;
&lt;br /&gt;
Each beancounter may have many queues with requests. Typically there's one queue per task for synchronous requests (e.g. reads) and a fixed number of queues per beancounter for asynchronous requests (e.g. cached writes).&lt;br /&gt;
&lt;br /&gt;
== Interpretation ==&lt;br /&gt;
&lt;br /&gt;
=== Disk usage times ===&lt;br /&gt;
&lt;br /&gt;
The disk usage should be reported in a top-like style. Consider the following code:&lt;br /&gt;
&lt;br /&gt;
 read_iostat(&amp;amp;a);&lt;br /&gt;
 sleep(interval);&lt;br /&gt;
 read_iostat(&amp;amp;b);&lt;br /&gt;
&lt;br /&gt;
Now the following numbers should be calculated and shown.&lt;br /&gt;
&lt;br /&gt;
 active  = sum(b.used_time - a.used_time) * 100 / interval;&lt;br /&gt;
 waiting = sum(b.wait_time - a.wait_time) * 100 / interval;&lt;br /&gt;
 idle    = 100 - (active + waiting);&lt;br /&gt;
&lt;br /&gt;
The &amp;lt;code&amp;gt;sum&amp;lt;/code&amp;gt; function sums up the times over all disks for the beancounter.&lt;br /&gt;
&lt;br /&gt;
Additionally, two more values should be shown for each beancounter.&lt;br /&gt;
&lt;br /&gt;
=== IO speed ===&lt;br /&gt;
&lt;br /&gt;
The value&lt;br /&gt;
&lt;br /&gt;
 sum(b.transfered_sectors - a.transfered_sectors) * 512 / interval&lt;br /&gt;
&lt;br /&gt;
denotes the speed of the IO performed by the beancounter.&lt;br /&gt;
&lt;br /&gt;
=== Average request size ===&lt;br /&gt;
&lt;br /&gt;
The value&lt;br /&gt;
&lt;br /&gt;
 (b.transfered_sectors - a.transfered_sectors)/(b.requests_completed - a.requests_completed)&lt;br /&gt;
&lt;br /&gt;
denotes the average request size for a beancounter to a particular disk.&lt;br /&gt;
&lt;br /&gt;
== See also ==&lt;br /&gt;
* [[IO accounting]]&lt;br /&gt;
* [[I/O priorities]]&lt;br /&gt;
* [[I/O limits]]&lt;/div&gt;</summary>
		<author><name>Finist</name></author>
		
	</entry>
	<entry>
		<id>https://wiki.openvz.org/index.php?title=UBC_failcnt_reset&amp;diff=22542</id>
		<title>UBC failcnt reset</title>
		<link rel="alternate" type="text/html" href="https://wiki.openvz.org/index.php?title=UBC_failcnt_reset&amp;diff=22542"/>
		<updated>2017-02-16T10:25:05Z</updated>

		<summary type="html">&lt;p&gt;Finist: bugzilla.openvz.org -&amp;gt; bugs.openvz.org&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;One of the frequently asked questions is '''How do I reset failcnt in /proc/user_beancounters?''' While it is not recommended, it is possible.&lt;br /&gt;
&lt;br /&gt;
== What is failcnt? ==&lt;br /&gt;
There are a number of resource limits (called [[User Beancounters]], or UBC for short) set for a container. If one of those resources hit its limit, the appropriate fail counter (last column of &amp;lt;code&amp;gt;/proc/user_beancounters&amp;lt;/code&amp;gt;) increases. See [[Resource shortage]] for more info.&lt;br /&gt;
&lt;br /&gt;
== How to clear failcnt? ==&lt;br /&gt;
You do not need to, and this would be an incorrect thing to do.&lt;br /&gt;
&lt;br /&gt;
There can be many applications that read &amp;lt;code&amp;gt;/proc/user_beancounters&amp;lt;/code&amp;gt;, so if you reset it, those other apps may have problems. Consider what happens if you reset the sent/received packet/byte statistics on a network interface: programs that track them may not function properly. &lt;br /&gt;
&lt;br /&gt;
Therefore, the proper usage of failcnt is not to check whether it is zero or not, but to check whether it is increased since the previous readout. In other words, check the difference, not the absolute value. There is a utility {{Man|vzubc|8}} which can be used for that purpose. Also, see [[#Bash script]] below.&lt;br /&gt;
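A minimal sketch of that difference-based check; the two lines below are invented snapshots of a single /proc/user_beancounters row (failcnt is the last column), old one first.&lt;br /&gt;

```shell
old='kmemsize 2612625 2625344 24299200 26429696 3'
new='kmemsize 2712625 2725344 24299200 26429696 7'
# only the growth of failcnt since the previous readout matters
delta=$(( $(echo "$new" | awk '{print $NF}') - $(echo "$old" | awk '{print $NF}') ))
echo "kmemsize failcnt grew by $delta"
```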
&lt;br /&gt;
== OK, I understand, but I still want to clear failcnt! ==&lt;br /&gt;
&lt;br /&gt;
UBC failcnts are stored for the duration of the uptime of your container. Thus, restarting the container resets the counts.&lt;br /&gt;
&lt;br /&gt;
The catch here is that TCP time-wait buckets can still be present after a container is stopped. You can check this by looking at the &amp;lt;code&amp;gt;held&amp;lt;/code&amp;gt; column for the &amp;lt;code&amp;gt;kmemsize&amp;lt;/code&amp;gt; parameter. If it is not zero, you have to wait about 5 minutes for the time-wait buckets to expire and the corresponding beancounter to be uncharged.&lt;br /&gt;
&lt;br /&gt;
If failcnt is still not reset to 0 more than 5 minutes after the container is stopped, your kernel was likely compiled with CONFIG_UBC_KEEP_UNUSED=y; in that case you'll have to switch this option off if you want beancounters to be reset when the container is restarted. &lt;br /&gt;
&lt;br /&gt;
If you're sure your kernel was NOT compiled with the above option and failcnt is still not reset after 5 minutes, there is a bug in the UBC code. Please file a detailed [[bug report]] at [http://bugs.openvz.org bugs.openvz.org].&lt;br /&gt;
&lt;br /&gt;
== vzubc ==&lt;br /&gt;
&lt;br /&gt;
{{Man|vzubc|8}} is a tool that shows user beancounters in a human-readable form. Its relative mode (option -r or --relative) shows the failcnt difference from the previous run. vzubc is available in the vzctl package since vzctl-3.0.27.&lt;br /&gt;
&lt;br /&gt;
== Alternative bash script ==&lt;br /&gt;
&lt;br /&gt;
This script can show the failcount deltas for one or all containers since the last reset, and can reset the failcounts for one or all containers.&lt;br /&gt;
&lt;br /&gt;
It uses only standard commands that are included in virtually all Linux distributions:&lt;br /&gt;
bash, cat, grep, awk, head, tail, printf&lt;br /&gt;
&lt;br /&gt;
=== Installation ===&lt;br /&gt;
&lt;br /&gt;
[[#beanc source code|Create the script]] and save it somewhere you can use it. This can be either your home directory or a directory in your PATH (e.g. /usr/sbin).&lt;br /&gt;
&lt;br /&gt;
The script is intended to be run as root because it needs write permissions in the directory /var/lib/beanc. If that directory doesn't exist, it needs write permissions in /var/lib to be able to create it.&lt;br /&gt;
&lt;br /&gt;
You can also run it as any other user that has access to /proc/user_beancounters if you manually fix the permissions on /var/lib/beanc.&lt;br /&gt;
&lt;br /&gt;
=== Show the beancounters ===&lt;br /&gt;
 beanc show&lt;br /&gt;
This compares the contents of /proc/user_beancounters with the reference file and shows you the delta values.&lt;br /&gt;
If no reference file exists, a copy of the current user_beancounters file is created and used.&lt;br /&gt;
To show the failcounts for only one container, add the ctid or container name to the command:&lt;br /&gt;
 beanc show mailserver&lt;br /&gt;
 beanc show 102&lt;br /&gt;
&lt;br /&gt;
=== Reset failcounters ===&lt;br /&gt;
 beanc reset mailserver&lt;br /&gt;
 beanc reset 102&lt;br /&gt;
 beanc reset                   --&amp;gt; will reset failcounters for all containers&lt;br /&gt;
Confirmation will be asked.&lt;br /&gt;
&lt;br /&gt;
=== Show only failcounts &amp;gt; 0 ===&lt;br /&gt;
 beanc brief&lt;br /&gt;
&lt;br /&gt;
=== Initialize reference file ===&lt;br /&gt;
 beanc init&lt;br /&gt;
&lt;br /&gt;
This will check whether the app directory (/var/lib/beanc) exists and create it if necessary.&lt;br /&gt;
It also creates a reference file (/var/lib/beanc/user_beancounters).&lt;br /&gt;
&lt;br /&gt;
BEWARE: this command will overwrite any existing reference file!&lt;br /&gt;
&lt;br /&gt;
=== beanc source code ===&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
Click a link to the right to view the code →&lt;br /&gt;
&amp;lt;source lang=bash class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#####################################################################################################################&lt;br /&gt;
#&lt;br /&gt;
# This script is intended to check the failcounts in user_beancounters on an OpenVZ machine.&lt;br /&gt;
# Since there is no solid way to reset the failcounts, this script maintains a copy of the file and shows deltas.&lt;br /&gt;
#&lt;br /&gt;
#&lt;br /&gt;
# Filename: beanc&lt;br /&gt;
# Version : 0.1&lt;br /&gt;
#&lt;br /&gt;
#&lt;br /&gt;
# License:&lt;br /&gt;
# -------&lt;br /&gt;
#&lt;br /&gt;
# By using this script you agree there are absolutely no limitations on using it. Of course there are also&lt;br /&gt;
# absolutely no guarantees. Please review the code to make sure it will work for you as expected.&lt;br /&gt;
#&lt;br /&gt;
# Feel free to distribute and/or modify the script.&lt;br /&gt;
#&lt;br /&gt;
# Only thing I will not appreciate is that you change my name into yours, and act like you wrote this script&lt;br /&gt;
# But hey, why would you do that? And how will I ever know?&lt;br /&gt;
#&lt;br /&gt;
# If you make changes, decide to distribute the script or feel the urge to give me feedback, please let me know.&lt;br /&gt;
#&lt;br /&gt;
#&lt;br /&gt;
# Author(s):&lt;br /&gt;
# ---------&lt;br /&gt;
#&lt;br /&gt;
# Written by Steven Broos, 7/7/2011 in a boring RHEL course&lt;br /&gt;
#          Steven@Bit-IT.be&lt;br /&gt;
#&lt;br /&gt;
#&lt;br /&gt;
# Usage:&lt;br /&gt;
# -----&lt;br /&gt;
#&lt;br /&gt;
# 1. Copy the file to a location in your path, for example '/sbin/'&lt;br /&gt;
# 2. Make sure the file can be executed ('chmod 500 /sbin/beanc')&lt;br /&gt;
# 3. Use the script ;-)  At first execution the reference file will be created in '/var/lib/beanc/'&lt;br /&gt;
#     Note that the script is written for intended use by root, and on the OpenVZ host system&lt;br /&gt;
#     Possibly this script won't work in a cron-job or inside a container. This has not been tested.&lt;br /&gt;
#&lt;br /&gt;
#&lt;br /&gt;
# Options:&lt;br /&gt;
# -------&lt;br /&gt;
#&lt;br /&gt;
# 1. Show the beancounters&lt;br /&gt;
#&lt;br /&gt;
#          beanc show&lt;br /&gt;
#&lt;br /&gt;
#     this compares the contents of /proc/user_beancounters and the reference file, and shows you the delta value&lt;br /&gt;
#     if no reference file exists, a copy of the user_beancounters file is used.&lt;br /&gt;
#     to only show you the failcounts for 1 container, just add the ctid or container name to the command&lt;br /&gt;
#&lt;br /&gt;
#          beanc show mailserver&lt;br /&gt;
#          beanc show 102&lt;br /&gt;
#&lt;br /&gt;
# 2. To reset the failcounters for a container (or all containers):&lt;br /&gt;
#&lt;br /&gt;
#          beanc reset mailserver&lt;br /&gt;
#          beanc reset 102&lt;br /&gt;
#          beanc reset                   --&amp;gt; will reset failcounters for all containers&lt;br /&gt;
#&lt;br /&gt;
#     Confirmation will be asked.&lt;br /&gt;
#&lt;br /&gt;
# 3. It is also possible to only show the failcounters &amp;gt; 0&lt;br /&gt;
#&lt;br /&gt;
#          beanc brief&lt;br /&gt;
#&lt;br /&gt;
# 4. If for some reason you want to manually initialize the reference file, you can execute&lt;br /&gt;
#&lt;br /&gt;
#          beanc init&lt;br /&gt;
#&lt;br /&gt;
#     This will check if the app-directory exists (/var/lib/beanc) and create it if necessary&lt;br /&gt;
#     It also creates a reference file (/var/lib/beanc/user_beancounters)&lt;br /&gt;
#     BEWARE, this command will overwrite any existing reference file !&lt;br /&gt;
#&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
#####################################################################################################################&lt;br /&gt;
####                                                                                                             ####&lt;br /&gt;
####  Declaration of some variables. Feel free to adjust                                                         ####&lt;br /&gt;
####                                                                                                             ####&lt;br /&gt;
#####################################################################################################################&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
rspath='/var/lib/beanc'&lt;br /&gt;
rsfile=&amp;quot;$rspath/user_beancounters&amp;quot;&lt;br /&gt;
lines=24&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
#####################################################################################################################&lt;br /&gt;
####                                                                                                             ####&lt;br /&gt;
####  Function declarations. See at the bottom of the script for execution                                       ####&lt;br /&gt;
####                                                                                                             ####&lt;br /&gt;
#####################################################################################################################&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
# show brief help message&lt;br /&gt;
&lt;br /&gt;
function help ()&lt;br /&gt;
{&lt;br /&gt;
        echo &amp;quot;$0 { reset | show | brief | init } [ &amp;lt;vzid&amp;gt; | &amp;lt;vzname&amp;gt; ]&amp;quot;&lt;br /&gt;
        exit 0&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
# check existence of path and reference file, and create if necessary&lt;br /&gt;
# This function is executed by every start of the script&lt;br /&gt;
&lt;br /&gt;
function init ()&lt;br /&gt;
{&lt;br /&gt;
        if [ ! -d &amp;quot;$rspath&amp;quot; ] || [ &amp;quot;$1&amp;quot; != &amp;quot;&amp;quot; ]&lt;br /&gt;
        then&lt;br /&gt;
                mkdir -p &amp;quot;$rspath&amp;quot;&lt;br /&gt;
        fi&lt;br /&gt;
        if [ ! -f &amp;quot;$rsfile&amp;quot; ] || [ &amp;quot;$1&amp;quot; != &amp;quot;&amp;quot; ]&lt;br /&gt;
        then&lt;br /&gt;
                cat /proc/user_beancounters &amp;gt; &amp;quot;$rsfile&amp;quot;&lt;br /&gt;
        fi&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
# Reset the failcounters by putting the current values from /proc/user_beancounters into the reference file&lt;br /&gt;
# either for all containers (cat &amp;gt; ref), or for one container (block per block)&lt;br /&gt;
&lt;br /&gt;
function reset ()&lt;br /&gt;
{&lt;br /&gt;
        if [ &amp;quot;$1&amp;quot; == &amp;quot;&amp;quot; ]&lt;br /&gt;
        then&lt;br /&gt;
                echo -n &amp;quot;Reset failcounts for all containers ? [y/N] &amp;quot;&lt;br /&gt;
                read -n 1 yn&lt;br /&gt;
                echo&lt;br /&gt;
                if [ &amp;quot;$yn&amp;quot; == &amp;quot;y&amp;quot; ]&lt;br /&gt;
                then&lt;br /&gt;
                        cat /proc/user_beancounters &amp;gt; &amp;quot;$rsfile&amp;quot;&lt;br /&gt;
                fi&lt;br /&gt;
        else&lt;br /&gt;
                echo -n &amp;quot;Reset failcounts for container '`vzname $1`' ($1) ? [y/N] &amp;quot;&lt;br /&gt;
                read -n 1 yn&lt;br /&gt;
                echo&lt;br /&gt;
                if [ &amp;quot;$yn&amp;quot; == &amp;quot;y&amp;quot; ]&lt;br /&gt;
                then&lt;br /&gt;
                        mv &amp;quot;$rsfile&amp;quot; &amp;quot;${rsfile}_&amp;quot;&lt;br /&gt;
                        for ctid in `vzlist -Ho ctid`&lt;br /&gt;
                        do&lt;br /&gt;
                                if [ $ctid -eq $1 ]&lt;br /&gt;
                                then&lt;br /&gt;
                                        echo &amp;quot;Resetting '`vzname $ctid`' ($ctid)&amp;quot;&lt;br /&gt;
                                        cat /proc/user_beancounters | getblock $ctid &amp;gt;&amp;gt; &amp;quot;$rsfile&amp;quot;&lt;br /&gt;
                                else&lt;br /&gt;
                                        echo &amp;quot;Keeping '`vzname $ctid`' ($ctid)&amp;quot;&lt;br /&gt;
                                        cat &amp;quot;${rsfile}_&amp;quot; | getblock $ctid &amp;gt;&amp;gt; &amp;quot;$rsfile&amp;quot;&lt;br /&gt;
                                fi&lt;br /&gt;
                        done&lt;br /&gt;
                        rm -f &amp;quot;${rsfile}_&amp;quot;&lt;br /&gt;
                fi&lt;br /&gt;
        fi&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
# Get one or more lines from the middle of the given text : returns the 'head' of a 'tail'&lt;br /&gt;
# $1 : start at line&lt;br /&gt;
# $2 : give this much lines (optional, default 1)&lt;br /&gt;
# example : cat /path/file | getline 10 5&lt;br /&gt;
&lt;br /&gt;
function getline ()&lt;br /&gt;
{&lt;br /&gt;
        start=$1&lt;br /&gt;
        length=$2&lt;br /&gt;
&lt;br /&gt;
        if [ &amp;quot;$length&amp;quot; == &amp;quot;&amp;quot; ]&lt;br /&gt;
        then&lt;br /&gt;
                length=1&lt;br /&gt;
        fi&lt;br /&gt;
&lt;br /&gt;
        cat - | head -n $(($start+$length-1)) | tail -n $length&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
# Get all beancounter values for a container&lt;br /&gt;
# $1 : ctid&lt;br /&gt;
# example : cat /proc/user_beancounters | getblock 102&lt;br /&gt;
&lt;br /&gt;
function getblock ()&lt;br /&gt;
{&lt;br /&gt;
        cat - &amp;gt; &amp;quot;$rspath/tmp&amp;quot;&lt;br /&gt;
        start=`cat -n &amp;quot;$rspath/tmp&amp;quot; | grep &amp;quot; $1:&amp;quot; | awk '{ print $1 }'`&lt;br /&gt;
        # send debug output to stderr so it does not pollute the redirected output&lt;br /&gt;
        echo &amp;quot;start $start&amp;quot; &amp;gt;&amp;amp;2&lt;br /&gt;
        cat &amp;quot;$rspath/tmp&amp;quot; | getline $start $lines&lt;br /&gt;
        rm -f &amp;quot;$rspath/tmp&amp;quot;&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
# Show the contents from /proc/user_beancounters, and substitute the failcounts by a delta with the failcounts in&lt;br /&gt;
# the reference file&lt;br /&gt;
# $1 : ctid (optional, if none given all running containers are processed)&lt;br /&gt;
# example : show 102&lt;br /&gt;
&lt;br /&gt;
function show ()&lt;br /&gt;
{&lt;br /&gt;
        if [ &amp;quot;$1&amp;quot; == &amp;quot;&amp;quot; ]&lt;br /&gt;
        then&lt;br /&gt;
                for ctid in `vzlist -Ho ctid`&lt;br /&gt;
                do&lt;br /&gt;
                        show $ctid&lt;br /&gt;
                done&lt;br /&gt;
        else&lt;br /&gt;
                cstart=`cat /proc/user_beancounters -n | grep &amp;quot; $1:&amp;quot; | awk '{ print $1 }'`&lt;br /&gt;
                hstart=`cat &amp;quot;$rsfile&amp;quot; -n | grep &amp;quot; $1:&amp;quot; | awk '{ print $1 }'`&lt;br /&gt;
&lt;br /&gt;
                for ln in `seq 0 $(($lines-1))`&lt;br /&gt;
                do&lt;br /&gt;
                        current=`cat /proc/user_beancounters | getline $(($cstart+$ln))`&lt;br /&gt;
                        if [ &amp;quot;$hstart&amp;quot; == &amp;quot;&amp;quot; ]&lt;br /&gt;
                        then&lt;br /&gt;
                                history=''&lt;br /&gt;
                        else&lt;br /&gt;
                                history=`cat &amp;quot;$rsfile&amp;quot; | getline $(($hstart+$ln))`&lt;br /&gt;
                        fi&lt;br /&gt;
&lt;br /&gt;
                        if [ $ln -eq 0 ]&lt;br /&gt;
                        then&lt;br /&gt;
                                resource=`echo &amp;quot;$current&amp;quot; | awk '{ print $2 }'`&lt;br /&gt;
                                held=`echo &amp;quot;$current&amp;quot; | awk '{ print $3 }'`&lt;br /&gt;
                                maxheld=`echo &amp;quot;$current&amp;quot; | awk '{ print $4 }'`&lt;br /&gt;
                                barrier=`echo &amp;quot;$current&amp;quot; | awk '{ print $5 }'`&lt;br /&gt;
                                limit=`echo &amp;quot;$current&amp;quot; | awk '{ print $6 }'`&lt;br /&gt;
                                currfcnt=`echo &amp;quot;$current&amp;quot; | awk '{ print $7 }'`&lt;br /&gt;
                                histfcnt=`echo &amp;quot;$history&amp;quot; | awk '{ print $7 }'`&lt;br /&gt;
&lt;br /&gt;
                                fgcolor white&lt;br /&gt;
                                echo ' ------------------------------------------------------------------------------------------------------------------------'&lt;br /&gt;
                                printf &amp;quot;|%14s : %-12s %89s |\n&amp;quot; $1 `vzname $1` &amp;quot;`vzfqdn $1` (`vzip $1`)&amp;quot;&lt;br /&gt;
                                printf &amp;quot;|%14s%21s%21s%21s%21s%21s |\n&amp;quot; resource held maxheld barrier limit failcnt&lt;br /&gt;
                                echo ' ------------------------------------------------------------------------------------------------------------------------'&lt;br /&gt;
                                fgcolor reset&lt;br /&gt;
                        else&lt;br /&gt;
                                resource=`echo &amp;quot;$current&amp;quot; | awk '{ print $1 }'`&lt;br /&gt;
                                held=`echo &amp;quot;$current&amp;quot; | awk '{ print $2 }'`&lt;br /&gt;
                                maxheld=`echo &amp;quot;$current&amp;quot; | awk '{ print $3 }'`&lt;br /&gt;
                                barrier=`echo &amp;quot;$current&amp;quot; | awk '{ print $4 }'`&lt;br /&gt;
                                limit=`echo &amp;quot;$current&amp;quot; | awk '{ print $5 }'`&lt;br /&gt;
                                currfcnt=`echo &amp;quot;$current&amp;quot; | awk '{ print $6 }'`&lt;br /&gt;
                                histfcnt=`echo &amp;quot;$history&amp;quot; | awk '{ print $6 }'`&lt;br /&gt;
                        fi&lt;br /&gt;
                        if [ &amp;quot;$histfcnt&amp;quot; == &amp;quot;&amp;quot; ]&lt;br /&gt;
                        then&lt;br /&gt;
                                failcnt=$currfcnt&lt;br /&gt;
                        else&lt;br /&gt;
                                failcnt=$(($currfcnt-$histfcnt))&lt;br /&gt;
                        fi&lt;br /&gt;
&lt;br /&gt;
                        printf ' '&lt;br /&gt;
                        if [ $failcnt -gt 0 ]&lt;br /&gt;
                        then&lt;br /&gt;
                                bgcolor red&lt;br /&gt;
                                fgcolor white&lt;br /&gt;
                                echo -en '*'&lt;br /&gt;
                        else&lt;br /&gt;
                                fgcolor green&lt;br /&gt;
                                echo -en ' '&lt;br /&gt;
                        fi&lt;br /&gt;
                        printf &amp;quot;%13s%21s%21s%21s%21s%21s %.s&amp;quot; $resource $held $maxheld $barrier $limit $failcnt &amp;quot;($histfcnt-$currfcnt)&amp;quot;&lt;br /&gt;
                        fgcolor reset&lt;br /&gt;
                        bgcolor reset&lt;br /&gt;
                        echo&lt;br /&gt;
                done&lt;br /&gt;
        fi&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
# Shows only counters &amp;gt; 0, by grepping on '|' and '-' (for the header), and '*' (for the matching failcounter)&lt;br /&gt;
# If no '*' has been found, nothing for that container will be shown&lt;br /&gt;
&lt;br /&gt;
function showbrief ()&lt;br /&gt;
{&lt;br /&gt;
        echo 'Calculating ...'&lt;br /&gt;
        for ctid in `vzlist -Ho ctid`&lt;br /&gt;
        do&lt;br /&gt;
                result=`show $ctid`&lt;br /&gt;
                matches=`echo -en &amp;quot;$result&amp;quot; | grep '*' | wc -l`&lt;br /&gt;
                if [ $matches -gt 0 ]&lt;br /&gt;
                then&lt;br /&gt;
                        echo -en &amp;quot;$result&amp;quot; | grep -e '|' -e '*' -e '-'&lt;br /&gt;
                fi&lt;br /&gt;
        done&lt;br /&gt;
        echo 'Done!'&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
# This function returns the ctid for a container.&lt;br /&gt;
# $1 : container-name or even the ctid itself, since you cannot know what value the user has given&lt;br /&gt;
&lt;br /&gt;
function vzid ()&lt;br /&gt;
{&lt;br /&gt;
        vzlist -o name,ctid | grep -e &amp;quot;^$1 &amp;quot; -e &amp;quot; $1$&amp;quot; | awk '{ print $2 }'&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
# Returns the IP for the given ctid&lt;br /&gt;
# $1 : ctid&lt;br /&gt;
&lt;br /&gt;
function vzip ()&lt;br /&gt;
{&lt;br /&gt;
        vzlist -o ctid,ip | grep &amp;quot; $1 &amp;quot; | awk '{ print $2 }'&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
# Returns the fully qualified domain name for the given ctid&lt;br /&gt;
# $1 : ctid&lt;br /&gt;
&lt;br /&gt;
function vzfqdn ()&lt;br /&gt;
{&lt;br /&gt;
        vzlist -o ctid,hostname | grep &amp;quot; $1 &amp;quot; | awk '{ print $2 }'&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
# Returns the short name for the given ctid&lt;br /&gt;
# $1 : ctid&lt;br /&gt;
&lt;br /&gt;
function vzname ()&lt;br /&gt;
{&lt;br /&gt;
        vzlist -o ctid,name | grep &amp;quot; $1 &amp;quot; | awk '{ print $2 }'&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
# This function outputs the escape code for the specified textcolor&lt;br /&gt;
# $1 : Color by name (black, blue, ...)&lt;br /&gt;
#      Or reset to revert to the terminal's default colors&lt;br /&gt;
# example : fgcolor red; echo 'red text'; fgcolor reset&lt;br /&gt;
&lt;br /&gt;
function fgcolor ()&lt;br /&gt;
{&lt;br /&gt;
        case $1 in&lt;br /&gt;
                'black') echo -en &amp;quot;\033[1;30m&amp;quot; ;;&lt;br /&gt;
                'green') echo -en &amp;quot;\033[1;32m&amp;quot; ;;&lt;br /&gt;
                'red') echo -en &amp;quot;\033[1;31m&amp;quot; ;;&lt;br /&gt;
                'cyan') echo -en &amp;quot;\033[1;36m&amp;quot; ;;&lt;br /&gt;
                'white') echo -en &amp;quot;\033[1;37m&amp;quot; ;;&lt;br /&gt;
                'reset') tput sgr0 ;;&lt;br /&gt;
        esac&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
# This function outputs the escape code for the specified backgroundcolor&lt;br /&gt;
# $1 : Color by name (black, blue, ...)&lt;br /&gt;
#      Or reset to revert to the terminal's default colors&lt;br /&gt;
# example : bgcolor green; echo 'highlighted text'; bgcolor reset&lt;br /&gt;
&lt;br /&gt;
function bgcolor ()&lt;br /&gt;
{&lt;br /&gt;
        case $1 in&lt;br /&gt;
                black) echo -en &amp;quot;\033[0;40m&amp;quot; ;;&lt;br /&gt;
                green) echo -en &amp;quot;\033[0;42m&amp;quot; ;;&lt;br /&gt;
                red) echo -en &amp;quot;\033[0;41m&amp;quot; ;;&lt;br /&gt;
                cyan) echo -en &amp;quot;\033[0;46m&amp;quot; ;;&lt;br /&gt;
                white) echo -en &amp;quot;\033[0;47m&amp;quot; ;;&lt;br /&gt;
                reset) tput sgr0 ;;&lt;br /&gt;
        esac&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
#####################################################################################################################&lt;br /&gt;
####                                                                                                             ####&lt;br /&gt;
####  Execution                                                                                                  ####&lt;br /&gt;
####                                                                                                             ####&lt;br /&gt;
#####################################################################################################################&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
# If no options are given, show a brief help message&lt;br /&gt;
&lt;br /&gt;
if [ &amp;quot;$1&amp;quot; == &amp;quot;&amp;quot; ]&lt;br /&gt;
then&lt;br /&gt;
        help&lt;br /&gt;
fi&lt;br /&gt;
&lt;br /&gt;
# Check initialisation&lt;br /&gt;
&lt;br /&gt;
init&lt;br /&gt;
&lt;br /&gt;
# Get container ID (either by name or ID)&lt;br /&gt;
&lt;br /&gt;
vzid=`vzid $2`&lt;br /&gt;
&lt;br /&gt;
# If a container has been specified ($2) but none was found, give a warning and exit&lt;br /&gt;
&lt;br /&gt;
if [ &amp;quot;$vzid&amp;quot; == &amp;quot;&amp;quot; ] &amp;amp;&amp;amp; [ &amp;quot;$2&amp;quot; != &amp;quot;&amp;quot; ]&lt;br /&gt;
then&lt;br /&gt;
        echo &amp;quot;Container $2 not running&amp;quot;&lt;br /&gt;
        exit 1&lt;br /&gt;
fi&lt;br /&gt;
&lt;br /&gt;
# Check what to do and call the right functions&lt;br /&gt;
&lt;br /&gt;
case $1 in&lt;br /&gt;
        'init')&lt;br /&gt;
                init 1&lt;br /&gt;
                ;;&lt;br /&gt;
        'show')&lt;br /&gt;
                # If no container has been specified, a lot of output can be expected.&lt;br /&gt;
                # You can pipe it through 'less -R' to keep the colour formatting (disabled by default below).&lt;br /&gt;
&lt;br /&gt;
                if [ &amp;quot;$2&amp;quot; == &amp;quot;&amp;quot; ]&lt;br /&gt;
                then&lt;br /&gt;
                        show $vzid #| less -R&lt;br /&gt;
                else&lt;br /&gt;
                        show $vzid&lt;br /&gt;
                fi&lt;br /&gt;
                ;;&lt;br /&gt;
        'brief')&lt;br /&gt;
                showbrief&lt;br /&gt;
                ;;&lt;br /&gt;
        'reset')&lt;br /&gt;
                reset $vzid&lt;br /&gt;
                ;;&lt;br /&gt;
        *)&lt;br /&gt;
                help;&lt;br /&gt;
esac&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== External links ==&lt;br /&gt;
* {{Forum|361}}&lt;br /&gt;
* {{Forum|497}}&lt;br /&gt;
* http://sisyphus.ru/en/srpm/yabeda&lt;br /&gt;
&lt;br /&gt;
[[Category:FAQ]]&lt;br /&gt;
[[Category:UBC]]&lt;/div&gt;</summary>
		<author><name>Finist</name></author>
		
	</entry>
	<entry>
		<id>https://wiki.openvz.org/index.php?title=Docker_inside_CT_vz7&amp;diff=20945</id>
		<title>Docker inside CT vz7</title>
		<link rel="alternate" type="text/html" href="https://wiki.openvz.org/index.php?title=Docker_inside_CT_vz7&amp;diff=20945"/>
		<updated>2016-09-13T10:22:48Z</updated>

		<summary type="html">&lt;p&gt;Finist: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Since Virtuozzo 7 kernel 3.10.0-327.18.2.vz7.14.7, it is possible to run Docker inside containers.&lt;br /&gt;
&lt;br /&gt;
'''Please be aware that this feature is experimental and is not supported in production! We plan to make it production-ready in upcoming updates.'''&lt;br /&gt;
&lt;br /&gt;
'''This page is applicable for Virtuozzo 7''' (for Virtuozzo 6 see [[Docker inside CT | '''here''']]).&lt;br /&gt;
&lt;br /&gt;
== Prerequisites ==&lt;br /&gt;
&lt;br /&gt;
* Kernel 3.10.0-327.18.2.vz7.14.7 or later&lt;br /&gt;
* Kernel modules '''veth''' and '''overlay''' loaded on host&lt;br /&gt;
&lt;br /&gt;
To load the '''veth''' and '''overlay''' modules, run:&lt;br /&gt;
 modprobe veth&lt;br /&gt;
 modprobe overlay &lt;br /&gt;
&lt;br /&gt;
'''Note:''' if you use a kernel in the range 3.10.0-327.18.2.vz7.14.25 &amp;lt;= kernel &amp;lt;= 3.10.0-327.28.2.vz7.17.5, you need to explicitly allow using &amp;quot;overlayfs&amp;quot; inside a Virtuozzo Container:&lt;br /&gt;
 echo 1 &amp;gt; /proc/sys/fs/experimental_fs_enable&lt;br /&gt;
This was a temporary step; with kernel &amp;gt;= 3.10.0-327.28.2.vz7.17.6, overlayfs can be used inside a Container by default.&lt;br /&gt;
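The version comparison in the note above can be automated. The following is an illustrative sketch (not from the original page) that uses sort -V to test whether a kernel version string falls in the affected range:&lt;br /&gt;

```shell
# Hypothetical helper: report whether a kernel version string falls in
# the range that needs the experimental_fs_enable toggle (inclusive).
in_range() {
    lo=3.10.0-327.18.2.vz7.14.25
    hi=3.10.0-327.28.2.vz7.17.5
    # sort -V orders version strings; head picks the lower of the pair
    if [ "$(printf '%s\n' "$lo" "$1" | sort -V | head -n1)" = "$lo" ]; then
        if [ "$(printf '%s\n' "$1" "$hi" | sort -V | head -n1)" = "$1" ]; then
            echo "overlayfs toggle needed"
            return 0
        fi
    fi
    return 1
}
in_range "$(uname -r)" || echo "no toggle needed"
```

sort -V (GNU coreutils) compares dotted version segments numerically, which handles these vz7 kernel strings.&lt;br /&gt;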
&lt;br /&gt;
== Limitations ==&lt;br /&gt;
&lt;br /&gt;
* Only the '''overlay''' and '''vfs''' Docker graph drivers are currently supported; the recommended driver is '''overlay'''. To enable the '''overlay''' storage driver for the Docker engine inside a CT, see https://docs.docker.com/engine/userguide/storagedriver/selectadriver/&lt;br /&gt;
* [[Checkpointing and live migration]] of a container with Docker containers inside is not supported yet (to be done)&lt;br /&gt;
&lt;br /&gt;
== Container tuning ==&lt;br /&gt;
&lt;br /&gt;
* Turn on the '''bridge''' feature to allow Docker to create bridged networks inside the container:&lt;br /&gt;
 prlctl set $veid --features bridge:on&lt;br /&gt;
* Allow all iptables modules to be used in the container:&lt;br /&gt;
 prlctl set $veid --netfilter=full&lt;br /&gt;
&lt;br /&gt;
== Docker install ==&lt;br /&gt;
&lt;br /&gt;
To install Docker inside a container, follow the Docker installation guide for your OS:&lt;br /&gt;
https://docs.docker.com/v1.11/engine/installation/&lt;br /&gt;
&lt;br /&gt;
[[Category:HOWTO]]&lt;br /&gt;
[[Category: TRD]]&lt;/div&gt;</summary>
		<author><name>Finist</name></author>
		
	</entry>
	<entry>
		<id>https://wiki.openvz.org/index.php?title=Using_private_IPs_for_Hardware_Nodes&amp;diff=20924</id>
		<title>Using private IPs for Hardware Nodes</title>
		<link rel="alternate" type="text/html" href="https://wiki.openvz.org/index.php?title=Using_private_IPs_for_Hardware_Nodes&amp;diff=20924"/>
		<updated>2016-08-20T08:38:44Z</updated>

		<summary type="html">&lt;p&gt;Finist: Mark as applicable for OpenVZ 6 only.&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;'''This page is applicable for OpenVZ 6''' (for Virtuozzo 7 documentation see [http://docs.openvz.org '''here''']).&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;This article describes how to assign public IPs to containers running on OVZ Hardware Nodes in case you have the following network topology:&lt;br /&gt;
&lt;br /&gt;
[[Image:PrivateIPs_fig1.gif|An initial network topology]]&lt;br /&gt;
&lt;br /&gt;
== Using a spare IP in the same range ==&lt;br /&gt;
If you have a spare IP to use, you can assign it to a subinterface and use it as the nameserver:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;[HN] ifconfig eth0:1 *.*.*.*&lt;br /&gt;
[HN] vzctl set 101 --nameserver *.*.*.*&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Prerequisites ==&lt;br /&gt;
This configuration was tested on a RHEL5 OpenVZ Hardware Node and a container based on a Fedora Core 5 template.&lt;br /&gt;
Other host OSes and templates might require some configuration changes; please add the corresponding OS-specific changes if you encounter any.&lt;br /&gt;
&lt;br /&gt;
This article assumes the presence of the 'brctl', 'ip' and 'ifconfig' utilities. You may need to install missing packages like 'bridge-utils', 'iproute' or 'net-tools', which contain those utilities.&lt;br /&gt;
&lt;br /&gt;
This article assumes you have already [[Quick installation|installed OpenVZ]],&lt;br /&gt;
prepared the [[OS template cache]](s) and have&lt;br /&gt;
[[Basic_operations_in_OpenVZ_environment|container(s) created]]. If not, follow the links to perform the steps needed.&lt;br /&gt;
{{Note|don't assign an IP after container creation.}}&lt;br /&gt;
&lt;br /&gt;
== An OVZ Hardware Node has only one Ethernet interface ==&lt;br /&gt;
(assume eth0)&lt;br /&gt;
&lt;br /&gt;
=== Hardware Node configuration ===&lt;br /&gt;
&lt;br /&gt;
{{Warning|if you are '''configuring''' the node '''remotely''' you '''must''' prepare a '''script''' with the commands below and run it in the background with redirected output, or you'll '''lose access''' to the Node.}}&lt;br /&gt;
&lt;br /&gt;
==== Create a bridge device ====&lt;br /&gt;
 [HN]# brctl addbr br0&lt;br /&gt;
&lt;br /&gt;
==== Remove an IP from eth0 interface ====&lt;br /&gt;
 [HN]# ifconfig eth0 0&lt;br /&gt;
&lt;br /&gt;
==== Add eth0 interface into the bridge ====&lt;br /&gt;
 [HN]# brctl addif br0 eth0&lt;br /&gt;
 &lt;br /&gt;
==== Assign the IP to the bridge ====&lt;br /&gt;
(the same that was assigned on eth0 earlier)&lt;br /&gt;
 [HN]# ifconfig br0 10.0.0.2/24&lt;br /&gt;
&lt;br /&gt;
==== Resurrect the default routing ====&lt;br /&gt;
 [HN]# ip route add default via 10.0.0.1 dev br0&lt;br /&gt;
 &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== A script example ====&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[HN]# cat /tmp/br_add &lt;br /&gt;
#!/bin/bash&lt;br /&gt;
&lt;br /&gt;
brctl addbr br0&lt;br /&gt;
ifconfig eth0 0 &lt;br /&gt;
brctl addif br0 eth0 &lt;br /&gt;
ifconfig br0 10.0.0.2/24 &lt;br /&gt;
ip route add default via 10.0.0.1 dev br0&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
 [HN]# /tmp/br_add &amp;gt;/dev/null 2&amp;gt;&amp;amp;1 &amp;amp;&lt;br /&gt;
&lt;br /&gt;
=== Container configuration ===&lt;br /&gt;
&lt;br /&gt;
==== Start a container ====&lt;br /&gt;
 [HN]# vzctl start 101&lt;br /&gt;
&lt;br /&gt;
==== Add a [[Virtual_Ethernet_device|veth interface]] to the container ====&lt;br /&gt;
 [HN]# vzctl set 101 --netif_add eth0 --save&lt;br /&gt;
&lt;br /&gt;
==== Set up an IP to the newly created container's veth interface ====&lt;br /&gt;
 [HN]# vzctl exec 101 ifconfig eth0 85.86.87.195/26&lt;br /&gt;
 &lt;br /&gt;
==== Add the container's veth interface to the bridge ====&lt;br /&gt;
 [HN]# brctl addif br0 veth101.0&lt;br /&gt;
&lt;br /&gt;
{{Note|There will be a delay of about 15 seconds (the default for the 2.6.18 kernel) while the bridge software runs STP to detect loops and transitions the veth interface to the forwarding state.&lt;br /&gt;
&amp;lt;!-- /sys/class/net/$BR_NAME/bridge/forward_delay in SEC*USER_HZ --&amp;gt;}}&lt;br /&gt;
&lt;br /&gt;
==== Set up the default route for the container ====&lt;br /&gt;
 [HN]# vzctl exec 101 ip route add default via 85.86.87.193 dev eth0&lt;br /&gt;
 &lt;br /&gt;
==== (Optional) Add CT↔HN routes ====&lt;br /&gt;
The above configuration provides the following connections:&lt;br /&gt;
* CT X ↔ CT Y (where CT X and CT Y can locate on any OVZ HN)&lt;br /&gt;
* CT   ↔ Internet&lt;br /&gt;
&lt;br /&gt;
Note that&lt;br /&gt;
&lt;br /&gt;
* The accessability of the CT from the HN depends on the local gateway providing NAT (probably - yes)&lt;br /&gt;
&lt;br /&gt;
* The accessibility of the HN from the CT depends on whether the ISP gateway is aware of the local network (probably not)&lt;br /&gt;
&lt;br /&gt;
So, to provide CT ↔ HN accessibility regardless of the gateways' configuration, you can add the following routes:&lt;br /&gt;
&lt;br /&gt;
 [HN]# ip route add 85.86.87.195 dev br0&lt;br /&gt;
 [HN]# vzctl exec 101 ip route add 10.0.0.2 dev eth0&lt;br /&gt;
&lt;br /&gt;
=== Resulting OpenVZ Node configuration ===&lt;br /&gt;
[[Image:PrivateIPs_fig2.gif|Resulting OpenVZ Node configuration]]&lt;br /&gt;
&lt;br /&gt;
=== Making the configuration persistent ===&lt;br /&gt;
&lt;br /&gt;
==== Set up a bridge on a HN ====&lt;br /&gt;
This can be done by configuring the &amp;lt;code&amp;gt;ifcfg-*&amp;lt;/code&amp;gt; files located in &amp;lt;code&amp;gt;/etc/sysconfig/network-scripts/&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Assuming you had a configuration file (e.g. &amp;lt;code&amp;gt;ifcfg-eth0&amp;lt;/code&amp;gt;) like:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
DEVICE=eth0&lt;br /&gt;
ONBOOT=yes&lt;br /&gt;
IPADDR=10.0.0.2&lt;br /&gt;
NETMASK=255.255.255.0&lt;br /&gt;
GATEWAY=10.0.0.1&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To automatically create bridge &amp;lt;code&amp;gt;br0&amp;lt;/code&amp;gt;  you can create &amp;lt;code&amp;gt;ifcfg-br0&amp;lt;/code&amp;gt;:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
DEVICE=br0&lt;br /&gt;
TYPE=Bridge&lt;br /&gt;
ONBOOT=yes&lt;br /&gt;
IPADDR=10.0.0.2&lt;br /&gt;
NETMASK=255.255.255.0&lt;br /&gt;
GATEWAY=10.0.0.1&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and edit &amp;lt;code&amp;gt;ifcfg-eth0&amp;lt;/code&amp;gt; to add the &amp;lt;code&amp;gt;eth0&amp;lt;/code&amp;gt; interface into the bridge &amp;lt;code&amp;gt;br0&amp;lt;/code&amp;gt;:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
DEVICE=eth0&lt;br /&gt;
ONBOOT=yes&lt;br /&gt;
BRIDGE=br0&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Edit the container's configuration ====&lt;br /&gt;
Add the following parameters, which will be used during network configuration, to the &amp;lt;code&amp;gt;/etc/vz/conf/$CTID.conf&amp;lt;/code&amp;gt; file:&lt;br /&gt;
* Add &amp;lt;code&amp;gt;VETH_IP_ADDRESS=&amp;quot;IP/MASK&amp;quot;&amp;lt;/code&amp;gt; (a container can have multiple IPs separated by spaces)&lt;br /&gt;
* Add &amp;lt;code&amp;gt;VE_DEFAULT_GATEWAY=&amp;quot;CT DEFAULT GATEWAY&amp;quot;&amp;lt;/code&amp;gt;&lt;br /&gt;
* Add &amp;lt;code&amp;gt;BRIDGEDEV=&amp;quot;BRIDGE NAME&amp;quot;&amp;lt;/code&amp;gt; (a bridge name to which the container veth interface should be added)&lt;br /&gt;
&lt;br /&gt;
An example:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Network customization section&lt;br /&gt;
VETH_IP_ADDRESS=&amp;quot;85.86.87.195/26&amp;quot;&lt;br /&gt;
VE_DEFAULT_GATEWAY=&amp;quot;85.86.87.193&amp;quot;&lt;br /&gt;
BRIDGEDEV=&amp;quot;br0&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Create a custom network configuration script ====&lt;br /&gt;
which should be called each time a container is started (e.g. &amp;lt;code&amp;gt;/usr/sbin/vznetcfg.custom&amp;lt;/code&amp;gt;):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
# /usr/sbin/vznetcfg.custom&lt;br /&gt;
# a script to bring up bridged network interfaces (veth's) in a container&lt;br /&gt;
&lt;br /&gt;
GLOBALCONFIGFILE=/etc/vz/vz.conf&lt;br /&gt;
CTCONFIGFILE=/etc/vz/conf/$VEID.conf&lt;br /&gt;
vzctl=/usr/sbin/vzctl&lt;br /&gt;
brctl=/usr/sbin/brctl&lt;br /&gt;
ip=/sbin/ip&lt;br /&gt;
ifconfig=/sbin/ifconfig&lt;br /&gt;
. $GLOBALCONFIGFILE&lt;br /&gt;
. $CTCONFIGFILE&lt;br /&gt;
&lt;br /&gt;
NETIF_OPTIONS=`echo $NETIF | sed 's/,/\n/g'`&lt;br /&gt;
for str in $NETIF_OPTIONS; do \&lt;br /&gt;
        # getting 'ifname' parameter value&lt;br /&gt;
        if  echo &amp;quot;$str&amp;quot; | grep -o &amp;quot;^ifname=&amp;quot; ; then&lt;br /&gt;
                # remove the parameter name from the string (along with '=')&lt;br /&gt;
                CTIFNAME=${str#*=};&lt;br /&gt;
        fi&lt;br /&gt;
        # getting 'host_ifname' parameter value&lt;br /&gt;
        if  echo &amp;quot;$str&amp;quot; | grep -o &amp;quot;^host_ifname=&amp;quot; ; then&lt;br /&gt;
                # remove the parameter name from the string (along with '=')&lt;br /&gt;
                VZHOSTIF=${str#*=};&lt;br /&gt;
        fi&lt;br /&gt;
done&lt;br /&gt;
&lt;br /&gt;
if [ ! -n &amp;quot;$VETH_IP_ADDRESS&amp;quot; ]; then&lt;br /&gt;
   echo &amp;quot;According to $CTCONFIGFILE CT$VEID has no veth IPs configured.&amp;quot;&lt;br /&gt;
   exit 1&lt;br /&gt;
fi&lt;br /&gt;
&lt;br /&gt;
if [ ! -n &amp;quot;$VZHOSTIF&amp;quot; ]; then&lt;br /&gt;
   echo &amp;quot;According to $CTCONFIGFILE CT$VEID has no veth interface configured.&amp;quot;&lt;br /&gt;
   exit 1&lt;br /&gt;
fi&lt;br /&gt;
&lt;br /&gt;
if [ ! -n &amp;quot;$CTIFNAME&amp;quot; ]; then&lt;br /&gt;
   echo &amp;quot;Corrupted $CTCONFIGFILE: no 'ifname' defined for host_ifname $VZHOSTIF.&amp;quot;&lt;br /&gt;
   exit 1&lt;br /&gt;
fi&lt;br /&gt;
&lt;br /&gt;
echo &amp;quot;Initializing interface $VZHOSTIF for CT$VEID.&amp;quot;&lt;br /&gt;
$ifconfig $VZHOSTIF 0&lt;br /&gt;
&lt;br /&gt;
CTROUTEDEV=$VZHOSTIF&lt;br /&gt;
&lt;br /&gt;
if [ -n &amp;quot;$BRIDGEDEV&amp;quot; ]; then&lt;br /&gt;
   echo &amp;quot;Adding interface $VZHOSTIF to the bridge $BRIDGEDEV.&amp;quot;&lt;br /&gt;
   CTROUTEDEV=$BRIDGEDEV&lt;br /&gt;
   $brctl addif $BRIDGEDEV $VZHOSTIF&lt;br /&gt;
fi&lt;br /&gt;
&lt;br /&gt;
# Up the interface $CTIFNAME link in CT$VEID&lt;br /&gt;
$vzctl exec $VEID $ip link set $CTIFNAME up&lt;br /&gt;
&lt;br /&gt;
for IP in $VETH_IP_ADDRESS; do&lt;br /&gt;
   echo &amp;quot;Adding an IP $IP to the $CTIFNAME for CT$VEID.&amp;quot;&lt;br /&gt;
   $vzctl exec $VEID $ip address add $IP dev $CTIFNAME&lt;br /&gt;
&lt;br /&gt;
   # removing the netmask&lt;br /&gt;
   IP_STRIP=${IP%%/*};&lt;br /&gt;
&lt;br /&gt;
   echo &amp;quot;Adding a route from CT0 to CT$VEID using $IP_STRIP.&amp;quot;&lt;br /&gt;
   $ip route add $IP_STRIP dev $CTROUTEDEV&lt;br /&gt;
done&lt;br /&gt;
&lt;br /&gt;
if [ -n &amp;quot;$CT0_IP&amp;quot; ]; then&lt;br /&gt;
   echo &amp;quot;Adding a route from CT$VEID to CT0.&amp;quot;&lt;br /&gt;
   $vzctl exec $VEID $ip route add $CT0_IP dev $CTIFNAME&lt;br /&gt;
fi&lt;br /&gt;
&lt;br /&gt;
if [ -n &amp;quot;$VE_DEFAULT_GATEWAY&amp;quot; ]; then&lt;br /&gt;
   echo &amp;quot;Setting $VE_DEFAULT_GATEWAY as a default gateway for CT$VEID.&amp;quot;&lt;br /&gt;
   $vzctl exec $VEID \&lt;br /&gt;
        $ip route add default via $VE_DEFAULT_GATEWAY dev $CTIFNAME&lt;br /&gt;
fi&lt;br /&gt;
&lt;br /&gt;
exit 0&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;p&amp;gt;&amp;lt;small&amp;gt;Note: this script can easily be extended to handle multiple &amp;amp;lt;bridge, IP address, veth device&amp;amp;gt; triples; see http://sysadmin-ivanov.blogspot.com/2008/02/2-veth-with-2-bridges-on-openvz-at.html &amp;lt;/small&amp;gt;&amp;lt;/p&amp;gt;&lt;br /&gt;
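The NETIF parsing done by the script above can be illustrated in isolation. The following standalone sketch uses an invented sample NETIF value (the MAC addresses and interface names are hypothetical), and for brevity it uses a case statement instead of the script's grep calls; the result is the same:&lt;br /&gt;

```shell
#!/bin/bash
# Standalone illustration of the NETIF parsing done by vznetcfg.custom.
# The NETIF value below is an invented sample, not taken from a real container.
NETIF="ifname=eth0,mac=00:18:51:C2:A5:B6,host_ifname=veth101.0,host_mac=00:18:51:A1:B2:C3"

# Split the comma-separated option string into one option per line,
# then extract the values of 'ifname' and 'host_ifname':
NETIF_OPTIONS=$(echo "$NETIF" | sed 's/,/\n/g')
for str in $NETIF_OPTIONS; do
    case "$str" in
        ifname=*)      CTIFNAME=${str#*=} ;;  # interface name inside the CT
        host_ifname=*) VZHOSTIF=${str#*=} ;;  # host-side veth device
    esac
done

echo "CT interface:   $CTIFNAME"
echo "host interface: $VZHOSTIF"
```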
&lt;br /&gt;
==== Make the script run on container start ====&lt;br /&gt;
To run the above script each time a container starts, create the file&lt;br /&gt;
&amp;lt;code&amp;gt;/etc/vz/vznet.conf&amp;lt;/code&amp;gt; with the following contents:&lt;br /&gt;
&lt;br /&gt;
 EXTERNAL_SCRIPT=&amp;quot;/usr/sbin/vznetcfg.custom&amp;quot;&lt;br /&gt;
&lt;br /&gt;
{{Note|&amp;lt;code&amp;gt;/usr/sbin/vznetcfg.custom&amp;lt;/code&amp;gt; should be executable (chmod +x /usr/sbin/vznetcfg.custom)}}&lt;br /&gt;
&lt;br /&gt;
{{Note|When a CT is stopped, the HN → CT route(s) are still present in the routing table. An on-umount script can be used to solve this.}}&lt;br /&gt;
&lt;br /&gt;
==== Create an on-umount script to remove the HN → CT route(s) ====&lt;br /&gt;
which will be called each time the container with the given VEID (&amp;lt;code&amp;gt;/etc/vz/conf/$VEID.umount&amp;lt;/code&amp;gt;) or any container (&amp;lt;code&amp;gt;/etc/vz/conf/vps.umount&amp;lt;/code&amp;gt;) is stopped.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
# /etc/vz/conf/$VEID.umount or /etc/vz/conf/vps.umount&lt;br /&gt;
# a script to remove the routes to a veth-bridged container from the bridge&lt;br /&gt;
&lt;br /&gt;
CTCONFIGFILE=/etc/vz/conf/$VEID.conf&lt;br /&gt;
ip=/sbin/ip&lt;br /&gt;
. $CTCONFIGFILE&lt;br /&gt;
&lt;br /&gt;
if [ ! -n &amp;quot;$VETH_IP_ADDRESS&amp;quot; ]; then&lt;br /&gt;
   exit 0&lt;br /&gt;
fi&lt;br /&gt;
&lt;br /&gt;
if [ ! -n &amp;quot;$BRIDGEDEV&amp;quot; ]; then&lt;br /&gt;
   exit 0&lt;br /&gt;
fi&lt;br /&gt;
&lt;br /&gt;
for IP in $VETH_IP_ADDRESS; do&lt;br /&gt;
   # removing the netmask&lt;br /&gt;
   IP_STRIP=${IP%%/*};&lt;br /&gt;
   &lt;br /&gt;
   echo &amp;quot;Remove a route from CT0 to CT$VEID using $IP_STRIP.&amp;quot;&lt;br /&gt;
   $ip route del $IP_STRIP dev $BRIDGEDEV&lt;br /&gt;
done&lt;br /&gt;
&lt;br /&gt;
exit 0&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
{{Note|The script should be executable (chmod +x /etc/vz/conf/vps.umount)}}&lt;br /&gt;
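Both scripts rely on the ${IP%%/*} parameter expansion to strip the netmask from an address. A minimal demonstration:&lt;br /&gt;

```shell
# ${IP%%/*} removes the longest suffix matching '/*',
# i.e. everything from the first '/' onwards — the netmask part.
IP="85.86.87.195/26"
IP_STRIP=${IP%%/*}
echo "$IP_STRIP"
```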
&lt;br /&gt;
==== Setting the route CT → HN ====&lt;br /&gt;
To set up a route from the CT to the HN, the custom script has to know the HN IP (the $CT0_IP variable in the script). There are several ways to specify it:&lt;br /&gt;
&lt;br /&gt;
# Add an entry CT0_IP=&amp;quot;CT0 IP&amp;quot; to the &amp;lt;code&amp;gt;$VEID.conf&amp;lt;/code&amp;gt;&lt;br /&gt;
# Add an entry CT0_IP=&amp;quot;CT0 IP&amp;quot; to the &amp;lt;code&amp;gt;/etc/vz/vz.conf&amp;lt;/code&amp;gt; (the global configuration file)&lt;br /&gt;
# Implement some smart algorithm to determine the CT0 IP right in the custom network configuration script&lt;br /&gt;
&lt;br /&gt;
Each variant has its pros and cons; nevertheless, for a static HN IP configuration, variant 2 seems to be acceptable (and the simplest).&lt;br /&gt;
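For reference, variant 3 could be implemented by parsing the bridge's address on CT0. The sketch below parses a captured sample line of &amp;lt;code&amp;gt;ip -4 -o addr show dev br0&amp;lt;/code&amp;gt; output; the sample line is assumed, and a live script would run the command itself instead of using a stored string:&lt;br /&gt;

```shell
# Parse the first IPv4 address out of one-line `ip -4 -o addr` output.
# The sample below mimics the output for br0 configured as 10.0.0.2/24.
sample='4: br0    inet 10.0.0.2/24 brd 10.0.0.255 scope global br0'
# Field 4 is the CIDR address; strip the prefix length after '/'.
CT0_IP=$(echo "$sample" | awk '{print $4}' | cut -d/ -f1)
echo "$CT0_IP"
```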
&lt;br /&gt;
== An OpenVZ Hardware Node has two Ethernet interfaces ==&lt;br /&gt;
Assume you have two interfaces, eth0 and eth1, and want to separate local traffic (10.0.0.0/24) from external traffic.&lt;br /&gt;
Let's assign eth0 to the external traffic and eth1 to the local one.&lt;br /&gt;
&lt;br /&gt;
If there is no need to make the container accessible from the HN and vice versa, it's enough to replace 'br0' with 'eth1' in the following steps of the above configuration:&lt;br /&gt;
* Hardware Node configuration → [[Using_private_IPs_for_Hardware_Nodes#Assign_the_IP_to_the_bridge|Assign the IP to the bridge]]&lt;br /&gt;
* Hardware Node configuration → [[Using_private_IPs_for_Hardware_Nodes#Resurrect_the_default_routing|Resurrect the default routing]]&lt;br /&gt;
&lt;br /&gt;
It is necessary to set a local IP for 'br0' to ensure CT ↔ HN connection availability.&lt;br /&gt;
&lt;br /&gt;
== Putting containers to different subnetworks ==&lt;br /&gt;
It's enough to set up the correct $VETH_IP_ADDRESS and $VE_DEFAULT_GATEWAY values in the &lt;br /&gt;
[[Using_private_IPs_for_Hardware_Nodes#Edit_the_container.27s_configuration|above configuration]].&lt;br /&gt;
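For example, a hypothetical container 102 placed in a different subnet (all addresses invented for illustration) would carry a configuration section like:&lt;br /&gt;

```shell
# Network customization section of a hypothetical /etc/vz/conf/102.conf
VETH_IP_ADDRESS="192.168.5.10/24"
VE_DEFAULT_GATEWAY="192.168.5.1"
BRIDGEDEV="br0"
```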
&lt;br /&gt;
== See also ==&lt;br /&gt;
* [[Virtual network device]]&lt;br /&gt;
* [[Differences between venet and veth]]&lt;br /&gt;
&lt;br /&gt;
[[Category: HOWTO]]&lt;br /&gt;
[[Category: Networking]]&lt;/div&gt;</summary>
		<author><name>Finist</name></author>
		
	</entry>
	<entry>
		<id>https://wiki.openvz.org/index.php?title=Comparison&amp;diff=19989</id>
		<title>Comparison</title>
		<link rel="alternate" type="text/html" href="https://wiki.openvz.org/index.php?title=Comparison&amp;diff=19989"/>
		<updated>2016-08-11T07:27:46Z</updated>

		<summary type="html">&lt;p&gt;Finist: Virtuozzo 7 EOL - in 7 years&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Note|This comparison doesn't include Docker, because Docker is not a virtualization solution. It automates the deployment of applications inside software containers, by providing an additional layer of abstraction and automation of operating-system-level virtualization.&amp;lt;ref&amp;gt;[https://en.wikipedia.org/wiki/Docker_(software) Wikipedia article about Docker]&amp;lt;/ref&amp;gt;}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Disclaimer ==&lt;br /&gt;
&lt;br /&gt;
The information regarding [[Virtuozzo]] 7 is provided by [http://www.virtuozzo.com Virtuozzo]. Here is Virtuozzo's statement regarding this information:&lt;br /&gt;
&lt;br /&gt;
:#The information contained herein is intended to outline general product direction and should not be relied upon in making purchasing decisions.&lt;br /&gt;
:#The content is for informational purposes only and may not be incorporated into any contract.&lt;br /&gt;
:#The information presented is not a commitment, promise, or legal obligation to deliver any material, code or functionality.&lt;br /&gt;
:#Any references to the development, release, and timing of any features or functionality described for these products remains at Virtuozzo’s sole discretion.&lt;br /&gt;
:#Product capabilities, timeframes and features are subject to change and should not be viewed as Virtuozzo commitments.&lt;br /&gt;
&lt;br /&gt;
The information regarding all other solutions is taken by the authors from public sources only. This information can be changed by any OpenVZ Wiki user without notice or the authors' review or approval.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Feature comparison of different virtualization solutions ==&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable sortable&amp;quot;&lt;br /&gt;
! Feature&lt;br /&gt;
! Description&lt;br /&gt;
! OpenVZ&lt;br /&gt;
! Virtuozzo&amp;amp;nbsp;6 (PCS 6)&lt;br /&gt;
! OpenVZ&amp;amp;nbsp;7&lt;br /&gt;
! Virtuozzo&amp;amp;nbsp;7&lt;br /&gt;
! LXC&lt;br /&gt;
! Proxmox VE&lt;br /&gt;
! Microsoft Hyper-V 2012 R2&lt;br /&gt;
! RHEV 3.5&lt;br /&gt;
! Citrix XenServer 6.5&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot; style=&amp;quot;font-style:bold;background-color:gold;&amp;quot;|1. Virtualization platform&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot; align=&amp;quot;left&amp;quot;|1.1. Overview&lt;br /&gt;
|-&lt;br /&gt;
|'''HW virtualization support (Hypervisor)'''&lt;br /&gt;
|Full emulation of the underlying hardware: fully isolated guest environment, no dependency on the host OS, overhead for the hypervisor layer.&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
|'''OS-level virtualization (Containers)'''&lt;br /&gt;
|Sharing the same instance of the host OS: high density, high performance, high dependency on the host OS.&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Hypervisor technology'''&lt;br /&gt;
|Technology that enables running Virtual Machines.&lt;br /&gt;
|None&lt;br /&gt;
|Parallels Desktop Monitor&lt;br /&gt;
|KVM&lt;br /&gt;
|KVM&lt;br /&gt;
|None&lt;br /&gt;
|KVM&lt;br /&gt;
|Hyper-V&lt;br /&gt;
|KVM&lt;br /&gt;
|Xen&lt;br /&gt;
|-&lt;br /&gt;
|'''Windows guest OS additional support'''&lt;br /&gt;
|WHQL-signed drivers, SVVP certification&lt;br /&gt;
|None&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|None&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Containers technology'''&lt;br /&gt;
|Technology that enables running Containers.&lt;br /&gt;
|Virtuozzo Containers&lt;br /&gt;
|Virtuozzo Containers with enhancements&lt;br /&gt;
|Virtuozzo Containers with enhancements&lt;br /&gt;
|Virtuozzo Containers with enhancements&lt;br /&gt;
|Linux containers&lt;br /&gt;
|LXC (moved from OpenVZ since 4.0)&lt;br /&gt;
|None&lt;br /&gt;
|None&lt;br /&gt;
|None&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot; align=&amp;quot;left&amp;quot;|1.2. Memory&lt;br /&gt;
|-&lt;br /&gt;
|'''Memory Overcommit'''&lt;br /&gt;
|Ability to present more memory to virtual machines than physically available &lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Page sharing'''&lt;br /&gt;
|Memory (RAM) savings through sharing identical memory pages across virtual machines&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}, only for CTs&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot; style=&amp;quot;font-style:bold;background-color:gold;&amp;quot;|2. Management&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot;|2.1. General&lt;br /&gt;
|-&lt;br /&gt;
|'''Unified management tool for CTs and VMs'''&lt;br /&gt;
|Single tool for managing both containers and virtual machines (if applicable)&lt;br /&gt;
|None&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|None&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|None&lt;br /&gt;
|None&lt;br /&gt;
|None&lt;br /&gt;
|-&lt;br /&gt;
|'''OpenStack integration'''&lt;br /&gt;
|Integration with OpenStack components ([http://docs.openstack.org/developer/nova/support-matrix.html see details])&lt;br /&gt;
|{{Yes}}, only Nova&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Central Management tool'''&lt;br /&gt;
|Is centralized multi-server management available for this edition?&lt;br /&gt;
|{{Yes}}, 3rd party&lt;br /&gt;
|{{Yes}}, Parallels Virtual Automation (PVA)&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}, Virtual Automator (coming soon)&lt;br /&gt;
|{{Yes}}, 3rd party&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}, System Center Virtual Machine Manager&lt;br /&gt;
|{{Yes}}, RHEV Manager&lt;br /&gt;
|{{Yes}}, XenCenter&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot;|2.2. Upgrade &amp;amp; Backup&lt;br /&gt;
|-&lt;br /&gt;
|'''Update Management'''&lt;br /&gt;
|Integrated patching mechanism for the virtual environments (Guest OS) / guest tools / templates&lt;br /&gt;
|No integrated update, YUM (Linux)&lt;br /&gt;
|No integrated update, YUM (Linux), WSUS (Windows)&lt;br /&gt;
|No integrated update, YUM (Linux), WSUS (Windows)&lt;br /&gt;
|No integrated update, YUM (Linux), WSUS (Windows)&lt;br /&gt;
|No integrated update, APT (Linux)&lt;br /&gt;
|No integrated update, YUM (Linux), WSUS (Windows)&lt;br /&gt;
|Yes (WSUS, SCCM, Virtual Machine Servicing Tool 2012 for offline VM update)&lt;br /&gt;
|No integrated update, YUM (Linux), WSUS (Windows)&lt;br /&gt;
|No integrated update, YUM (Linux), WSUS (Windows)&lt;br /&gt;
|-&lt;br /&gt;
|'''Live VE snapshot'''&lt;br /&gt;
|Ability to take a snapshot of a virtual environment while the guest OS is running (e.g. for roll-back or backup purposes)&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|Offline, CRIU support is planned&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Integrated Backup'''&lt;br /&gt;
|Are backup plugins/tools provided to back up virtual environments (over and above the ability to perform classic backup using agents in the guests)?&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Backup Integration API'''&lt;br /&gt;
|Integration with 3rd party backup applications for backup of the virtual environment.&lt;br /&gt;
|{{No}} (only through snapshots)&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}} (only through snapshots)&lt;br /&gt;
|{{Yes}} (vzdump)&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot;|2.3. Others&lt;br /&gt;
|-&lt;br /&gt;
|'''VEs Templates (VM, CT)'''&lt;br /&gt;
|Ability to create and store master images and deploy virtual machines from them&lt;br /&gt;
|{{Yes}} (CT only)&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}} (OpenVZ templates)&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
|'''P2V migration'''&lt;br /&gt;
|Integrated or added P2V (or V2V) capability in order to convert physical systems to virtual environments.&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}, 3rd party tools&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot; style=&amp;quot;font-style:bold;background-color:gold;&amp;quot;|3. VE Mobility and HA&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot;|3.1. VE Mobility&lt;br /&gt;
|-&lt;br /&gt;
|'''Live Migration'''&lt;br /&gt;
|Ability to migrate virtual machines between hosts without perceived downtime&lt;br /&gt;
|{{Yes}}, but not with zero downtime&lt;br /&gt;
|{{Yes}}, Kernel-Level Migration&lt;br /&gt;
|{{Yes}}, CRIU&lt;br /&gt;
|{{Yes}}, CRIU&lt;br /&gt;
|Offline, CRIU support is planned&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
|'''DRS/Host Maintenance Mode'''&lt;br /&gt;
|Ability to put a host into maintenance mode, which automatically live-migrates all VEs onto other available hosts so that the host can be shut down safely&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Distributed Power Management'''&lt;br /&gt;
|Ability to automatically migrate VEs onto fewer hosts and power off unused capacity (hosts), waking systems back up when required&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot;|3.2. HA / DR&lt;br /&gt;
|-&lt;br /&gt;
|'''Cluster size'''&lt;br /&gt;
|Maximum number of hosts in the cluster/pool relationship and maximum number of VEs per cluster/pool (if specified)&lt;br /&gt;
|None&lt;br /&gt;
|32 hosts/cluster validated (100 hosts/cluster maximum) - PStorage limitation&lt;br /&gt;
|Not tested yet&lt;br /&gt;
|Not tested yet&lt;br /&gt;
|None&lt;br /&gt;
|32 nodes&lt;br /&gt;
|64 nodes&lt;br /&gt;
|200 nodes&lt;br /&gt;
|16 nodes&lt;br /&gt;
|-&lt;br /&gt;
|'''Integrated HA'''&lt;br /&gt;
|Recover virtual environment in case of host failures through restart on alternative hosts (downtime = restart time)&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Site Failover'''&lt;br /&gt;
|Integrated ability to (ideally live) migrate virtual machine data (virtual disk files) to different storage e.g. for array upgrades/migration and I/O management&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|Manual&lt;br /&gt;
|{{No}}&lt;br /&gt;
|Integrated Disaster Recovery - manual&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot; style=&amp;quot;font-style:bold;background-color:gold;&amp;quot;|4. Network and Storage&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot;|4.1. Storage&lt;br /&gt;
|-&lt;br /&gt;
|'''Supported Storage'''&lt;br /&gt;
|Supported types of Storage (DAS, NAS or SAN)&lt;br /&gt;
|DAS (EXT4)&lt;br /&gt;
|NAS (NFS), DAS (EXT4)&lt;br /&gt;
|DAS (EXT4)&lt;br /&gt;
|NAS (NFS), DAS (EXT4)&lt;br /&gt;
|NAS (NFS), DAS (EXT4)&lt;br /&gt;
|DAS, NAS (NFS, ZFS), SAN (iSCSI), Ceph &lt;br /&gt;
|DAS, NAS (SMB), SAN (iSCSI, FC, FCoE)&lt;br /&gt;
|DAS, NAS (NFS), SAN (iSCSI, FC, FCoE)&lt;br /&gt;
|DAS, NAS (NFS), SAN (iSCSI, FC, FCoE)&lt;br /&gt;
|-&lt;br /&gt;
|'''Virtual Disk Format'''&lt;br /&gt;
|Supported format(s) of the virtual disks for the virtual machines&lt;br /&gt;
|CT - [[ploop]]&lt;br /&gt;
|CT - [[ploop]], VM - [[ploop]]&lt;br /&gt;
|CT - [[ploop]], VM - [[ploop]]\Qcow2&lt;br /&gt;
|CT - [[ploop]], VM - [[ploop]]\Qcow2&lt;br /&gt;
|Any&lt;br /&gt;
|qcow2, vmdk, raw&lt;br /&gt;
|vhdx, vhd, pass-through (raw)&lt;br /&gt;
|Qcow2, raw disk&lt;br /&gt;
|vhd, raw disk&lt;br /&gt;
|-&lt;br /&gt;
|'''Thin Disk Provisioning'''&lt;br /&gt;
|Ability to over-commit overall disk space by dynamically growing the size of virtual disks based on actual usage rather than pre-allocating full size.&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}, depends on disk format (dm-thin)&lt;br /&gt;
|{{Yes}} (depends on underlying storage driver)&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Virtual SAN'''&lt;br /&gt;
|Enhanced storage capability, e.g. providing a virtual SAN through virtualized 'local' storage&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}, Virtuozzo Storage&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}, Virtuozzo Storage&lt;br /&gt;
|{{Yes}}, but 3rd party (DRBD 9, Ceph, GlusterFS)&lt;br /&gt;
|{{Yes}}, but 3rd party (DRBD 9, Ceph, GlusterFS, sheepdog)&lt;br /&gt;
|{{Yes}}, Storage Spaces&lt;br /&gt;
|{{Yes}}, Red Hat Storage&lt;br /&gt;
|{{No}}&lt;br /&gt;
|-&lt;br /&gt;
|'''Storage QoS'''&lt;br /&gt;
|Ability to control Quality of Service for Storage I/O or Throughput for CT/VM&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}, VMs only&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot;|4.2. Network&lt;br /&gt;
|-&lt;br /&gt;
|'''Advanced Network Switch'''&lt;br /&gt;
|Centralized virtual network configuration (rather than managing virtual switches on individual hosts), typically with enhanced networking capabilities&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}, Open vSwitch support&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|OpenStack Neutron Integration&lt;br /&gt;
|Open vSwitch integration&lt;br /&gt;
|-&lt;br /&gt;
|'''Network QoS'''&lt;br /&gt;
|Ability to control Quality of Service for network traffic for CT/VM&lt;br /&gt;
|Only bandwidth limits&lt;br /&gt;
|Only bandwidth limits&lt;br /&gt;
|Only bandwidth limits&lt;br /&gt;
|Only bandwidth limits&lt;br /&gt;
|Only bandwidth limits&lt;br /&gt;
|{{Yes}}, with Open vSwitch &lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot; style=&amp;quot;font-style:bold;background-color:gold;&amp;quot;|5. Others – most of the features are relevant only for Virtuozzo editions&lt;br /&gt;
|-&lt;br /&gt;
|'''Memory deduplication for binary files'''&lt;br /&gt;
|Memory and IOPS deduplication management that enables/disables caching for Container directories and files, verifies cache integrity, checks Containers for cache errors, and purges the cache if needed&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}, pfcache&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}, pfcache&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|None&lt;br /&gt;
|None&lt;br /&gt;
|None&lt;br /&gt;
|-&lt;br /&gt;
|'''Completely isolated disk subsystem for CTs'''&lt;br /&gt;
|&lt;br /&gt;
|{{Yes}}, ploop&lt;br /&gt;
|{{Yes}}, ploop&lt;br /&gt;
|{{Yes}}, ploop&lt;br /&gt;
|{{Yes}}, ploop&lt;br /&gt;
|{{Yes}}, with LVM&lt;br /&gt;
|{{Yes}}, LVM, ZFS, or loop devices&lt;br /&gt;
|None&lt;br /&gt;
|None&lt;br /&gt;
|None&lt;br /&gt;
|-&lt;br /&gt;
|'''API\SDK'''&lt;br /&gt;
|&lt;br /&gt;
|OpenVZ API for Ruby, LibVirt&lt;br /&gt;
|Virtuozzo SDK, [[LibVirt]]&lt;br /&gt;
|Virtuozzo SDK, [[LibVirt]]&lt;br /&gt;
|Virtuozzo SDK, [[LibVirt]]&lt;br /&gt;
|LibLXC, API for Ruby, Python 2, Haskell, Go&lt;br /&gt;
|Proxmox VE uses a REST like API (JSON data format)&lt;br /&gt;
|Windows SDK&lt;br /&gt;
|RHEV-M API: REST API, SDKs&lt;br /&gt;
|XenAPI, XenServer SDKs&lt;br /&gt;
|-&lt;br /&gt;
|'''Image Catalog integration'''&lt;br /&gt;
|Integration with 3rd-party image catalog services of popular server applications and development environments that can be installed with one click.&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}} [https://virtuozzo.com/introducing-the-virtuozzo-application-catalog/ Virtuozzo Application Catalog]&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}} (Turnkey)&lt;br /&gt;
|None&lt;br /&gt;
|None&lt;br /&gt;
|None&lt;br /&gt;
|-&lt;br /&gt;
|'''Kernel maintenance'''&lt;br /&gt;
|Ability to upgrade kernel with minimal downtime.&lt;br /&gt;
|KernelCare service integration&lt;br /&gt;
|kernel rebootless update (vzreboot)&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}} [https://readykernel.com/ ReadyKernel Service]&lt;br /&gt;
|KernelCare service integration&lt;br /&gt;
|KernelCare service integration&lt;br /&gt;
|None&lt;br /&gt;
|KernelCare service integration&lt;br /&gt;
|KernelCare service integration&lt;br /&gt;
|-&lt;br /&gt;
|'''Power Panel'''&lt;br /&gt;
|A tool used for managing particular virtual machines and containers by their end users.&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|None&lt;br /&gt;
|None&lt;br /&gt;
|None&lt;br /&gt;
|None&lt;br /&gt;
|None&lt;br /&gt;
|-&lt;br /&gt;
|'''Secure for using in public networks'''&lt;br /&gt;
|&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&amp;lt;ref name=&amp;quot;LXC security&amp;quot;&amp;gt;[https://service.ait.ac.at/security/2015/LxcSecurityAnalysis.txt LXC Security Analysis]&amp;lt;/ref&amp;gt;, &amp;lt;ref name=&amp;quot;Security issues and mitigations with lxc&amp;quot;&amp;gt;[https://wiki.ubuntu.com/LxcSecurity Security issues and mitigations with LXC]&amp;lt;/ref&amp;gt;&lt;br /&gt;
|{{No}}&amp;lt;ref name=&amp;quot;LXC security&amp;quot;&amp;gt;[https://service.ait.ac.at/security/2015/LxcSecurityAnalysis.txt LXC Security Analysis]&amp;lt;/ref&amp;gt;, &amp;lt;ref name=&amp;quot;Security issues and mitigations with lxc&amp;quot;&amp;gt;[https://wiki.ubuntu.com/LxcSecurity Security issues and mitigations with LXC]&amp;lt;/ref&amp;gt;&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|-&lt;br /&gt;
!colspan=&amp;quot;11&amp;quot; style=&amp;quot;font-style:bold;background-color:gold;&amp;quot;|6. Commercial&lt;br /&gt;
|-&lt;br /&gt;
|'''Open Source'''&lt;br /&gt;
|&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}} (but there is an Open Source edition, oVirt)&lt;br /&gt;
|{{No}} (but there is an Open Source edition)&lt;br /&gt;
|-&lt;br /&gt;
|'''License\Subscription'''&lt;br /&gt;
|&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{No}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}&lt;br /&gt;
|{{Yes}}, Enterprise Edition&lt;br /&gt;
|-&lt;br /&gt;
|'''Support'''&lt;br /&gt;
|&lt;br /&gt;
|Both community and commercial support&lt;br /&gt;
|Commercial support&lt;br /&gt;
|Community support&lt;br /&gt;
|Commercial Support&lt;br /&gt;
|Yes, Canonical Ltd.&lt;br /&gt;
|Both community and commercial support&lt;br /&gt;
|Commercial support&lt;br /&gt;
|Commercial support&lt;br /&gt;
|Both community and commercial support&lt;br /&gt;
|-&lt;br /&gt;
|'''EOL policy'''&lt;br /&gt;
|&lt;br /&gt;
|[[Releases|5 years of support]]&lt;br /&gt;
|[https://virtuozzo.com/support/server-lifecycle/ 7 years of support]&lt;br /&gt;
|TBD&lt;br /&gt;
|[https://virtuozzo.com/support/server-lifecycle/ 7 years of support]&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|[https://support.microsoft.com/en-us/lifecycle/search/default.aspx?alpha=hyper-v 11 years of support]&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Finist</name></author>
		
	</entry>
	<entry>
		<id>https://wiki.openvz.org/index.php?title=Docker_inside_CT_vz7&amp;diff=19783</id>
		<title>Docker inside CT vz7</title>
		<link rel="alternate" type="text/html" href="https://wiki.openvz.org/index.php?title=Docker_inside_CT_vz7&amp;diff=19783"/>
		<updated>2016-07-18T20:54:43Z</updated>

		<summary type="html">&lt;p&gt;Finist: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Since Virtuozzo 7 kernel 3.10.0-327.18.2.vz7.14.7 it is possible to run Docker inside containers. This article describes how.&lt;br /&gt;
&amp;lt;br&amp;gt;'''This page is applicable for Virtuozzo 7''' (for OpenVZ 6 see [[Docker inside CT | '''here''']]).&lt;br /&gt;
&lt;br /&gt;
== Prerequisites ==&lt;br /&gt;
&lt;br /&gt;
* Kernel 3.10.0-327.18.2.vz7.14.7 or later version&lt;br /&gt;
* Kernel modules '''veth''' and '''overlay''' loaded on host&lt;br /&gt;
&lt;br /&gt;
'''Note:''' if you use kernel &amp;gt;= 3.10.0-327.18.2.vz7.14.25, you need to allow using &amp;quot;overlayfs&amp;quot; inside a Virtuozzo Container:&lt;br /&gt;
 echo 1 &amp;gt; /proc/sys/fs/experimental_fs_enable&lt;br /&gt;
This is a temporary step; it will be dropped once overlayfs is proven to be absolutely safe to run in any vz7 Container.&lt;br /&gt;
&lt;br /&gt;
== Container tuning ==&lt;br /&gt;
&lt;br /&gt;
* Turn on the '''bridge''' feature to allow Docker to create bridged networks:&lt;br /&gt;
 vzctl set $veid --features bridge:on --save&lt;br /&gt;
* Set up a veth-based network for the Container (it must be '''veth'''-based, not '''venet'''-based):&lt;br /&gt;
 vzctl set $veid --netif_add eth0 --save&lt;br /&gt;
* Allow all iptables modules to be used in containers:&lt;br /&gt;
 vzctl set $veid --netfilter full --save&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Limitations ==&lt;br /&gt;
&lt;br /&gt;
* Only '''overlay''' and '''vfs''' Docker graph drivers are currently supported&lt;br /&gt;
* [[Checkpointing and live migration]] of a container with Docker containers inside is not supported yet (to be done)&lt;br /&gt;
&lt;br /&gt;
[[Category:HOWTO]]&lt;br /&gt;
[[Category: TRD]]&lt;/div&gt;</summary>
		<author><name>Finist</name></author>
		
	</entry>
	<entry>
		<id>https://wiki.openvz.org/index.php?title=Docker_inside_CT_vz7&amp;diff=19782</id>
		<title>Docker inside CT vz7</title>
		<link rel="alternate" type="text/html" href="https://wiki.openvz.org/index.php?title=Docker_inside_CT_vz7&amp;diff=19782"/>
		<updated>2016-07-18T20:51:52Z</updated>

		<summary type="html">&lt;p&gt;Finist: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Since Virtuozzo 7 kernel 3.10.0-327.18.2.vz7.14.7 it is possible to run Docker inside containers. This article describes how.&lt;br /&gt;
&amp;lt;br&amp;gt;'''This page is applicable for Virtuozzo 7''' (for OpenVZ 6 see [[Docker inside CT | '''here''']]).&lt;br /&gt;
&lt;br /&gt;
== Prerequisites ==&lt;br /&gt;
&lt;br /&gt;
* Kernel 3.10.0-327.18.2.vz7.14.7 or later version&lt;br /&gt;
* Kernel modules '''veth''' and '''overlay''' loaded on host&lt;br /&gt;
&lt;br /&gt;
Note: if you use kernel &amp;gt;= 3.10.0-327.18.2.vz7.14.25, you need to allow using &amp;quot;overlayfs&amp;quot; inside a Virtuozzo Container:&lt;br /&gt;
 echo 1 &amp;gt; /proc/sys/fs/experimental_fs_enable&lt;br /&gt;
&lt;br /&gt;
== Container tuning ==&lt;br /&gt;
&lt;br /&gt;
* Turn on the '''bridge''' feature to allow Docker to create bridged networks:&lt;br /&gt;
 vzctl set $veid --features bridge:on --save&lt;br /&gt;
* Set up a veth-based network for the Container (it must be '''veth'''-based, not '''venet'''-based):&lt;br /&gt;
 vzctl set $veid --netif_add eth0 --save&lt;br /&gt;
* Allow all iptables modules to be used in containers:&lt;br /&gt;
 vzctl set $veid --netfilter full --save&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Limitations ==&lt;br /&gt;
&lt;br /&gt;
* Only '''overlay''' and '''vfs''' Docker graph drivers are currently supported&lt;br /&gt;
* [[Checkpointing and live migration]] of a container with Docker containers inside is not supported yet (to be done)&lt;br /&gt;
&lt;br /&gt;
[[Category:HOWTO]]&lt;br /&gt;
[[Category: TRD]]&lt;/div&gt;</summary>
		<author><name>Finist</name></author>
		
	</entry>
	<entry>
		<id>https://wiki.openvz.org/index.php?title=Docker_inside_CT&amp;diff=19781</id>
		<title>Docker inside CT</title>
		<link rel="alternate" type="text/html" href="https://wiki.openvz.org/index.php?title=Docker_inside_CT&amp;diff=19781"/>
		<updated>2016-07-18T20:51:04Z</updated>

		<summary type="html">&lt;p&gt;Finist: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Since OpenVZ kernel [[Download/kernel/rhel6-testing/042stab105.4|042stab105.4]] it is possible to run Docker inside containers. This article describes how.&lt;br /&gt;
&amp;lt;br&amp;gt;'''This page is applicable for OpenVZ 6''' (for Virtuozzo 7 see [[Docker inside CT vz7| '''here''']]).&lt;br /&gt;
&lt;br /&gt;
== Prerequisites ==&lt;br /&gt;
&lt;br /&gt;
* Kernel 042stab105.4 or later version&lt;br /&gt;
* Kernel modules '''tun''', '''veth''' and '''bridge''' loaded on host (not required since vzctl 4.9 as it loads it automatically)&lt;br /&gt;
&lt;br /&gt;
== Container creation and tuning ==&lt;br /&gt;
&lt;br /&gt;
* Create CentOS 7 container with enough disk space:&lt;br /&gt;
 vzctl create $veid --ostemplate centos-7-x86_64 --diskspace 20G&lt;br /&gt;
* Turn on the bridge feature to allow Docker to create bridged networks:&lt;br /&gt;
 vzctl set $veid --features bridge:on --save&lt;br /&gt;
* Set up a veth-based network for the Container:&lt;br /&gt;
 vzctl set $veid --netif_add eth0 --save&lt;br /&gt;
* Allow all iptables modules to be used in containers:&lt;br /&gt;
 vzctl set $veid --netfilter full --save&lt;br /&gt;
* Enable tun device access for container:&lt;br /&gt;
 vzctl set $veid --devnodes net/tun:rw --save&lt;br /&gt;
* Configure custom cgroups in systemd:&lt;br /&gt;
: &amp;lt;small&amp;gt;''systemd reads /proc/cgroups and mounts all cgroups enabled there; however, it doesn't know about the restriction that only the freezer,devices and cpuacct,cpu,cpuset combinations can be mounted in a container, not freezer, cpu, etc. separately''&amp;lt;/small&amp;gt;&lt;br /&gt;
 vzctl mount $veid&lt;br /&gt;
 echo &amp;quot;JoinControllers=cpu,cpuacct,cpuset freezer,devices&amp;quot; &amp;gt;&amp;gt; /vz/root/$veid/etc/systemd/system.conf&lt;br /&gt;
* Start the container:&lt;br /&gt;
 vzctl start $veid&lt;br /&gt;
* If you use Debian Wheezy for your CT which does not support systemd, you can run:&lt;br /&gt;
 mount -t tmpfs tmpfs /sys/fs/cgroup&lt;br /&gt;
 mkdir /sys/fs/cgroup/freezer,devices&lt;br /&gt;
 mount -t cgroup cgroup /sys/fs/cgroup/freezer,devices -o freezer,devices&lt;br /&gt;
 mkdir /sys/fs/cgroup/cpu,cpuacct,cpuset&lt;br /&gt;
 mount -t cgroup cgroup /sys/fs/cgroup/cpu,cpuacct,cpuset/ -o cpu,cpuacct,cpuset&lt;br /&gt;
&lt;br /&gt;
== Prepare Docker in container == &lt;br /&gt;
&lt;br /&gt;
These steps are to be performed inside the container.&lt;br /&gt;
&lt;br /&gt;
* Install Docker:&lt;br /&gt;
 yum -y install docker-io&lt;br /&gt;
* Start the Docker daemon:&lt;br /&gt;
 docker -d -s vfs&lt;br /&gt;
or change the OPTIONS line in /etc/sysconfig/docker to:&lt;br /&gt;
 OPTIONS='--selinux-enabled -s vfs'&lt;br /&gt;
and&lt;br /&gt;
 service docker start&lt;br /&gt;
&lt;br /&gt;
== Example usage ==&lt;br /&gt;
&lt;br /&gt;
=== Wordpress ===&lt;br /&gt;
&lt;br /&gt;
Use Docker to start WordPress (the official, standard way).&lt;br /&gt;
&lt;br /&gt;
* Start a MySQL Docker container:&lt;br /&gt;
 docker run --name test-mysql -e MYSQL_ROOT_PASSWORD=123 -d mysql&lt;br /&gt;
* Start WordPress:&lt;br /&gt;
 docker run --name test-wordpress --link test-mysql:mysql -p 8080:80 -d wordpress&lt;br /&gt;
* Access the WordPress server via the container IP on port 8080: &amp;lt;pre&amp;gt;&amp;lt;nowiki&amp;gt;http://container_ip:8080&amp;lt;/nowiki&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Limitations ==&lt;br /&gt;
&lt;br /&gt;
* Only &amp;quot;vfs&amp;quot; Docker graph driver is currently supported&lt;br /&gt;
* [[Checkpointing and live migration]] of a container with Docker containers inside is not supported&lt;br /&gt;
* Bridges cannot be created inside Docker containers running inside an OpenVZ container&lt;br /&gt;
&lt;br /&gt;
== See also ==&lt;br /&gt;
* [http://www.youtube.com/watch?v=rh4oPpLtdYc Docker inside CT demo video].&lt;br /&gt;
&lt;br /&gt;
[[Category:HOWTO]]&lt;br /&gt;
[[Category: TRD]]&lt;/div&gt;</summary>
		<author><name>Finist</name></author>
		
	</entry>
	<entry>
		<id>https://wiki.openvz.org/index.php?title=Docker_inside_CT_vz7&amp;diff=19780</id>
		<title>Docker inside CT vz7</title>
		<link rel="alternate" type="text/html" href="https://wiki.openvz.org/index.php?title=Docker_inside_CT_vz7&amp;diff=19780"/>
		<updated>2016-07-18T20:49:16Z</updated>

		<summary type="html">&lt;p&gt;Finist: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Since Virtuozzo 7 kernel vzkernel-3.10.0-327.18.2.vz7.14.7 it is possible to run Docker inside containers. This article describes how.&lt;br /&gt;
&amp;lt;br&amp;gt;'''This page is applicable for Virtuozzo 7''' (for OpenVZ 6 see [[Docker inside CT | '''here''']]).&lt;br /&gt;
&lt;br /&gt;
== Prerequisites ==&lt;br /&gt;
&lt;br /&gt;
* Kernel 3.10.0-327.18.2.vz7.14.7 or later version&lt;br /&gt;
* Kernel modules '''veth''' and '''overlay''' loaded on host&lt;br /&gt;
&lt;br /&gt;
Note: if you use kernel &amp;gt;= 3.10.0-327.18.2.vz7.14.25, you need to allow using &amp;quot;overlayfs&amp;quot; inside a Virtuozzo Container:&lt;br /&gt;
 echo 1 &amp;gt; /proc/sys/fs/experimental_fs_enable&lt;br /&gt;
&lt;br /&gt;
== Container tuning ==&lt;br /&gt;
&lt;br /&gt;
* Turn on the '''bridge''' feature to allow Docker to create bridged networks:&lt;br /&gt;
 vzctl set $veid --features bridge:on --save&lt;br /&gt;
* Set up a veth-based network for the Container (it must be '''veth'''-based, not '''venet'''-based):&lt;br /&gt;
 vzctl set $veid --netif_add eth0 --save&lt;br /&gt;
* Allow all iptables modules to be used in containers:&lt;br /&gt;
 vzctl set $veid --netfilter full --save&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Limitations ==&lt;br /&gt;
&lt;br /&gt;
* Only '''overlay''' and '''vfs''' Docker graph drivers are currently supported&lt;br /&gt;
* [[Checkpointing and live migration]] of a container with Docker containers inside is not supported yet (to be done)&lt;br /&gt;
&lt;br /&gt;
[[Category:HOWTO]]&lt;br /&gt;
[[Category: TRD]]&lt;/div&gt;</summary>
		<author><name>Finist</name></author>
		
	</entry>
	<entry>
		<id>https://wiki.openvz.org/index.php?title=Docker_inside_CT_vz7&amp;diff=19779</id>
		<title>Docker inside CT vz7</title>
		<link rel="alternate" type="text/html" href="https://wiki.openvz.org/index.php?title=Docker_inside_CT_vz7&amp;diff=19779"/>
		<updated>2016-07-18T16:36:28Z</updated>

		<summary type="html">&lt;p&gt;Finist: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Since Virtuozzo 7 kernel vzkernel-3.10.0-327.18.2.vz7.14.7 it is possible to run Docker inside containers. This article describes how.&lt;br /&gt;
&amp;lt;br&amp;gt;'''This page is applicable for Virtuozzo 7.'''&lt;br /&gt;
&lt;br /&gt;
== Prerequisites ==&lt;br /&gt;
&lt;br /&gt;
* Kernel 3.10.0-327.18.2.vz7.14.7 or later version&lt;br /&gt;
* Kernel modules '''veth''' and '''overlay''' loaded on host&lt;br /&gt;
&lt;br /&gt;
Note: if you use kernel &amp;gt;= 3.10.0-327.18.2.vz7.14.25, you need to allow using &amp;quot;overlayfs&amp;quot; inside a Virtuozzo Container:&lt;br /&gt;
 echo 1 &amp;gt; /proc/sys/fs/experimental_fs_enable&lt;br /&gt;
&lt;br /&gt;
== Container tuning ==&lt;br /&gt;
&lt;br /&gt;
* Turn on the '''bridge''' feature to allow Docker to create bridged networks:&lt;br /&gt;
 vzctl set $veid --features bridge:on --save&lt;br /&gt;
* Set up a veth-based network for the Container (it must be '''veth'''-based, not '''venet'''-based):&lt;br /&gt;
 vzctl set $veid --netif_add eth0 --save&lt;br /&gt;
* Allow all iptables modules to be used in containers:&lt;br /&gt;
 vzctl set $veid --netfilter full --save&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Limitations ==&lt;br /&gt;
&lt;br /&gt;
* Only '''overlay''' and '''vfs''' Docker graph drivers are currently supported&lt;br /&gt;
* [[Checkpointing and live migration]] of a container with Docker containers inside is not supported yet (to be done)&lt;br /&gt;
&lt;br /&gt;
== See also ==&lt;br /&gt;
* [[Docker inside CT]] page for running Docker Containers inside OpenVZ 6 Containers&lt;br /&gt;
&lt;br /&gt;
[[Category:HOWTO]]&lt;br /&gt;
[[Category: TRD]]&lt;/div&gt;</summary>
		<author><name>Finist</name></author>
		
	</entry>
	<entry>
		<id>https://wiki.openvz.org/index.php?title=Docker_inside_CT_vz7&amp;diff=19778</id>
		<title>Docker inside CT vz7</title>
		<link rel="alternate" type="text/html" href="https://wiki.openvz.org/index.php?title=Docker_inside_CT_vz7&amp;diff=19778"/>
		<updated>2016-07-18T14:53:00Z</updated>

		<summary type="html">&lt;p&gt;Finist: Created page with &amp;quot;Since Virtuozzo 7 kernel vzkernel-3.10.0-327.18.2.vz7.14.7 it is possible to run Docker inside containers. This article describes how. (This page is applicable for Virtuozzo 7...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Since Virtuozzo 7 kernel vzkernel-3.10.0-327.18.2.vz7.14.7 it is possible to run Docker inside containers. This article describes how.&lt;br /&gt;
(This page is applicable for Virtuozzo 7.)&lt;br /&gt;
&lt;br /&gt;
== Prerequisites ==&lt;br /&gt;
&lt;br /&gt;
* Kernel 3.10.0-327.18.2.vz7.14.7 or later version&lt;br /&gt;
* Kernel modules '''veth''' and '''overlay''' loaded on host&lt;br /&gt;
&lt;br /&gt;
Note: if you use kernel &amp;gt;= 3.10.0-327.18.2.vz7.14.25, you need to allow using &amp;quot;overlayfs&amp;quot; inside a Virtuozzo Container:&lt;br /&gt;
 echo 1 &amp;gt; /proc/sys/fs/experimental_fs_enable&lt;br /&gt;
&lt;br /&gt;
== Container tuning ==&lt;br /&gt;
&lt;br /&gt;
* Turn on the bridge feature to allow Docker to create bridged networks:&lt;br /&gt;
 vzctl set $veid --features bridge:on --save&lt;br /&gt;
* Set up a veth-based network for the Container (it must be '''veth'''-based, not '''venet'''-based):&lt;br /&gt;
 vzctl set $veid --netif_add eth0 --save&lt;br /&gt;
* Allow all iptables modules to be used in containers:&lt;br /&gt;
 vzctl set $veid --netfilter full --save&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Limitations ==&lt;br /&gt;
&lt;br /&gt;
* Only '''overlay''' and '''vfs''' Docker graph drivers are currently supported&lt;br /&gt;
* [[Checkpointing and live migration]] of a container with Docker containers inside is not supported yet (to be done)&lt;br /&gt;
&lt;br /&gt;
== See also ==&lt;br /&gt;
* [[Docker inside CT]] page for running Docker Containers inside OpenVZ 6 Containers&lt;br /&gt;
&lt;br /&gt;
[[Category:HOWTO]]&lt;br /&gt;
[[Category: TRD]]&lt;/div&gt;</summary>
		<author><name>Finist</name></author>
		
	</entry>
	<entry>
		<id>https://wiki.openvz.org/index.php?title=File:Rhel7-kernel-plans.png&amp;diff=19480</id>
		<title>File:Rhel7-kernel-plans.png</title>
		<link rel="alternate" type="text/html" href="https://wiki.openvz.org/index.php?title=File:Rhel7-kernel-plans.png&amp;diff=19480"/>
		<updated>2016-04-19T16:00:43Z</updated>

		<summary type="html">&lt;p&gt;Finist: Finist uploaded a new version of File:Rhel7-kernel-plans.png&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Finist</name></author>
		
	</entry>
	<entry>
		<id>https://wiki.openvz.org/index.php?title=Download/kernel/rhel7-testing&amp;diff=19479</id>
		<title>Download/kernel/rhel7-testing</title>
		<link rel="alternate" type="text/html" href="https://wiki.openvz.org/index.php?title=Download/kernel/rhel7-testing&amp;diff=19479"/>
		<updated>2016-04-19T15:59:21Z</updated>

		<summary type="html">&lt;p&gt;Finist: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Virtuozzo}}&lt;br /&gt;
&lt;br /&gt;
We are currently porting the OpenVZ patchset to the RHEL7 kernel. Test kernel builds are available in [http://download.openvz.org/virtuozzo/releases/ our repositories].&lt;br /&gt;
&lt;br /&gt;
You can monitor the development progress by looking into [https://src.openvz.org/projects/OVZ/repos/vzkernel/commits RHEL7 source repository] and/or the [http://lists.openvz.org/pipermail/devel/ devel@ mailing list archives].&lt;br /&gt;
&lt;br /&gt;
== Plans ==&lt;br /&gt;
&lt;br /&gt;
[[Image:Rhel7-kernel-plans.png|1000px]]&lt;br /&gt;
&lt;br /&gt;
* [http://lists.openvz.org/pipermail/devel/2015-September/033081.html Virtuozzo 7 kernel branches and plans 20150908]&lt;br /&gt;
* [http://lists.openvz.org/pipermail/devel/2015-July/032692.html Virtuozzo 7 kernel branches and plans 20150731]&lt;br /&gt;
* [http://lists.openvz.org/pipermail/devel/2016-January/067684.html Virtuozzo 7 kernel branches and plans 20160119]&lt;br /&gt;
* [http://lists.openvz.org/pipermail/devel/2016-April/068402.html Virtuozzo 7 kernel branches and plans 20160419]&lt;br /&gt;
&lt;br /&gt;
== Contribute ==&lt;br /&gt;
&lt;br /&gt;
If you want to contribute to kernel development, see [[Kernel patches]] document.&lt;br /&gt;
&lt;br /&gt;
== External links ==&lt;br /&gt;
&lt;br /&gt;
* RHEL7 based kernel git repo: https://src.openvz.org/projects/OVZ/repos/vzkernel/commits&lt;br /&gt;
* devel@ mailing list subscription: http://lists.openvz.org/mailman/listinfo/devel/&lt;br /&gt;
* devel@ mailing list archives: http://lists.openvz.org/pipermail/devel/&lt;br /&gt;
* [[Packages|Virtuozzo kernel in Linux distributions]]&lt;br /&gt;
&lt;br /&gt;
[[Category:Virtuozzo]]&lt;/div&gt;</summary>
		<author><name>Finist</name></author>
		
	</entry>
	<entry>
		<id>https://wiki.openvz.org/index.php?title=Kernel_TODO&amp;diff=19471</id>
		<title>Kernel TODO</title>
		<link rel="alternate" type="text/html" href="https://wiki.openvz.org/index.php?title=Kernel_TODO&amp;diff=19471"/>
		<updated>2016-04-12T10:50:35Z</updated>

		<summary type="html">&lt;p&gt;Finist: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;translate&amp;gt;&lt;br /&gt;
=== OpenVZ/Virtuozzo 7 kernel TODO list === &amp;lt;!--T:1--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--T:2--&amp;gt;&lt;br /&gt;
{| class=&amp;quot;wikitable sortable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! bug id&lt;br /&gt;
! task&lt;br /&gt;
! complexity&lt;br /&gt;
! potential/willing assignee&lt;br /&gt;
! comments&lt;br /&gt;
|-&lt;br /&gt;
|  || virtualize time inside a CT || medium ||  ||&lt;br /&gt;
|-&lt;br /&gt;
|  || virtualize AUDIT || hard ||  || it works on the host; make it work inside Containers as well&lt;br /&gt;
|-&lt;br /&gt;
| [https://bugs.openvz.org/browse/OVZ-5736 OVZ-5736] || ipset netfilter extension support || easy || || requested by Nick Knutov [https://lists.openvz.org/pipermail/users/2015-September/006547.html email link]&lt;br /&gt;
|-&lt;br /&gt;
| [https://bugs.openvz.org/browse/OVZ-2920 OVZ-2920] || fix GFS2 || ? || || initially it was reported for 2.6.32-x kernels, but makes sense to check on Virtuozzo 7 now&lt;br /&gt;
|-&lt;br /&gt;
| [https://bugs.openvz.org/browse/OVZ-6573 OVZ-6573] || immutable attr support || easy || || need to distinguish ploop and simfs and allow managing immutable attr inside a CT for ploop case only&lt;br /&gt;
|-&lt;br /&gt;
| [https://lists.openvz.org/pipermail/users/2015-November/006621.html email] || flashcache compilation || easy || || 2.6.32-x kernel only: flashcache 2.x compilation gets broken. Need to fix. Check flashcache 3.x compilation issues. Note: if you use Virtuozzo 7, use bcache, not flashcache.&lt;br /&gt;
|-&lt;br /&gt;
| [https://bugs.openvz.org/browse/OVZ-6659 OVZ-6659] || iptables ipt_owner module support inside a Container || medium || || Basic idea is trivial: rework existing attempt and apply. Next step: check the performance of the solution and rework if needed. And the last step: push to mainstream.&lt;br /&gt;
|-&lt;br /&gt;
| PSBM-45634 || add apparmor support inside a Container || medium || || &lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;/translate&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Category:Contributions]]&lt;/div&gt;</summary>
		<author><name>Finist</name></author>
		
	</entry>
	<entry>
		<id>https://wiki.openvz.org/index.php?title=Building_external_kernel_modules&amp;diff=19203</id>
		<title>Building external kernel modules</title>
		<link rel="alternate" type="text/html" href="https://wiki.openvz.org/index.php?title=Building_external_kernel_modules&amp;diff=19203"/>
		<updated>2016-02-06T10:13:32Z</updated>

		<summary type="html">&lt;p&gt;Finist: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This article describes how to build a kernel module that is not included in the stock Virtuozzo kernel.&amp;lt;br&amp;gt;&lt;br /&gt;
(This article applies to Virtuozzo 7)&lt;br /&gt;
&lt;br /&gt;
== Building a kernel module (*.ko) ==&lt;br /&gt;
&lt;br /&gt;
Here is an example of how to build the &amp;quot;via-rhine&amp;quot; kernel module, which is in the Virtuozzo kernel source tree but not enabled in the kernel config by default.&lt;br /&gt;
&lt;br /&gt;
 // You need to install some dev packages in advance (the list here may be incomplete).&lt;br /&gt;
 '''# yum install rpm-build gcc xmlto asciidoc hmaccalc python-devel newt-devel pesign'''&lt;br /&gt;
&lt;br /&gt;
 // If you are going to build a kernel module against some kernel, you need kernel headers for that kernel.&lt;br /&gt;
 // Assume you want to build a kernel module against currently running kernel.&lt;br /&gt;
 '''# yum install vzkernel-devel.x86_64'''&lt;br /&gt;
&lt;br /&gt;
 // Get sources of the module you'd like to build,&lt;br /&gt;
 // in this particular example the easiest way, I believe, is just to download the kernel src.rpm.&lt;br /&gt;
 '''# cd /tmp'''&lt;br /&gt;
 '''# wget https://download.openvz.org/virtuozzo/factory/source/SRPMS/v/vzkernel-3.10.0-327.3.1.vz7.10.10.src.rpm'''&lt;br /&gt;
 '''# rpm -ihv vzkernel-3.10.0-327.3.1.vz7.10.10.src.rpm'''&lt;br /&gt;
 &lt;br /&gt;
 // &amp;quot;Prepare&amp;quot; source tree, it's not enough just to take the archive stored in it,&lt;br /&gt;
 // you need to apply additional patch(es), rpmbuild does this for us.&lt;br /&gt;
 '''# rpmbuild -bp /root/rpmbuild/SPECS/kernel.spec --nodeps'''&lt;br /&gt;
&lt;br /&gt;
 // Go to the module source directory.&lt;br /&gt;
 '''# cd /root/rpmbuild/BUILD/kernel-3.10.0-327.3.1.el7/linux-3.10.0-327.3.1.vz7.10.10/drivers/net/ethernet/via'''&lt;br /&gt;
&lt;br /&gt;
 // Edit the Makefile so you get the required kernel module compiled.&lt;br /&gt;
 // In this particular example the via-rhine compiles in-kernel by default, so we need to force it to be built as a module.&lt;br /&gt;
 '''# sed -ie 's/$(CONFIG_VIA_RHINE)/m/' Makefile'''&lt;br /&gt;
&lt;br /&gt;
 // Build and install the module.&lt;br /&gt;
 '''# make -C /lib/modules/`uname -r`/build M=$PWD'''&lt;br /&gt;
 '''# make -C /lib/modules/`uname -r`/build M=$PWD modules_install'''&lt;br /&gt;
&lt;br /&gt;
 // Check the module has been really copied and load it.&lt;br /&gt;
 '''# find /lib/modules -name \*rhine\*'''&lt;br /&gt;
 /lib/modules/3.10.0-327.3.1.vz7.10.10/extra/via-rhine.ko&lt;br /&gt;
 &lt;br /&gt;
 '''# modprobe via-rhine'''&lt;br /&gt;
 '''# lsmod |grep rhine'''&lt;br /&gt;
 via_rhine 32501 0&lt;br /&gt;
 mii 13934 1 via_rhine&lt;br /&gt;
&lt;br /&gt;
Here you are!&lt;br /&gt;
&lt;br /&gt;
{{Note|Your case is a bit more complicated? Read [https://www.kernel.org/doc/Documentation/kbuild/modules.txt Building External Modules]}}&lt;br /&gt;
&lt;br /&gt;
== Building a kernel module using Dynamic Kernel Module Support (DKMS) ==&lt;br /&gt;
TBD, you are welcome to put the description here. :)&lt;br /&gt;
&lt;br /&gt;
== Building a kernel module rpm package (kmod) ==&lt;br /&gt;
TBD, you are welcome to put the description here. :)&lt;br /&gt;
&lt;br /&gt;
--[[User:Finist|Finist]] ([[User talk:Finist|talk]]) 05:07, 6 February 2016 (EST)&lt;br /&gt;
&lt;br /&gt;
[[Category: HOWTO]]&lt;br /&gt;
[[Category: Kernel]]&lt;br /&gt;
[[Category: Installation]]&lt;/div&gt;</summary>
		<author><name>Finist</name></author>
		
	</entry>
	<entry>
		<id>https://wiki.openvz.org/index.php?title=Building_external_kernel_modules&amp;diff=19202</id>
		<title>Building external kernel modules</title>
		<link rel="alternate" type="text/html" href="https://wiki.openvz.org/index.php?title=Building_external_kernel_modules&amp;diff=19202"/>
		<updated>2016-02-06T10:09:10Z</updated>

		<summary type="html">&lt;p&gt;Finist: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This article describes how to build a kernel module that is not included in the stock Virtuozzo kernel.&amp;lt;br&amp;gt;&lt;br /&gt;
(This article applies to Virtuozzo 7)&lt;br /&gt;
&lt;br /&gt;
== Building a kernel module (*.ko) ==&lt;br /&gt;
&lt;br /&gt;
Here is an example of how to build the &amp;quot;via-rhine&amp;quot; kernel module, which is in the Virtuozzo kernel source tree but not enabled in the kernel config by default.&lt;br /&gt;
&lt;br /&gt;
 // You need to install some dev packages in advance (the list here may be incomplete).&lt;br /&gt;
 '''# yum install rpm-build gcc xmlto asciidoc hmaccalc python-devel newt-devel pesign'''&lt;br /&gt;
&lt;br /&gt;
 // If you are going to build a kernel module against some kernel, you need kernel headers for that kernel.&lt;br /&gt;
 // Assume you want to build a kernel module against currently running kernel.&lt;br /&gt;
 '''# yum install vzkernel-devel.x86_64'''&lt;br /&gt;
&lt;br /&gt;
 // Get sources of the module you'd like to build,&lt;br /&gt;
 // in this particular example the easiest way, I believe, is just to download the kernel src.rpm.&lt;br /&gt;
 '''# cd /tmp'''&lt;br /&gt;
 '''# wget https://download.openvz.org/virtuozzo/factory/source/SRPMS/v/vzkernel-3.10.0-327.3.1.vz7.10.10.src.rpm'''&lt;br /&gt;
 '''# rpm -ihv vzkernel-3.10.0-327.3.1.vz7.10.10.src.rpm'''&lt;br /&gt;
 &lt;br /&gt;
 // &amp;quot;Prepare&amp;quot; source tree, it's not enough just to take the archive stored in it,&lt;br /&gt;
 // you need to apply additional patch(es), rpmbuild does this for us.&lt;br /&gt;
 '''# rpmbuild -bp /root/rpmbuild/SPECS/kernel.spec --nodeps'''&lt;br /&gt;
&lt;br /&gt;
 // Go to the module source directory.&lt;br /&gt;
 '''# cd /root/rpmbuild/BUILD/kernel-3.10.0-327.3.1.el7/linux-3.10.0-327.3.1.vz7.10.10/drivers/net/ethernet/via'''&lt;br /&gt;
&lt;br /&gt;
 // Edit the Makefile so you get the required kernel module compiled.&lt;br /&gt;
 // In this particular example the via-rhine compiles in-kernel by default, so we need to force it to be built as a module.&lt;br /&gt;
 '''# sed -ie 's/$(CONFIG_VIA_RHINE)/m/' Makefile'''&lt;br /&gt;
&lt;br /&gt;
 // Build and install the module.&lt;br /&gt;
 '''# make -C /lib/modules/`uname -r`/build M=$PWD'''&lt;br /&gt;
 '''# make -C /lib/modules/`uname -r`/build M=$PWD modules_install'''&lt;br /&gt;
&lt;br /&gt;
 // Check the module has been really copied and load it.&lt;br /&gt;
 '''# find /lib/modules -name \*rhine\*'''&lt;br /&gt;
 /lib/modules/3.10.0-327.3.1.vz7.10.10/extra/via-rhine.ko&lt;br /&gt;
 &lt;br /&gt;
 '''# modprobe via-rhine'''&lt;br /&gt;
 '''# lsmod |grep rhine'''&lt;br /&gt;
 via_rhine 32501 0&lt;br /&gt;
 mii 13934 1 via_rhine&lt;br /&gt;
&lt;br /&gt;
Here you are!&lt;br /&gt;
&lt;br /&gt;
{{Note|Your case is a bit more complicated? Read [https://www.kernel.org/doc/Documentation/kbuild/modules.txt Building External Modules]}}&lt;br /&gt;
&lt;br /&gt;
== Building a kernel module using Dynamic Kernel Module Support (DKMS) ==&lt;br /&gt;
TBD, you are welcome to put the description here. :)&lt;br /&gt;
&lt;br /&gt;
== Building a kernel module rpm package (kmod) ==&lt;br /&gt;
TBD, you are welcome to put the description here. :)&lt;br /&gt;
&lt;br /&gt;
--[[User:Finist|Finist]] ([[User talk:Finist|talk]]) 05:07, 6 February 2016 (EST)&lt;br /&gt;
&lt;br /&gt;
[[Category: HOWTO]]&lt;br /&gt;
[[Category: Kernel]]&lt;br /&gt;
[[Category: Installation]]&lt;/div&gt;</summary>
		<author><name>Finist</name></author>
		
	</entry>
	<entry>
		<id>https://wiki.openvz.org/index.php?title=Building_external_kernel_modules&amp;diff=19201</id>
		<title>Building external kernel modules</title>
		<link rel="alternate" type="text/html" href="https://wiki.openvz.org/index.php?title=Building_external_kernel_modules&amp;diff=19201"/>
		<updated>2016-02-06T10:07:52Z</updated>

		<summary type="html">&lt;p&gt;Finist: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This article describes how to build a kernel module that is not included in the stock Virtuozzo kernel.&lt;br /&gt;
(This article applies to Virtuozzo 7.)&lt;br /&gt;
&lt;br /&gt;
== Building a kernel module (*.ko) ==&lt;br /&gt;
&lt;br /&gt;
Here is an example of how to build the &amp;quot;via-rhine&amp;quot; kernel module, which is in the Virtuozzo kernel source tree but not enabled in the kernel config by default.&lt;br /&gt;
&lt;br /&gt;
 // You need to install some dev packages in advance (the list here may be incomplete).&lt;br /&gt;
 '''# yum install rpm-build gcc xmlto asciidoc hmaccalc python-devel newt-devel pesign'''&lt;br /&gt;
&lt;br /&gt;
 // If you are going to build a kernel module against some kernel, you need kernel headers for that kernel.&lt;br /&gt;
 // Assume you want to build a kernel module against currently running kernel.&lt;br /&gt;
 '''# yum install vzkernel-devel.x86_64'''&lt;br /&gt;
&lt;br /&gt;
 // Get sources of the module you'd like to build,&lt;br /&gt;
 // in this particular example the easiest way, I believe, is just to download the kernel src.rpm.&lt;br /&gt;
 '''# cd /tmp'''&lt;br /&gt;
 '''# wget https://download.openvz.org/virtuozzo/factory/source/SRPMS/v/vzkernel-3.10.0-327.3.1.vz7.10.10.src.rpm'''&lt;br /&gt;
 '''# rpm -ihv vzkernel-3.10.0-327.3.1.vz7.10.10.src.rpm'''&lt;br /&gt;
 &lt;br /&gt;
 // &amp;quot;Prepare&amp;quot; source tree, it's not enough just to take the archive stored in it,&lt;br /&gt;
 // you need to apply additional patch(es), rpmbuild does this for us.&lt;br /&gt;
 '''# rpmbuild -bp /root/rpmbuild/SPECS/kernel.spec --nodeps'''&lt;br /&gt;
&lt;br /&gt;
 // Go to the module source directory.&lt;br /&gt;
 '''# cd /root/rpmbuild/BUILD/kernel-3.10.0-327.3.1.el7/linux-3.10.0-327.3.1.vz7.10.10/drivers/net/ethernet/via'''&lt;br /&gt;
&lt;br /&gt;
 // Edit the Makefile so you get the required kernel module compiled.&lt;br /&gt;
 // In this particular example the via-rhine compiles in-kernel by default, so we need to force it to be built as a module.&lt;br /&gt;
 '''# sed -ie 's/$(CONFIG_VIA_RHINE)/m/' Makefile'''&lt;br /&gt;
&lt;br /&gt;
 // Build and install the module.&lt;br /&gt;
 '''# make -C /lib/modules/`uname -r`/build M=$PWD'''&lt;br /&gt;
 '''# make -C /lib/modules/`uname -r`/build M=$PWD modules_install'''&lt;br /&gt;
&lt;br /&gt;
 // Check the module has been really copied and load it.&lt;br /&gt;
 '''# find /lib/modules -name \*rhine\*'''&lt;br /&gt;
 /lib/modules/3.10.0-327.3.1.vz7.10.10/extra/via-rhine.ko&lt;br /&gt;
 &lt;br /&gt;
 '''# modprobe via-rhine'''&lt;br /&gt;
 '''# lsmod |grep rhine'''&lt;br /&gt;
 via_rhine 32501 0&lt;br /&gt;
 mii 13934 1 via_rhine&lt;br /&gt;
&lt;br /&gt;
Here you are!&lt;br /&gt;
&lt;br /&gt;
{{Note|Your case is a bit more complicated? Read [https://www.kernel.org/doc/Documentation/kbuild/modules.txt Building External Modules]}}&lt;br /&gt;
&lt;br /&gt;
== Building a kernel module using Dynamic Kernel Module Support (DKMS) ==&lt;br /&gt;
TBD, you are welcome to put the description here. :)&lt;br /&gt;
&lt;br /&gt;
== Building a kernel module rpm package (kmod) ==&lt;br /&gt;
TBD, you are welcome to put the description here. :)&lt;br /&gt;
&lt;br /&gt;
--[[User:Finist|Finist]] ([[User talk:Finist|talk]]) 05:07, 6 February 2016 (EST)&lt;br /&gt;
&lt;br /&gt;
[[Category: HOWTO]]&lt;br /&gt;
[[Category: Kernel]]&lt;br /&gt;
[[Category: Installation]]&lt;/div&gt;</summary>
		<author><name>Finist</name></author>
		
	</entry>
	<entry>
		<id>https://wiki.openvz.org/index.php?title=Building_external_kernel_modules&amp;diff=19200</id>
		<title>Building external kernel modules</title>
		<link rel="alternate" type="text/html" href="https://wiki.openvz.org/index.php?title=Building_external_kernel_modules&amp;diff=19200"/>
		<updated>2016-02-06T09:55:31Z</updated>

		<summary type="html">&lt;p&gt;Finist: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This article describes how to build a kernel module that is not included in the stock Virtuozzo kernel.&lt;br /&gt;
(This article applies to Virtuozzo 7.)&lt;br /&gt;
&lt;br /&gt;
== Building a kernel module (*.ko) ==&lt;br /&gt;
&lt;br /&gt;
Here is an example of how to build the &amp;quot;via-rhine&amp;quot; kernel module, which is present in the Virtuozzo kernel source tree but not enabled in the kernel config by default.&lt;br /&gt;
&lt;br /&gt;
 // You need to install some dev packages in advance (the list here may be incomplete).&lt;br /&gt;
 '''# yum install rpm-build gcc xmlto asciidoc hmaccalc python-devel newt-devel pesign'''&lt;br /&gt;
&lt;br /&gt;
 // If you are going to build a kernel module against some kernel, you need kernel headers for that kernel.&lt;br /&gt;
 // Assume you want to build a kernel module against the currently running kernel.&lt;br /&gt;
 '''# yum install vzkernel-devel.x86_64'''&lt;br /&gt;
&lt;br /&gt;
 // Get the sources of the module you'd like to build;&lt;br /&gt;
 // in this particular example the easiest way, I believe, is just to download the kernel src.rpm.&lt;br /&gt;
 '''# cd /tmp'''&lt;br /&gt;
 '''# wget https://download.openvz.org/virtuozzo/factory/source/SRPMS/v/vzkernel-3.10.0-327.3.1.vz7.10.10.src.rpm'''&lt;br /&gt;
 '''# rpm -ihv vzkernel-3.10.0-327.3.1.vz7.10.10.src.rpm'''&lt;br /&gt;
 &lt;br /&gt;
 // &amp;quot;Prepare&amp;quot; the source tree; it's not enough just to unpack the archive stored in it,&lt;br /&gt;
 // you also need to apply additional patch(es), and rpmbuild does this for us.&lt;br /&gt;
 '''# rpmbuild -bp /root/rpmbuild/SPECS/kernel.spec --nodeps'''&lt;br /&gt;
&lt;br /&gt;
 // Go to the module source directory.&lt;br /&gt;
 '''# cd /root/rpmbuild/BUILD/kernel-3.10.0-327.3.1.el7/linux-3.10.0-327.3.1.vz7.10.10/drivers/net/ethernet/via'''&lt;br /&gt;
&lt;br /&gt;
 // Edit the Makefile so you get the required kernel module compiled.&lt;br /&gt;
 // In this particular example via-rhine is not enabled in the kernel config, so we need to force it to be built as a module.&lt;br /&gt;
 '''# sed -i -e 's/$(CONFIG_VIA_RHINE)/m/' Makefile'''&lt;br /&gt;
&lt;br /&gt;
 // Build and install the module.&lt;br /&gt;
 '''# make -C /lib/modules/`uname -r`/build M=$PWD'''&lt;br /&gt;
 '''# make -C /lib/modules/`uname -r`/build M=$PWD modules_install'''&lt;br /&gt;
&lt;br /&gt;
 // Check that the module has actually been installed, then load it.&lt;br /&gt;
 '''# find /lib/modules -name \*rhine\*'''&lt;br /&gt;
 /lib/modules/3.10.0-327.3.1.vz7.10.10/extra/via-rhine.ko&lt;br /&gt;
 &lt;br /&gt;
 '''# modprobe via-rhine'''&lt;br /&gt;
 '''# lsmod | grep rhine'''&lt;br /&gt;
 via_rhine 32501 0&lt;br /&gt;
 mii 13934 1 via_rhine&lt;br /&gt;
&lt;br /&gt;
That's it!&lt;br /&gt;
&lt;br /&gt;
{{Note|Is your case a bit more complicated? Read [https://www.kernel.org/doc/Documentation/kbuild/modules.txt Building External Modules]}}&lt;br /&gt;
&lt;br /&gt;
== Building a kernel module using Dynamic Kernel Module Support (DKMS) ==&lt;br /&gt;
TBD, you are welcome to put the description here. :)&lt;br /&gt;
&lt;br /&gt;
== Building a kernel module rpm package (kmod) ==&lt;br /&gt;
TBD, you are welcome to put the description here. :)&lt;br /&gt;
&lt;br /&gt;
[[Category: HOWTO]]&lt;br /&gt;
[[Category: Kernel]]&lt;br /&gt;
[[Category: Installation]]&lt;/div&gt;</summary>
		<author><name>Finist</name></author>
		
	</entry>
	<entry>
		<id>https://wiki.openvz.org/index.php?title=Building_external_kernel_modules&amp;diff=19199</id>
		<title>Building external kernel modules</title>
		<link rel="alternate" type="text/html" href="https://wiki.openvz.org/index.php?title=Building_external_kernel_modules&amp;diff=19199"/>
		<updated>2016-02-06T09:54:46Z</updated>

		<summary type="html">&lt;p&gt;Finist: Created page with &amp;quot;This article describes how to build an kernel module which is not included into the stock Virtuozzo kernel. (This article applies to Virtuozzo 7.)  == Building a kernel module...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This article describes how to build a kernel module that is not included in the stock Virtuozzo kernel.&lt;br /&gt;
(This article applies to Virtuozzo 7.)&lt;br /&gt;
&lt;br /&gt;
== Building a kernel module (*.ko) ==&lt;br /&gt;
&lt;br /&gt;
Here is an example of how to build the &amp;quot;via-rhine&amp;quot; kernel module, which is present in the Virtuozzo kernel source tree but not enabled in the kernel config by default.&lt;br /&gt;
&lt;br /&gt;
 // You need to install some dev packages in advance (the list here may be incomplete).&lt;br /&gt;
 '''# yum install rpm-build gcc xmlto asciidoc hmaccalc python-devel newt-devel pesign'''&lt;br /&gt;
&lt;br /&gt;
 // If you are going to build a kernel module against some kernel, you need kernel headers for that kernel.&lt;br /&gt;
 // Assume you want to build a kernel module against the currently running kernel.&lt;br /&gt;
 '''# yum install vzkernel-devel.x86_64'''&lt;br /&gt;
&lt;br /&gt;
 // Get the sources of the module you'd like to build;&lt;br /&gt;
 // in this particular example the easiest way, I believe, is just to download the kernel src.rpm.&lt;br /&gt;
 '''# cd /tmp'''&lt;br /&gt;
 '''# wget https://download.openvz.org/virtuozzo/factory/source/SRPMS/v/vzkernel-3.10.0-327.3.1.vz7.10.10.src.rpm'''&lt;br /&gt;
 '''# rpm -ihv vzkernel-3.10.0-327.3.1.vz7.10.10.src.rpm'''&lt;br /&gt;
 &lt;br /&gt;
 // &amp;quot;Prepare&amp;quot; the source tree; it's not enough just to unpack the archive stored in it,&lt;br /&gt;
 // you also need to apply additional patch(es), and rpmbuild does this for us.&lt;br /&gt;
 '''# rpmbuild -bp /root/rpmbuild/SPECS/kernel.spec --nodeps'''&lt;br /&gt;
&lt;br /&gt;
 // Go to the module source directory.&lt;br /&gt;
 '''# cd /root/rpmbuild/BUILD/kernel-3.10.0-327.3.1.el7/linux-3.10.0-327.3.1.vz7.10.10/drivers/net/ethernet/via'''&lt;br /&gt;
&lt;br /&gt;
 // Edit the Makefile so you get the required kernel module compiled.&lt;br /&gt;
 // In this particular example via-rhine is not enabled in the kernel config, so we need to force it to be built as a module.&lt;br /&gt;
 '''# sed -i -e 's/$(CONFIG_VIA_RHINE)/m/' Makefile'''&lt;br /&gt;
&lt;br /&gt;
 // Build and install the module.&lt;br /&gt;
 '''# make -C /lib/modules/`uname -r`/build M=$PWD'''&lt;br /&gt;
 '''# make -C /lib/modules/`uname -r`/build M=$PWD modules_install'''&lt;br /&gt;
&lt;br /&gt;
 // Check that the module has actually been installed, then load it.&lt;br /&gt;
 '''# find /lib/modules -name \*rhine\*'''&lt;br /&gt;
 /lib/modules/3.10.0-327.3.1.vz7.10.10/extra/via-rhine.ko&lt;br /&gt;
 &lt;br /&gt;
 '''# modprobe via-rhine'''&lt;br /&gt;
 '''# lsmod | grep rhine'''&lt;br /&gt;
 via_rhine 32501 0&lt;br /&gt;
 mii 13934 1 via_rhine&lt;br /&gt;
&lt;br /&gt;
That's it!&lt;br /&gt;
&lt;br /&gt;
{{Note|Is your case a bit more complicated? Read [https://www.kernel.org/doc/Documentation/kbuild/modules.txt Building External Modules]}}&lt;br /&gt;
&lt;br /&gt;
== Building a kernel module using Dynamic Kernel Module Support (DKMS) ==&lt;br /&gt;
TBD, you are welcome to put the description here. :)&lt;br /&gt;
&lt;br /&gt;
== Building a kernel module rpm package (kmod) ==&lt;br /&gt;
TBD, you are welcome to put the description here. :)&lt;/div&gt;</summary>
		<author><name>Finist</name></author>
		
	</entry>
	<entry>
		<id>https://wiki.openvz.org/index.php?title=Kernel_TODO&amp;diff=19190</id>
		<title>Kernel TODO</title>
		<link rel="alternate" type="text/html" href="https://wiki.openvz.org/index.php?title=Kernel_TODO&amp;diff=19190"/>
		<updated>2016-02-04T16:34:44Z</updated>

		<summary type="html">&lt;p&gt;Finist: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;translate&amp;gt;&lt;br /&gt;
=== OpenVZ/Virtuozzo 7 kernel TODO list === &amp;lt;!--T:1--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--T:2--&amp;gt;&lt;br /&gt;
{| class=&amp;quot;wikitable sortable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! bug id&lt;br /&gt;
! task&lt;br /&gt;
! complexity&lt;br /&gt;
! potential/willing assignee&lt;br /&gt;
! comments&lt;br /&gt;
|-&lt;br /&gt;
|  || virtualize time inside a CT || medium ||  ||&lt;br /&gt;
|-&lt;br /&gt;
|  || virtualize AUDIT || hard ||  || it works on the host; make it work inside Containers as well&lt;br /&gt;
|-&lt;br /&gt;
| [https://bugs.openvz.org/browse/OVZ-5736 OVZ-5736] || ipset netfilter extension support || easy || || requested by Nick Knutov [https://lists.openvz.org/pipermail/users/2015-September/006547.html email link]&lt;br /&gt;
|-&lt;br /&gt;
| [https://bugs.openvz.org/browse/OVZ-2920 OVZ-2920] || fix GFS2 || ? || || initially it was reported for 2.6.32-x kernels, but it makes sense to check on Virtuozzo 7 now&lt;br /&gt;
|-&lt;br /&gt;
| [https://bugs.openvz.org/browse/OVZ-6573 OVZ-6573] || immutable attr support || easy || || need to distinguish ploop and simfs and allow managing immutable attr inside a CT for ploop case only&lt;br /&gt;
|-&lt;br /&gt;
| [https://lists.openvz.org/pipermail/users/2015-November/006621.html email] || flashcache compilation || easy || || 2.6.32-x kernels only: flashcache 2.x compilation is broken and needs fixing; also check flashcache 3.x for compilation issues. Note: if you use Virtuozzo 7, use bcache, not flashcache.&lt;br /&gt;
|-&lt;br /&gt;
| [https://bugs.openvz.org/browse/OVZ-6659 OVZ-6659] || iptables ipt_owner module support inside a Container || medium || || Basic idea is trivial: rework existing attempt and apply. Next step: check the performance of the solution and rework if needed. And the last step: push to mainstream.&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;/translate&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Category:Contributions]]&lt;/div&gt;</summary>
		<author><name>Finist</name></author>
		
	</entry>
	<entry>
		<id>https://wiki.openvz.org/index.php?title=Download/kernel/rhel7-testing&amp;diff=19153</id>
		<title>Download/kernel/rhel7-testing</title>
		<link rel="alternate" type="text/html" href="https://wiki.openvz.org/index.php?title=Download/kernel/rhel7-testing&amp;diff=19153"/>
		<updated>2016-01-19T10:41:13Z</updated>

		<summary type="html">&lt;p&gt;Finist: /* Plans */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Virtuozzo}}&lt;br /&gt;
&lt;br /&gt;
We are currently porting the OpenVZ patchset to the RHEL7 kernel. Test kernel builds are available in [http://download.openvz.org/virtuozzo/releases/ our repositories].&lt;br /&gt;
&lt;br /&gt;
You can monitor the development progress by looking into [https://src.openvz.org/projects/OVZ/repos/vzkernel/commits RHEL7 source repository] and/or the [http://lists.openvz.org/pipermail/devel/ devel@ mailing list archives].&lt;br /&gt;
&lt;br /&gt;
== Plans ==&lt;br /&gt;
&lt;br /&gt;
[[Image:Rhel7-kernel-plans.png|1000px]]&lt;br /&gt;
&lt;br /&gt;
* [http://lists.openvz.org/pipermail/devel/2016-January/067684.html Virtuozzo 7 kernel branches and plans 20160119]&lt;br /&gt;
* [http://lists.openvz.org/pipermail/devel/2015-September/033081.html Virtuozzo 7 kernel branches and plans 20150908]&lt;br /&gt;
* [http://lists.openvz.org/pipermail/devel/2015-July/032692.html Virtuozzo 7 kernel branches and plans 20150731]&lt;br /&gt;
&lt;br /&gt;
== Contribute ==&lt;br /&gt;
&lt;br /&gt;
If you want to contribute to kernel development, see the [[Kernel patches]] document.&lt;br /&gt;
&lt;br /&gt;
== External links ==&lt;br /&gt;
&lt;br /&gt;
* RHEL7 based kernel git repo: https://src.openvz.org/projects/OVZ/repos/vzkernel/commits&lt;br /&gt;
* devel@ mailing list subscription: http://lists.openvz.org/mailman/listinfo/devel/&lt;br /&gt;
* devel@ mailing list archives: http://lists.openvz.org/pipermail/devel/&lt;br /&gt;
* [[Packages|Virtuozzo kernel in Linux distributions]]&lt;br /&gt;
&lt;br /&gt;
[[Category:Virtuozzo]]&lt;/div&gt;</summary>
		<author><name>Finist</name></author>
		
	</entry>
	<entry>
		<id>https://wiki.openvz.org/index.php?title=File:Rhel7-kernel-plans.png&amp;diff=19152</id>
		<title>File:Rhel7-kernel-plans.png</title>
		<link rel="alternate" type="text/html" href="https://wiki.openvz.org/index.php?title=File:Rhel7-kernel-plans.png&amp;diff=19152"/>
		<updated>2016-01-19T10:28:52Z</updated>

		<summary type="html">&lt;p&gt;Finist: Finist uploaded a new version of File:Rhel7-kernel-plans.png&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Finist</name></author>
		
	</entry>
	<entry>
		<id>https://wiki.openvz.org/index.php?title=Kernel_TODO&amp;diff=18011</id>
		<title>Kernel TODO</title>
		<link rel="alternate" type="text/html" href="https://wiki.openvz.org/index.php?title=Kernel_TODO&amp;diff=18011"/>
		<updated>2015-11-16T12:00:04Z</updated>

		<summary type="html">&lt;p&gt;Finist: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=== OpenVZ/Virtuozzo 7 kernel TODO list ===&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable sortable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! bug id&lt;br /&gt;
! task&lt;br /&gt;
! complexity&lt;br /&gt;
! potential/willing assignee&lt;br /&gt;
! comments&lt;br /&gt;
|-&lt;br /&gt;
|  || virtualize time inside a CT || medium ||  ||&lt;br /&gt;
|-&lt;br /&gt;
|  || virtualize AUDIT || hard ||  || it works on the host; make it work inside Containers as well&lt;br /&gt;
|-&lt;br /&gt;
| [https://bugs.openvz.org/browse/OVZ-5736 OVZ-5736] || ipset netfilter extension support || easy || || requested by Nick Knutov [https://lists.openvz.org/pipermail/users/2015-September/006547.html email link]&lt;br /&gt;
|-&lt;br /&gt;
| [https://bugs.openvz.org/browse/OVZ-2920 OVZ-2920] || fix GFS2 || ? || || initially it was reported for 2.6.32-x kernels, but it makes sense to check on Virtuozzo 7 now&lt;br /&gt;
|-&lt;br /&gt;
| [https://bugs.openvz.org/browse/OVZ-6573 OVZ-6573] || immutable attr support || easy || || need to distinguish ploop and simfs and allow managing immutable attr inside a CT for ploop case only&lt;br /&gt;
|-&lt;br /&gt;
| [https://lists.openvz.org/pipermail/users/2015-November/006621.html email] || flashcache compilation || easy || || 2.6.32-x kernels only: flashcache 2.x compilation is broken and needs fixing; also check flashcache 3.x for compilation issues. Note: if you use Virtuozzo 7, use bcache, not flashcache.&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
[[Category:Contributions]]&lt;/div&gt;</summary>
		<author><name>Finist</name></author>
		
	</entry>
	<entry>
		<id>https://wiki.openvz.org/index.php?title=Quick_installation&amp;diff=18007</id>
		<title>Quick installation</title>
		<link rel="alternate" type="text/html" href="https://wiki.openvz.org/index.php?title=Quick_installation&amp;diff=18007"/>
		<updated>2015-11-12T12:14:07Z</updated>

		<summary type="html">&lt;p&gt;Finist: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Virtuozzo}}&lt;br /&gt;
&lt;br /&gt;
{{Note|See [[Quick installation]] if you are looking to install the current stable version of OpenVZ.}}&lt;br /&gt;
&lt;br /&gt;
This document briefly describes the steps needed to install the Virtuozzo Linux distribution on your machine.&lt;br /&gt;
&lt;br /&gt;
There are a few ways to install Virtuozzo:&lt;br /&gt;
&lt;br /&gt;
=== Bare-metal installation ===&lt;br /&gt;
&lt;br /&gt;
OpenVZ project builds its own Linux distribution with both hypervisor and container virtualization.&lt;br /&gt;
It is based on the [https://www.cloudlinux.com/ CloudLinux] distribution, with the addition of [[Download/kernel/rhel7-testing|our custom kernel]], OpenVZ management utilities, [[QEMU]] and the Virtuozzo installer. Using this Virtuozzo installation image is the recommended way to run OpenVZ containers and virtual machines. See [[Virtuozzo]].&lt;br /&gt;
[http://download.openvz.org/virtuozzo/releases/7.0-beta2/x86_64/iso/ Download] the installation ISO image.&lt;br /&gt;
&lt;br /&gt;
=== Using Virtuozzo in the Vagrant box ===&lt;br /&gt;
&lt;br /&gt;
[https://www.vagrantup.com/ Vagrant] is a tool for creating reproducible and portable development environments.&lt;br /&gt;
It is easy to run a Virtuozzo environment using Vagrant:&lt;br /&gt;
&lt;br /&gt;
* Download and [https://docs.vagrantup.com/v2/installation/ install Vagrant]&lt;br /&gt;
* Download and install [https://www.virtualbox.org/wiki/Downloads Virtualbox], VMware Fusion or VMware Workstation&lt;br /&gt;
* Download [https://atlas.hashicorp.com/OpenVZ/boxes/Virtuozzo-7b2 Virtuozzo box]:&lt;br /&gt;
&lt;br /&gt;
   $ vagrant init OpenVZ/Virtuozzo-7b2&lt;br /&gt;
&lt;br /&gt;
* Run the box:&lt;br /&gt;
&lt;br /&gt;
   $ vagrant up --provider virtualbox&lt;br /&gt;
&lt;br /&gt;
or, in the case of a VMware hypervisor:&lt;br /&gt;
&lt;br /&gt;
   $ vagrant up --provider vmware_desktop&lt;br /&gt;
&lt;br /&gt;
* Attach to the console:&lt;br /&gt;
&lt;br /&gt;
   $ vagrant ssh&lt;br /&gt;
&lt;br /&gt;
* Use the ''openvz/openvz'' credentials to log in to the box&lt;br /&gt;
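The vagrant init step above generates a Vagrantfile in the current directory; a minimal hand-written equivalent might look like this (a sketch; only the box name comes from the steps above):&lt;br /&gt;

```ruby
# Minimal Vagrantfile for the Virtuozzo box (illustrative sketch).
Vagrant.configure("2") do |config|
  config.vm.box = "OpenVZ/Virtuozzo-7b2"
end
```

With this file in place, vagrant up picks the box and provider as described above.&lt;br /&gt;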
&lt;br /&gt;
=== Using Virtuozzo in the Amazon EC2 ===&lt;br /&gt;
&lt;br /&gt;
Follow steps in [[Using Virtuozzo in the Amazon EC2]].&lt;br /&gt;
&lt;br /&gt;
=== Setup on pre-installed Linux distribution ===&lt;br /&gt;
&lt;br /&gt;
{{Note|Pay attention: this installation method is currently blocked by a broken network after installation - {{OVZ|6454}}.}}&lt;br /&gt;
&lt;br /&gt;
Alternatively, one can install OpenVZ on a pre-installed RPM-based Linux distribution.&lt;br /&gt;
Supported Linux distributions: CloudLinux 7.*, CentOS 7.*, Scientific Linux 7.*, etc.&lt;br /&gt;
&lt;br /&gt;
Follow the step-by-step instructions below:&lt;br /&gt;
&lt;br /&gt;
The ''virtuozzo-release'' package brings in meta information and YUM repositories:&lt;br /&gt;
&lt;br /&gt;
   # yum localinstall http://download.openvz.org/virtuozzo/releases/7.0/x86_64/os/Packages/v/virtuozzo-release-7.0.0-10.vz7.x86_64.rpm&lt;br /&gt;
&lt;br /&gt;
Then install the mandatory Virtuozzo RPM packages:&lt;br /&gt;
&lt;br /&gt;
   # yum install -y prlctl prl-disp-service vzkernel&lt;br /&gt;
&lt;br /&gt;
See the OpenVZ [[Packages]] available in various Linux distributions.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== OpenVZ with upstream Linux kernel ===&lt;br /&gt;
&lt;br /&gt;
See the article [[OpenVZ with upstream kernel]] for more details about upstream kernel support.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Using Virtuozzo ==&lt;br /&gt;
&lt;br /&gt;
The [[screencasts]] page shows a demo of a few Virtuozzo commands. Feel free to add more.&lt;br /&gt;
&lt;br /&gt;
== See also ==&lt;br /&gt;
&lt;br /&gt;
* [https://docs.openvz.org/ Official Virtuozzo documentation]&lt;br /&gt;
&lt;br /&gt;
[[Category: Installation]]&lt;br /&gt;
[[Category: HOWTO]]&lt;/div&gt;</summary>
		<author><name>Finist</name></author>
		
	</entry>
	<entry>
		<id>https://wiki.openvz.org/index.php?title=Kernel_TODO&amp;diff=17999</id>
		<title>Kernel TODO</title>
		<link rel="alternate" type="text/html" href="https://wiki.openvz.org/index.php?title=Kernel_TODO&amp;diff=17999"/>
		<updated>2015-11-11T12:50:22Z</updated>

		<summary type="html">&lt;p&gt;Finist: /* OpenVZ/Virtuozzo 7 kernel TODO list */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=== OpenVZ/Virtuozzo 7 kernel TODO list ===&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable sortable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! bug id&lt;br /&gt;
! task&lt;br /&gt;
! complexity&lt;br /&gt;
! potential/willing assignee&lt;br /&gt;
! comments&lt;br /&gt;
|-&lt;br /&gt;
|  || virtualize time inside a CT || medium ||  ||&lt;br /&gt;
|-&lt;br /&gt;
|  || virtualize AUDIT || hard ||  || it works on the host; make it work inside Containers as well&lt;br /&gt;
|-&lt;br /&gt;
| [https://bugs.openvz.org/browse/OVZ-5736 OVZ-5736] || ipset netfilter extension support || easy || || requested by Nick Knutov [https://lists.openvz.org/pipermail/users/2015-September/006547.html email link]&lt;br /&gt;
|-&lt;br /&gt;
| [https://bugs.openvz.org/browse/OVZ-2920 OVZ-2920] || fix GFS2 || ? || || initially it was reported for 2.6.32-x kernels, but it makes sense to check on Virtuozzo 7 now&lt;br /&gt;
|-&lt;br /&gt;
| [https://bugs.openvz.org/browse/OVZ-6573 OVZ-6573] || immutable attr support || easy || || need to distinguish ploop and simfs and allow managing immutable attr inside a CT for ploop case only&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Finist</name></author>
		
	</entry>
	<entry>
		<id>https://wiki.openvz.org/index.php?title=Wishlist&amp;diff=17908</id>
		<title>Wishlist</title>
		<link rel="alternate" type="text/html" href="https://wiki.openvz.org/index.php?title=Wishlist&amp;diff=17908"/>
		<updated>2015-10-29T16:05:01Z</updated>

		<summary type="html">&lt;p&gt;Finist: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Stub}}&lt;br /&gt;
&lt;br /&gt;
If you have ideas and suggestions on new features and improvements that you would like to see and help bring to OpenVZ, you can:&lt;br /&gt;
&lt;br /&gt;
Write a specification - a short description of what feature or improvement you would like to implement, why, and how it should be implemented. Writing a good specification is an art, the finer points of which are discussed here.&lt;br /&gt;
Once you have written your specification, you will need to discuss it with the OpenVZ developers for inclusion in OpenVZ.&lt;br /&gt;
&lt;br /&gt;
The list below contains our current plans and TODO items:&lt;br /&gt;
&lt;br /&gt;
== Per project ==&lt;br /&gt;
&lt;br /&gt;
* [[LibCT]]&lt;br /&gt;
* [http://criu.org/Todo CRIU (Checkpoint and Restore in Userspace)]&lt;br /&gt;
* [[QEMU]]&lt;br /&gt;
* [[LibVirt]]&lt;br /&gt;
* [[Kernel_TODO]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Per activity ==&lt;br /&gt;
&lt;br /&gt;
=== Development ===&lt;br /&gt;
&lt;br /&gt;
* ZFS support (from community wishlist, see [http://openvz.livejournal.com/51718.html survey results]) - {{OVZ|6534}}&lt;br /&gt;
* Good Web UI (from community wishlist, see [http://openvz.livejournal.com/51718.html survey results]) - {{OVZ|6537}}&lt;br /&gt;
* Upstream kernel support and availability in Linux distributions (from community wishlist, see [http://openvz.livejournal.com/51718.html survey results]) - {{OVZ|6535}}&lt;br /&gt;
* Support of Debian 8.0 Jessie (from community wishlist, see [http://openvz.livejournal.com/51718.html survey results]) - {{OVZ|6536}}&lt;br /&gt;
* Setup regular [[static code analysis]] for OpenVZ components (Travis CI?)&lt;br /&gt;
* Integrate projects with [https://code.google.com/p/address-sanitizer/ Address Sanitizer]&lt;br /&gt;
&lt;br /&gt;
=== Software testing ===&lt;br /&gt;
&lt;br /&gt;
* See things to test in [[QA TODO list]]&lt;br /&gt;
* Add more unit tests to [https://src.openvz.org/projects/OVZ/repos/prl-disp-service/browse/Tests Parallels Dispatcher] code&lt;br /&gt;
&lt;br /&gt;
=== DevOps ===&lt;br /&gt;
&lt;br /&gt;
* Create packages of [https://src.openvz.org/projects/OVZ/repos/libprlsdk/browse libprlsdk] and [https://src.openvz.org/projects/OVZ/repos/prlctl/browse prlctl] for different Linux distributions. See [[Packages]].&lt;br /&gt;
* Automate [[Virtuozzo Storage]] installation. Something like [https://github.com/ceph/ceph-ansible Ansible playbook for CEPH]&lt;br /&gt;
* [https://github.com/ligurio/ansible-criu-environment Automate] CRIU development [http://criu.org/Installation environment]&lt;br /&gt;
* Automate CRIU [http://criu.org/cov/ code coverage] measuring process&lt;br /&gt;
* Automate [https://github.com/ligurio/openvz-playbooks OpenVZ infrastructure] (ask sergeyb@). [https://infrastructure.fedoraproject.org/cgit/ansible.git/ Something] like Fedora has.&lt;br /&gt;
&lt;br /&gt;
=== Design ===&lt;br /&gt;
&lt;br /&gt;
* [[Design tasks]]&lt;br /&gt;
* [[T-Shirt ideas]]&lt;br /&gt;
&lt;br /&gt;
[[Category:Contributions]]&lt;/div&gt;</summary>
		<author><name>Finist</name></author>
		
	</entry>
	<entry>
		<id>https://wiki.openvz.org/index.php?title=Kernel_TODO&amp;diff=17907</id>
		<title>Kernel TODO</title>
		<link rel="alternate" type="text/html" href="https://wiki.openvz.org/index.php?title=Kernel_TODO&amp;diff=17907"/>
		<updated>2015-10-29T15:59:14Z</updated>

		<summary type="html">&lt;p&gt;Finist: Created page with &amp;quot;=== OpenVZ/Virtuozzo 7 kernel TODO list ===  {| class=&amp;quot;wikitable sortable&amp;quot; |- ! bug id ! task ! complexity ! potential/willing assignee ! comments |- |  || virtualize time ins...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=== OpenVZ/Virtuozzo 7 kernel TODO list ===&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable sortable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! bug id&lt;br /&gt;
! task&lt;br /&gt;
! complexity&lt;br /&gt;
! potential/willing assignee&lt;br /&gt;
! comments&lt;br /&gt;
|-&lt;br /&gt;
|  || virtualize time inside a CT || medium ||  ||&lt;br /&gt;
|-&lt;br /&gt;
|  || virtualize AUDIT || hard ||  || it works on the host; make it work inside Containers as well&lt;br /&gt;
|-&lt;br /&gt;
| [https://bugs.openvz.org/browse/OVZ-5736 OVZ-5736] || ipset netfilter extension support || easy || || requested by Nick Knutov [https://lists.openvz.org/pipermail/users/2015-September/006547.html email link]&lt;br /&gt;
|-&lt;br /&gt;
| [https://bugs.openvz.org/browse/OVZ-2920 OVZ-2920] || fix GFS2 || ? || || initially it was reported for 2.6.32-x kernels, but it makes sense to check on Virtuozzo 7 now&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Finist</name></author>
		
	</entry>
	<entry>
		<id>https://wiki.openvz.org/index.php?title=Docker_inside_CT&amp;diff=15786</id>
		<title>Docker inside CT</title>
		<link rel="alternate" type="text/html" href="https://wiki.openvz.org/index.php?title=Docker_inside_CT&amp;diff=15786"/>
		<updated>2015-02-12T09:00:36Z</updated>

		<summary type="html">&lt;p&gt;Finist: /* Limitations */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Since OpenVZ kernel [[Download/kernel/rhel6-testing/042stab105.4|042stab105.4]] it is possible to run Docker inside containers. This article describes how.&lt;br /&gt;
&lt;br /&gt;
== Prerequisites ==&lt;br /&gt;
* Kernel 042stab105.4 or a later version&lt;br /&gt;
* The veth kernel module is loaded on the host&lt;br /&gt;
&lt;br /&gt;
== Container tuning ==&lt;br /&gt;
&lt;br /&gt;
* Create Fedora 20 container:&lt;br /&gt;
 vzctl create $veid --ostemplate fedora-20-x86_64&lt;br /&gt;
* Turn on the bridge feature to allow Docker to create a bridged network:&lt;br /&gt;
 vzctl set $veid --features bridge:on --save&lt;br /&gt;
* Set up the Container's veth-based network:&lt;br /&gt;
 vzctl set $veid --netif_add eth0 --save&lt;br /&gt;
* Allow all iptables modules to be used in containers:&lt;br /&gt;
 vzctl set $veid --netfilter full --save&lt;br /&gt;
* Configure custom cgroups in systemd:&lt;br /&gt;
: &amp;lt;small&amp;gt;''systemd reads /proc/cgroups and mounts all cgroups enabled there; it doesn't know about the restriction that only the combined freezer,devices and cpuacct,cpu,cpuset hierarchies can be mounted in a container, not freezer, cpu, etc. separately''&amp;lt;/small&amp;gt;&lt;br /&gt;
 vzctl mount $veid&lt;br /&gt;
 echo &amp;quot;JoinControllers=cpu,cpuacct,cpuset freezer,devices&amp;quot; &amp;gt;&amp;gt; /vz/root/$veid/etc/systemd/system.conf &lt;br /&gt;
* Start the container:&lt;br /&gt;
 vzctl start $veid&lt;br /&gt;
&lt;br /&gt;
== Prepare Docker in container == &lt;br /&gt;
&lt;br /&gt;
These steps are to be performed inside the container.&lt;br /&gt;
&lt;br /&gt;
* Install Docker:&lt;br /&gt;
 yum -y install docker-io&lt;br /&gt;
* Start the Docker daemon:&lt;br /&gt;
 docker -d -s vfs&lt;br /&gt;
&lt;br /&gt;
== Example usage ==&lt;br /&gt;
&lt;br /&gt;
=== Wordpress ===&lt;br /&gt;
Use Docker to start WordPress (the official, standard way).&lt;br /&gt;
&lt;br /&gt;
* Start the MySQL Docker container:&lt;br /&gt;
 docker run --name test-mysql -e MYSQL_ROOT_PASSWORD=123 -d mysql&lt;br /&gt;
* Start WordPress:&lt;br /&gt;
 docker run --name test-wordpress --link test-mysql:mysql -p 8080:80 -d wordpress&lt;br /&gt;
* Access the WordPress server via the container IP and port 8080: &amp;lt;pre&amp;gt;&amp;lt;nowiki&amp;gt;http://container_ip:8080&amp;lt;/nowiki&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Limitations ==&lt;br /&gt;
&lt;br /&gt;
* This feature is currently in beta&lt;br /&gt;
* Only &amp;quot;vfs&amp;quot; Docker graph driver is currently supported&lt;br /&gt;
* Online migration of a Container with Docker Containers inside is not supported&lt;br /&gt;
&lt;br /&gt;
[[Category:HOWTO]]&lt;/div&gt;</summary>
		<author><name>Finist</name></author>
		
	</entry>
	<entry>
		<id>https://wiki.openvz.org/index.php?title=IPsec&amp;diff=15697</id>
		<title>IPsec</title>
		<link rel="alternate" type="text/html" href="https://wiki.openvz.org/index.php?title=IPsec&amp;diff=15697"/>
		<updated>2014-12-25T13:26:27Z</updated>

		<summary type="html">&lt;p&gt;Finist: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;For IPsec to work inside a container:&lt;br /&gt;
* Kernel 042stab084.8 or later&lt;br /&gt;
* The following kernel modules must be loaded before container start:&lt;br /&gt;
: &amp;lt;code&amp;gt;af_key esp4 esp6 xfrm4_mode_tunnel xfrm6_mode_tunnel&amp;lt;/code&amp;gt;&lt;br /&gt;
* Capability &amp;lt;code&amp;gt;net_admin&amp;lt;/code&amp;gt; must be granted to a container&lt;br /&gt;
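One way to make sure these modules are in place before any container starts is a boot-time module-loading script on the node. A sketch for RHEL6-style systems, which run scripts from /etc/sysconfig/modules/ at boot (the file name ipsec.modules is illustrative):&lt;br /&gt;

```shell
# /etc/sysconfig/modules/ipsec.modules (hypothetical name; make it executable).
# Scripts in this directory are run at boot on RHEL6-style nodes,
# so the modules listed above get loaded before containers start.
for m in af_key esp4 esp6 xfrm4_mode_tunnel xfrm6_mode_tunnel; do
    modprobe "$m"
done
```

Alternatively, run the same loop manually as root before starting the container.&lt;br /&gt;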
&lt;br /&gt;
Tested with libreswan.&lt;br /&gt;
&lt;br /&gt;
Limitations:&lt;br /&gt;
* Online migration of a Container with IPsec inside does not work&lt;br /&gt;
&lt;br /&gt;
[[Category: HOWTO]]&lt;br /&gt;
[[Category: Networking]]&lt;/div&gt;</summary>
		<author><name>Finist</name></author>
		
	</entry>
	<entry>
		<id>https://wiki.openvz.org/index.php?title=Using_private_IPs_for_Hardware_Nodes&amp;diff=4125</id>
		<title>Using private IPs for Hardware Nodes</title>
		<link rel="alternate" type="text/html" href="https://wiki.openvz.org/index.php?title=Using_private_IPs_for_Hardware_Nodes&amp;diff=4125"/>
		<updated>2008-02-11T04:54:17Z</updated>

		<summary type="html">&lt;p&gt;Finist: Undo revision 4122 by SergeyIvanov (Talk)&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This article describes how to assign public IPs to VEs running on OVZ Hardware Nodes in case you have the following network topology:&lt;br /&gt;
&lt;br /&gt;
[[Image:PrivateIPs_fig1.gif|An initial network topology]]&lt;br /&gt;
&lt;br /&gt;
== Prerequisites ==&lt;br /&gt;
This configuration was tested on a RHEL5 OpenVZ Hardware Node and a VE based on a Fedora Core 5 template.&lt;br /&gt;
Other host OSes and templates might require some configuration changes; please add the corresponding OS-specific changes if you've faced any.&lt;br /&gt;
&lt;br /&gt;
This article assumes the presence of 'brctl', 'ip' and 'ifconfig' utils. You may need to install missing packages like 'bridge-utils'/'iproute'/'net-tools' or others which contain those utilities.&lt;br /&gt;
&lt;br /&gt;
This article assumes you have already [[Quick installation|installed OpenVZ]], prepared the [[OS template cache]](s) and have [[Basic_operations_in_OpenVZ_environment|VE(s) created]]. If not, follow the links to perform the steps needed.&lt;br /&gt;
{{Note|don't assign an IP after VE creation.}}&lt;br /&gt;
&lt;br /&gt;
== An OVZ Hardware Node has only one Ethernet interface ==&lt;br /&gt;
(assume eth0)&lt;br /&gt;
&lt;br /&gt;
=== Hardware Node configuration ===&lt;br /&gt;
&lt;br /&gt;
==== Create a bridge device ====&lt;br /&gt;
 [HN]# brctl addbr br0&lt;br /&gt;
&lt;br /&gt;
==== Remove an IP from eth0 interface ====&lt;br /&gt;
 [HN]# ifconfig eth0 0&lt;br /&gt;
&lt;br /&gt;
==== Add eth0 interface into the bridge ====&lt;br /&gt;
 [HN]# brctl addif br0 eth0&lt;br /&gt;
 &lt;br /&gt;
==== Assign the IP to the bridge ====&lt;br /&gt;
(the same that was assigned on eth0 earlier)&lt;br /&gt;
 [HN]# ifconfig br0 10.0.0.2/24&lt;br /&gt;
&lt;br /&gt;
==== Resurrect the default routing ====&lt;br /&gt;
 [HN]# ip route add default via 10.0.0.1 dev br0&lt;br /&gt;
 &lt;br /&gt;
{{Warning|if you are '''configuring''' the node '''remotely''' you '''must''' prepare a '''script''' with the above commands and run it in background with the redirected output or you'll '''lose the access''' to the Node.}}&lt;br /&gt;
&lt;br /&gt;
==== A script example ====&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[HN]# cat /tmp/br_add &lt;br /&gt;
#!/bin/bash&lt;br /&gt;
&lt;br /&gt;
brctl addbr br0&lt;br /&gt;
ifconfig eth0 0 &lt;br /&gt;
brctl addif br0 eth0 &lt;br /&gt;
ifconfig br0 10.0.0.2/24 &lt;br /&gt;
ip route add default via 10.0.0.1 dev br0&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
 [HN]# /tmp/br_add &amp;gt;/dev/null 2&amp;gt;&amp;amp;1 &amp;amp;&lt;br /&gt;
&lt;br /&gt;
=== VE configuration ===&lt;br /&gt;
&lt;br /&gt;
==== Start a VE ====&lt;br /&gt;
 [HN]# vzctl start 101&lt;br /&gt;
&lt;br /&gt;
==== Add a [[Virtual_Ethernet_device|veth interface]] to the VE ====&lt;br /&gt;
 [HN]# vzctl set 101 --netif_add eth0 --save&lt;br /&gt;
&lt;br /&gt;
==== Set up an IP to the newly created VE's veth interface ====&lt;br /&gt;
 [HN]# vzctl exec 101 ifconfig eth0 85.86.87.195/26&lt;br /&gt;
 &lt;br /&gt;
==== Add the VE's veth interface to the bridge ====&lt;br /&gt;
 [HN]# brctl addif br0 veth101.0&lt;br /&gt;
&lt;br /&gt;
{{Note|There will be a delay of about 15 seconds (the default for the 2.6.18 kernel) while the bridge software runs STP to detect loops and transitions the veth interface to the forwarding state.&lt;br /&gt;
&amp;lt;!-- /sys/class/net/$BR_NAME/bridge/forward_delay in SEC*USER_HZ --&amp;gt;}}&lt;br /&gt;
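&lt;br /&gt;
If this delay is undesirable, it can be shortened on the bridge. A minimal sketch, assuming the 'brctl' utility from bridge-utils; whether disabling STP is safe depends on your topology (only do it if no loops are possible):&lt;br /&gt;
```shell
# Reduce the bridge forward delay from the 15-second default to 2 seconds
brctl setfd br0 2
# Or, if your topology cannot contain loops, disable STP entirely
brctl stp br0 off
```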
&lt;br /&gt;
==== Set up the default route for the VE ====&lt;br /&gt;
 [HN]# vzctl exec 101 ip route add default via 85.86.87.193 dev eth0&lt;br /&gt;
 &lt;br /&gt;
==== (Optional) Add VE↔HN routes ====&lt;br /&gt;
The above configuration provides the following connections:&lt;br /&gt;
* VE X ↔ VE Y (where VE X and VE Y can be located on any OVZ HN)&lt;br /&gt;
* VE   ↔ Internet&lt;br /&gt;
&lt;br /&gt;
Note that&lt;br /&gt;
&lt;br /&gt;
* The accessibility of the VE from the HN depends on the local gateway providing NAT (probably yes)&lt;br /&gt;
&lt;br /&gt;
* The accessibility of the HN from the VE depends on the ISP gateway being aware of the local network (probably not)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
So, to provide VE ↔ HN accessibility regardless of the gateways' configuration, you can add the following routes:&lt;br /&gt;
&lt;br /&gt;
 [HN]# ip route add 85.86.87.195 dev br0&lt;br /&gt;
 [HN]# vzctl exec 101 ip route add 10.0.0.2 dev eth0&lt;br /&gt;
&lt;br /&gt;
=== Resulting OpenVZ Node configuration ===&lt;br /&gt;
[[Image:PrivateIPs_fig2.gif|Resulting OpenVZ Node configuration]]&lt;br /&gt;
&lt;br /&gt;
=== Making the configuration persistent ===&lt;br /&gt;
&lt;br /&gt;
==== Set up a bridge on a HN ====&lt;br /&gt;
This can be done by configuring the &amp;lt;code&amp;gt;ifcfg-*&amp;lt;/code&amp;gt; files located in &amp;lt;code&amp;gt;/etc/sysconfig/network-scripts/&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Assuming you had a configuration file (e.g. &amp;lt;code&amp;gt;ifcfg-eth0&amp;lt;/code&amp;gt;) like:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
DEVICE=eth0&lt;br /&gt;
ONBOOT=yes&lt;br /&gt;
IPADDR=10.0.0.2&lt;br /&gt;
NETMASK=255.255.255.0&lt;br /&gt;
GATEWAY=10.0.0.1&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To automatically create bridge &amp;lt;code&amp;gt;br0&amp;lt;/code&amp;gt;  you can create &amp;lt;code&amp;gt;ifcfg-br0&amp;lt;/code&amp;gt;:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
DEVICE=br0&lt;br /&gt;
TYPE=Bridge&lt;br /&gt;
ONBOOT=yes&lt;br /&gt;
IPADDR=10.0.0.2&lt;br /&gt;
NETMASK=255.255.255.0&lt;br /&gt;
GATEWAY=10.0.0.1&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and edit &amp;lt;code&amp;gt;ifcfg-eth0&amp;lt;/code&amp;gt; to add the &amp;lt;code&amp;gt;eth0&amp;lt;/code&amp;gt; interface into the bridge &amp;lt;code&amp;gt;br0&amp;lt;/code&amp;gt;:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
DEVICE=eth0&lt;br /&gt;
ONBOOT=yes&lt;br /&gt;
BRIDGE=br0&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Edit the VE's configuration ====&lt;br /&gt;
Add these parameters to the &amp;lt;code&amp;gt;/etc/vz/conf/$VEID.conf&amp;lt;/code&amp;gt; file which will be used during the network configuration:&lt;br /&gt;
* Add/change &amp;lt;code&amp;gt;CONFIG_CUSTOMIZED=&amp;quot;yes&amp;quot;&amp;lt;/code&amp;gt; (indicates that a custom script should be run on a VE start)&lt;br /&gt;
* Add &amp;lt;code&amp;gt;VETH_IP_ADDRESS=&amp;quot;VE IP/MASK&amp;quot;&amp;lt;/code&amp;gt; (a VE can have multiple IPs separated by spaces)&lt;br /&gt;
* Add &amp;lt;code&amp;gt;VE_DEFAULT_GATEWAY=&amp;quot;VE DEFAULT GATEWAY&amp;quot;&amp;lt;/code&amp;gt;&lt;br /&gt;
* Add &amp;lt;code&amp;gt;BRIDGEDEV=&amp;quot;BRIDGE NAME&amp;quot;&amp;lt;/code&amp;gt; (a bridge name to which the VE veth interface should be added)&lt;br /&gt;
&lt;br /&gt;
An example:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Network customization section&lt;br /&gt;
CONFIG_CUSTOMIZED=&amp;quot;yes&amp;quot;&lt;br /&gt;
VETH_IP_ADDRESS=&amp;quot;85.86.87.195/26&amp;quot;&lt;br /&gt;
VE_DEFAULT_GATEWAY=&amp;quot;85.86.87.193&amp;quot;&lt;br /&gt;
BRIDGEDEV=&amp;quot;br0&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
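&lt;br /&gt;
For reference, the custom network configuration script in the next section extracts interface names from the NETIF variable that vzctl stores in the VE configuration file. Here is a standalone sketch of that parsing (the sample NETIF value is hypothetical):&lt;br /&gt;
```shell
#!/bin/bash
# Hypothetical NETIF value as vzctl saves it in $VEID.conf
NETIF="ifname=eth0,mac=00:18:51:00:00:01,host_ifname=veth101.0,host_mac=00:18:51:00:00:02"

# Split the comma-separated option string and pick out the two names we need
for str in ${NETIF//,/ }; do
    case "$str" in
        ifname=*)      VEIFNAME=${str#*=} ;;
        host_ifname=*) VZHOSTIF=${str#*=} ;;
    esac
done

# prints: VE interface: eth0, host interface: veth101.0
echo "VE interface: $VEIFNAME, host interface: $VZHOSTIF"
```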
&lt;br /&gt;
==== Create a custom network configuration script ====&lt;br /&gt;
which should be called each time a VE is started (e.g. &amp;lt;code&amp;gt;/usr/sbin/vznetcfg.custom&amp;lt;/code&amp;gt;):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
# /usr/sbin/vznetcfg.custom&lt;br /&gt;
# a script to bring up bridged network interfaces (veth's) in a VE&lt;br /&gt;
&lt;br /&gt;
GLOBALCONFIGFILE=/etc/vz/vz.conf&lt;br /&gt;
VECONFIGFILE=/etc/vz/conf/$VEID.conf&lt;br /&gt;
vzctl=/usr/sbin/vzctl&lt;br /&gt;
brctl=/usr/sbin/brctl&lt;br /&gt;
ip=/sbin/ip&lt;br /&gt;
ifconfig=/sbin/ifconfig&lt;br /&gt;
. $GLOBALCONFIGFILE&lt;br /&gt;
. $VECONFIGFILE&lt;br /&gt;
&lt;br /&gt;
NETIF_OPTIONS=`echo $NETIF | sed 's/,/\n/g'`&lt;br /&gt;
for str in $NETIF_OPTIONS; do \&lt;br /&gt;
        # getting 'ifname' parameter value&lt;br /&gt;
        if [[ &amp;quot;$str&amp;quot; == ifname=* ]]; then&lt;br /&gt;
                # remove the parameter name from the string (along with '=')&lt;br /&gt;
                VEIFNAME=${str#*=};&lt;br /&gt;
        fi&lt;br /&gt;
        # getting 'host_ifname' parameter value&lt;br /&gt;
        if [[ &amp;quot;$str&amp;quot; == host_ifname=* ]]; then&lt;br /&gt;
                # remove the parameter name from the string (along with '=')&lt;br /&gt;
                VZHOSTIF=${str#*=};&lt;br /&gt;
        fi&lt;br /&gt;
done&lt;br /&gt;
&lt;br /&gt;
if [ ! -n &amp;quot;$VETH_IP_ADDRESS&amp;quot; ]; then&lt;br /&gt;
   echo &amp;quot;According to $VECONFIGFILE VE$VEID has no veth IPs configured.&amp;quot;&lt;br /&gt;
   exit 1&lt;br /&gt;
fi&lt;br /&gt;
&lt;br /&gt;
if [ ! -n &amp;quot;$VZHOSTIF&amp;quot; ]; then&lt;br /&gt;
   echo &amp;quot;According to $VECONFIGFILE VE$VEID has no veth interface configured.&amp;quot;&lt;br /&gt;
   exit 1&lt;br /&gt;
fi&lt;br /&gt;
&lt;br /&gt;
if [ ! -n &amp;quot;$VEIFNAME&amp;quot; ]; then&lt;br /&gt;
   echo &amp;quot;Corrupted $VECONFIGFILE: no 'ifname' defined for host_ifname $VZHOSTIF.&amp;quot;&lt;br /&gt;
   exit 1&lt;br /&gt;
fi&lt;br /&gt;
&lt;br /&gt;
echo &amp;quot;Initializing interface $VZHOSTIF for VE$VEID.&amp;quot;&lt;br /&gt;
$ifconfig $VZHOSTIF 0&lt;br /&gt;
&lt;br /&gt;
VEROUTEDEV=$VZHOSTIF&lt;br /&gt;
&lt;br /&gt;
if [ -n &amp;quot;$BRIDGEDEV&amp;quot; ]; then&lt;br /&gt;
   echo &amp;quot;Adding interface $VZHOSTIF to the bridge $BRIDGEDEV.&amp;quot;&lt;br /&gt;
   VEROUTEDEV=$BRIDGEDEV&lt;br /&gt;
   $brctl addif $BRIDGEDEV $VZHOSTIF&lt;br /&gt;
fi&lt;br /&gt;
&lt;br /&gt;
# Up the interface $VEIFNAME link in VE$VEID&lt;br /&gt;
$vzctl exec $VEID $ip link set $VEIFNAME up&lt;br /&gt;
&lt;br /&gt;
for IP in $VETH_IP_ADDRESS; do&lt;br /&gt;
   echo &amp;quot;Adding an IP $IP to the $VEIFNAME for VE$VEID.&amp;quot;&lt;br /&gt;
   $vzctl exec $VEID $ip address add $IP dev $VEIFNAME&lt;br /&gt;
&lt;br /&gt;
   # removing the netmask&lt;br /&gt;
   IP_STRIP=${IP%%/*};&lt;br /&gt;
&lt;br /&gt;
   echo &amp;quot;Adding a route from VE0 to VE$VEID.&amp;quot;&lt;br /&gt;
   $ip route add $IP_STRIP dev $VEROUTEDEV&lt;br /&gt;
done&lt;br /&gt;
&lt;br /&gt;
if [ -n &amp;quot;$VE0_IP&amp;quot; ]; then&lt;br /&gt;
   echo &amp;quot;Adding a route from VE$VEID to VE0.&amp;quot;&lt;br /&gt;
   $vzctl exec $VEID $ip route add $VE0_IP dev $VEIFNAME&lt;br /&gt;
fi&lt;br /&gt;
&lt;br /&gt;
if [ -n &amp;quot;$VE_DEFAULT_GATEWAY&amp;quot; ]; then&lt;br /&gt;
   echo &amp;quot;Setting $VE_DEFAULT_GATEWAY as a default gateway for VE$VEID.&amp;quot;&lt;br /&gt;
   $vzctl exec $VEID \&lt;br /&gt;
        $ip route add default via $VE_DEFAULT_GATEWAY dev $VEIFNAME&lt;br /&gt;
fi&lt;br /&gt;
&lt;br /&gt;
exit 0&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Make the script run on VE start ====&lt;br /&gt;
In order to run the above script on VE start, create the file &amp;lt;code&amp;gt;/etc/vz/vznet.conf&amp;lt;/code&amp;gt; with the following contents:&lt;br /&gt;
&lt;br /&gt;
 EXTERNAL_SCRIPT=&amp;quot;/usr/sbin/vznetcfg.custom&amp;quot;&lt;br /&gt;
&lt;br /&gt;
{{Note|&amp;lt;code&amp;gt;/usr/sbin/vznetcfg.custom&amp;lt;/code&amp;gt; should be executable (chmod +x /usr/sbin/vznetcfg.custom).}}&lt;br /&gt;
&lt;br /&gt;
==== Setting the route VE → HN ====&lt;br /&gt;
To set up a route from the VE to the HN, the custom script has to get a HN IP (the $VE0_IP variable in the script). There are several ways to specify it:&lt;br /&gt;
&lt;br /&gt;
# Add an entry VE0_IP=&amp;quot;VE0 IP&amp;quot; to the &amp;lt;code&amp;gt;$VEID.conf&amp;lt;/code&amp;gt;&lt;br /&gt;
# Add an entry VE0_IP=&amp;quot;VE0 IP&amp;quot; to the &amp;lt;code&amp;gt;/etc/vz/vz.conf&amp;lt;/code&amp;gt; (the global configuration file)&lt;br /&gt;
# Implement some smart algorithm to determine the VE0 IP right in the custom network configuration script&lt;br /&gt;
&lt;br /&gt;
Each variant has its pros and cons; nevertheless, for a static HN IP configuration, variant 2 seems to be acceptable (and the simplest).&lt;br /&gt;
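&lt;br /&gt;
A minimal sketch of variant 3: deriving the VE0 IP from the bridge device at run time. The sample text below is a hypothetical, abridged output; on a real Node you would feed the function the output of 'ip -4 addr show br0' instead:&lt;br /&gt;
```shell
#!/bin/bash
# Extract the first IPv4 address from 'ip -4 addr show' style output
parse_first_inet() {
    sed -n 's|.*inet \([0-9.]*\)/.*|\1|p' | head -n 1
}

# Hypothetical (abridged) output of 'ip -4 addr show br0' for this article's topology;
# on a real Node use: VE0_IP=$(ip -4 addr show br0 | parse_first_inet)
sample='4: br0: mtu 1500 qdisc noqueue
    inet 10.0.0.2/24 brd 10.0.0.255 scope global br0'

VE0_IP=$(printf '%s\n' "$sample" | parse_first_inet)
# prints: 10.0.0.2
echo "$VE0_IP"
```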
&lt;br /&gt;
== An OpenVZ Hardware Node has two Ethernet interfaces ==&lt;br /&gt;
Assume you have two interfaces, eth0 and eth1, and want to separate local traffic (10.0.0.0/24) from external traffic.&lt;br /&gt;
Let's assign eth0 to the external traffic and eth1 to the local one.&lt;br /&gt;
&lt;br /&gt;
If there is no need to make the VE accessible from the HN and vice versa, it's enough to replace 'br0' with 'eth1' in the following steps of the above configuration:&lt;br /&gt;
* Hardware Node configuration → [[Using_private_IPs_for_Hardware_Nodes#Assign_the_IP_to_the_bridge|Assign the IP to the bridge]]&lt;br /&gt;
* Hardware Node configuration → [[Using_private_IPs_for_Hardware_Nodes#Resurrect_the_default_routing|Resurrect the default routing]]&lt;br /&gt;
&lt;br /&gt;
Otherwise, it is necessary to set a local IP for 'br0' to ensure VE ↔ HN connection availability.&lt;br /&gt;
&lt;br /&gt;
== Putting VEs to different subnetworks ==&lt;br /&gt;
It's enough to set up the correct $VETH_IP_ADDRESS and $VE_DEFAULT_GATEWAY values in the &lt;br /&gt;
[[Using_private_IPs_for_Hardware_Nodes#Edit_the_VE.27s_configuration|above configuration]].&lt;br /&gt;
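&lt;br /&gt;
For example, a second Container placed in another subnet could use values like these in its &amp;lt;code&amp;gt;$VEID.conf&amp;lt;/code&amp;gt; (the addresses are illustrative only):&lt;br /&gt;
```shell
# Network customization section (illustrative addresses for a second subnet)
CONFIG_CUSTOMIZED="yes"
VETH_IP_ADDRESS="192.0.2.10/24"
VE_DEFAULT_GATEWAY="192.0.2.1"
BRIDGEDEV="br0"
```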
&lt;br /&gt;
== See also ==&lt;br /&gt;
* [[Virtual network device]]&lt;br /&gt;
* [[Differences between venet and veth]]&lt;br /&gt;
&lt;br /&gt;
[[Category: HOWTO]]&lt;br /&gt;
[[Category: Networking]]&lt;/div&gt;</summary>
		<author><name>Finist</name></author>
		
	</entry>
	<entry>
		<id>https://wiki.openvz.org/index.php?title=User:SergeyIvanov&amp;diff=4124</id>
		<title>User:SergeyIvanov</title>
		<link rel="alternate" type="text/html" href="https://wiki.openvz.org/index.php?title=User:SergeyIvanov&amp;diff=4124"/>
		<updated>2008-02-11T04:52:36Z</updated>

		<summary type="html">&lt;p&gt;Finist: New page: ==Using private IPs for Hardware Nodes changes?== Sergey, could you please explain the suggested changes for this page? Why adding another bridge b...&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==[[Using_private_IPs_for_Hardware_Nodes|Using private IPs for Hardware Nodes]] changes?==&lt;br /&gt;
Sergey, could you please explain the suggested changes for this page? Why is adding another bridge br1 necessary? And even if it is, it should be created somewhere, but there is no command for creating br1 in the change. Please correct me if I'm wrong here, but so far I have rolled back your addition to [[Using_private_IPs_for_Hardware_Nodes|Using private IPs for Hardware Nodes]].&amp;lt;br&amp;gt;--[[User:Finist|Finist]] 07:54, 11 February 2008 (MSK)&lt;/div&gt;</summary>
		<author><name>Finist</name></author>
		
	</entry>
	<entry>
		<id>https://wiki.openvz.org/index.php?title=User:Finist&amp;diff=4123</id>
		<title>User:Finist</title>
		<link rel="alternate" type="text/html" href="https://wiki.openvz.org/index.php?title=User:Finist&amp;diff=4123"/>
		<updated>2008-02-11T04:34:15Z</updated>

		<summary type="html">&lt;p&gt;Finist: New page: Konstantin Khorenko &amp;lt;br&amp;gt; khorenko &amp;quot;at&amp;quot; openvz &amp;quot;.&amp;quot; org&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Konstantin Khorenko&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
khorenko &amp;quot;at&amp;quot; openvz &amp;quot;.&amp;quot; org&lt;/div&gt;</summary>
		<author><name>Finist</name></author>
		
	</entry>
	<entry>
		<id>https://wiki.openvz.org/index.php?title=Asterisk_from_source&amp;diff=3946</id>
		<title>Asterisk from source</title>
		<link rel="alternate" type="text/html" href="https://wiki.openvz.org/index.php?title=Asterisk_from_source&amp;diff=3946"/>
		<updated>2008-01-14T12:18:33Z</updated>

		<summary type="html">&lt;p&gt;Finist: Asterisk tuning moved to Asterisk from source: obvious&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==General==&lt;br /&gt;
Asterisk is free and open source software for building a PBX server; see [http://www.asterisk.org] for details. This package runs perfectly inside an OpenVZ container: some users run up to 60 containers with Asterisk deployed in production on a single hardware node. Although the easiest way to install Asterisk into a container is to use a pre-built package from a Linux distribution, occasionally one may need to build it from the source tarball available on the developer's site.&lt;br /&gt;
&lt;br /&gt;
Before doing so, the following remarks are worth reading:&lt;br /&gt;
&lt;br /&gt;
==Building Asterisk in CT==&lt;br /&gt;
The Asterisk PBX server itself compiles out of the box in a CT, provided that the development application template is installed in this CT. The functionality of the resulting executable is enough to support simple VoIP telephony.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
wget http://downloads.digium.com/pub/asterisk/releases/asterisk-x.x.xx.tar.gz&lt;br /&gt;
tar xzf asterisk-x.x.xx.tar.gz&lt;br /&gt;
cd asterisk-x.x.xx&lt;br /&gt;
./configure&lt;br /&gt;
make&lt;br /&gt;
make install&lt;br /&gt;
make samples&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The last command installs some sample configuration files, so it is not needed in case you have your own configuration.&lt;br /&gt;
To configure Asterisk itself see for example [http://www.digium.com/elqNow/elqRedir.htm?ref=http://downloads.oreilly.com/books/9780596510480.pdf].&lt;br /&gt;
&lt;br /&gt;
==MeetMe problem==&lt;br /&gt;
Unfortunately, one particular module called MeetMe (a conferencing tool) will be excluded from compilation. This happens due to an external dependency on the 'zaptel' package. Zaptel provides support for some FXO/FXS analog telephony cards marketed by Digium (the company behind Asterisk) and, on top of that, supplies the so-called ztdummy kernel module. Ztdummy works like a simple metronome, which is required to synchronize multiple sound streams in case of a conference call.&lt;br /&gt;
&lt;br /&gt;
If you do not plan to use analog telephone lines, and hence don't need to install the hardware, nothing is lost provided you run your HN with a 2.6.XX kernel. You just need to play a little trick on the Asterisk make system: download the zaptel tarball from the same location as Asterisk itself, and copy its header zaptel.h to /usr/include/zaptel/zaptel.h in the CT where you plan to build Asterisk. This enables MeetMe for installation.&lt;br /&gt;
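&lt;br /&gt;
The trick can be sketched as follows (the version number is a placeholder, matching the x.x.xx convention above; the zaptel tarball is assumed to be already downloaded into the current directory of the CT):&lt;br /&gt;
```shell
# Unpack the zaptel tarball downloaded from the same location as Asterisk
tar xzf zaptel-x.x.xx.tar.gz
# Copy its header to where the Asterisk build system looks for it
mkdir -p /usr/include/zaptel
cp zaptel-x.x.xx/zaptel.h /usr/include/zaptel/zaptel.h
```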
&lt;br /&gt;
==HN configuration==&lt;br /&gt;
Finally, you need to make sure that the ztdummy kernel module is loaded on the HN and that access to the /dev/zap/pseudo device file is granted to the CT:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
modprobe ztdummy&lt;br /&gt;
vzctl set 240 --devnodes zap/pseudo:rw --save&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Category:HOWTO]]&lt;/div&gt;</summary>
		<author><name>Finist</name></author>
		
	</entry>
	<entry>
		<id>https://wiki.openvz.org/index.php?title=Using_private_IPs_for_Hardware_Nodes&amp;diff=3688</id>
		<title>Using private IPs for Hardware Nodes</title>
		<link rel="alternate" type="text/html" href="https://wiki.openvz.org/index.php?title=Using_private_IPs_for_Hardware_Nodes&amp;diff=3688"/>
		<updated>2007-11-30T17:03:49Z</updated>

		<summary type="html">&lt;p&gt;Finist: venet -&amp;gt; veth misprint&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This article describes how to assign public IPs to VEs running on OVZ Hardware Nodes in case you have the following network topology:&lt;br /&gt;
&lt;br /&gt;
[[Image:PrivateIPs_fig1.gif|An initial network topology]]&lt;br /&gt;
&lt;br /&gt;
== Prerequisites ==&lt;br /&gt;
This configuration was tested on a RHEL5 OpenVZ Hardware Node and a VE based on a Fedora Core 5 template.&lt;br /&gt;
Other host OSes and templates might require some configuration changes; please add the corresponding OS-specific changes if you have faced any.&lt;br /&gt;
&lt;br /&gt;
This article assumes the presence of 'brctl', 'ip' and 'ifconfig' utils. You may need to install missing packages like 'bridge-utils'/'iproute'/'net-tools' or others which contain those utilities.&lt;br /&gt;
&lt;br /&gt;
This article assumes you have already [[Quick installation|installed OpenVZ]], prepared the [[OS template cache]](s) and have [[Basic_operations_in_OpenVZ_environment|VE(s) created]]. If not, follow the links to perform the steps needed.&lt;br /&gt;
{{Note|don't assign an IP after VE creation.}}&lt;br /&gt;
&lt;br /&gt;
== An OVZ Hardware Node has only one Ethernet interface ==&lt;br /&gt;
(assume eth0)&lt;br /&gt;
&lt;br /&gt;
=== Hardware Node configuration ===&lt;br /&gt;
&lt;br /&gt;
==== Create a bridge device ====&lt;br /&gt;
 [HN]# brctl addbr br0&lt;br /&gt;
&lt;br /&gt;
==== Remove an IP from eth0 interface ====&lt;br /&gt;
 [HN]# ifconfig eth0 0&lt;br /&gt;
&lt;br /&gt;
==== Add eth0 interface into the bridge ====&lt;br /&gt;
 [HN]# brctl addif br0 eth0&lt;br /&gt;
 &lt;br /&gt;
==== Assign the IP to the bridge ====&lt;br /&gt;
(the same that was assigned on eth0 earlier)&lt;br /&gt;
 [HN]# ifconfig br0 10.0.0.2/24&lt;br /&gt;
&lt;br /&gt;
==== Resurrect the default routing ====&lt;br /&gt;
 [HN]# ip route add default via 10.0.0.1 dev br0&lt;br /&gt;
 &lt;br /&gt;
{{Warning|if you are '''configuring''' the node '''remotely''' you '''must''' prepare a '''script''' with the above commands and run it in background with the redirected output or you'll '''lose the access''' to the Node.}}&lt;br /&gt;
&lt;br /&gt;
==== A script example ====&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[HN]# cat /tmp/br_add &lt;br /&gt;
#!/bin/bash&lt;br /&gt;
&lt;br /&gt;
brctl addbr br0&lt;br /&gt;
ifconfig eth0 0 &lt;br /&gt;
brctl addif br0 eth0 &lt;br /&gt;
ifconfig br0 10.0.0.2/24 &lt;br /&gt;
ip route add default via 10.0.0.1 dev br0&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
 [HN]# /tmp/br_add &amp;gt;/dev/null 2&amp;gt;&amp;amp;1 &amp;amp;&lt;br /&gt;
&lt;br /&gt;
=== VE configuration ===&lt;br /&gt;
&lt;br /&gt;
==== Start a VE ====&lt;br /&gt;
 [HN]# vzctl start 101&lt;br /&gt;
&lt;br /&gt;
==== Add a [[Virtual_Ethernet_device|veth interface]] to the VE ====&lt;br /&gt;
 [HN]# vzctl set 101 --netif_add eth0 --save&lt;br /&gt;
&lt;br /&gt;
==== Set up an IP to the newly created VE's veth interface ====&lt;br /&gt;
 [HN]# vzctl exec 101 ifconfig eth0 85.86.87.195/26&lt;br /&gt;
 &lt;br /&gt;
==== Add the VE's veth interface to the bridge ====&lt;br /&gt;
 [HN]# brctl addif br0 veth101.0&lt;br /&gt;
&lt;br /&gt;
{{Note|There will be a delay of about 15 seconds (the default for the 2.6.18 kernel) while the bridge software runs STP to detect loops and transitions the veth interface to the forwarding state.&lt;br /&gt;
&amp;lt;!-- /sys/class/net/$BR_NAME/bridge/forward_delay in SEC*USER_HZ --&amp;gt;}}&lt;br /&gt;
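&lt;br /&gt;
If this delay is undesirable, it can be shortened on the bridge. A minimal sketch, assuming the 'brctl' utility from bridge-utils; whether disabling STP is safe depends on your topology (only do it if no loops are possible):&lt;br /&gt;
```shell
# Reduce the bridge forward delay from the 15-second default to 2 seconds
brctl setfd br0 2
# Or, if your topology cannot contain loops, disable STP entirely
brctl stp br0 off
```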
&lt;br /&gt;
==== Set up the default route for the VE ====&lt;br /&gt;
 [HN]# vzctl exec 101 ip route add default via 85.86.87.193 dev eth0&lt;br /&gt;
 &lt;br /&gt;
==== (Optional) Add VE↔HN routes ====&lt;br /&gt;
The above configuration provides the following connections:&lt;br /&gt;
* VE X ↔ VE Y (where VE X and VE Y can be located on any OVZ HN)&lt;br /&gt;
* VE   ↔ Internet&lt;br /&gt;
&lt;br /&gt;
Note that&lt;br /&gt;
&lt;br /&gt;
* The accessibility of the VE from the HN depends on the local gateway providing NAT (probably yes)&lt;br /&gt;
&lt;br /&gt;
* The accessibility of the HN from the VE depends on the ISP gateway being aware of the local network (probably not)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
So, to provide VE ↔ HN accessibility regardless of the gateways' configuration, you can add the following routes:&lt;br /&gt;
&lt;br /&gt;
 [HN]# ip route add 85.86.87.195 dev br0&lt;br /&gt;
 [HN]# vzctl exec 101 ip route add 10.0.0.2 dev eth0&lt;br /&gt;
&lt;br /&gt;
=== Resulting OpenVZ Node configuration ===&lt;br /&gt;
[[Image:PrivateIPs_fig2.gif|Resulting OpenVZ Node configuration]]&lt;br /&gt;
&lt;br /&gt;
=== Making the configuration persistent ===&lt;br /&gt;
&lt;br /&gt;
==== Set up a bridge on a HN ====&lt;br /&gt;
This can be done by configuring the &amp;lt;code&amp;gt;ifcfg-*&amp;lt;/code&amp;gt; files located in &amp;lt;code&amp;gt;/etc/sysconfig/network-scripts/&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Assuming you had a configuration file (e.g. &amp;lt;code&amp;gt;ifcfg-eth0&amp;lt;/code&amp;gt;) like:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
DEVICE=eth0&lt;br /&gt;
ONBOOT=yes&lt;br /&gt;
IPADDR=10.0.0.2&lt;br /&gt;
NETMASK=255.255.255.0&lt;br /&gt;
GATEWAY=10.0.0.1&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To automatically create bridge &amp;lt;code&amp;gt;br0&amp;lt;/code&amp;gt;  you can create &amp;lt;code&amp;gt;ifcfg-br0&amp;lt;/code&amp;gt;:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
DEVICE=br0&lt;br /&gt;
TYPE=Bridge&lt;br /&gt;
ONBOOT=yes&lt;br /&gt;
IPADDR=10.0.0.2&lt;br /&gt;
NETMASK=255.255.255.0&lt;br /&gt;
GATEWAY=10.0.0.1&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and edit &amp;lt;code&amp;gt;ifcfg-eth0&amp;lt;/code&amp;gt; to add the &amp;lt;code&amp;gt;eth0&amp;lt;/code&amp;gt; interface into the bridge &amp;lt;code&amp;gt;br0&amp;lt;/code&amp;gt;:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
DEVICE=eth0&lt;br /&gt;
ONBOOT=yes&lt;br /&gt;
BRIDGE=br0&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Edit the VE's configuration ====&lt;br /&gt;
Add these parameters to the &amp;lt;code&amp;gt;/etc/vz/conf/$VEID.conf&amp;lt;/code&amp;gt; file which will be used during the network configuration:&lt;br /&gt;
* Add/change &amp;lt;code&amp;gt;CONFIG_CUSTOMIZED=&amp;quot;yes&amp;quot;&amp;lt;/code&amp;gt; (indicates that a custom script should be run on a VE start)&lt;br /&gt;
* Add &amp;lt;code&amp;gt;VETH_IP_ADDRESS=&amp;quot;VE IP/MASK&amp;quot;&amp;lt;/code&amp;gt; (a VE can have multiple IPs separated by spaces)&lt;br /&gt;
* Add &amp;lt;code&amp;gt;VE_DEFAULT_GATEWAY=&amp;quot;VE DEFAULT GATEWAY&amp;quot;&amp;lt;/code&amp;gt;&lt;br /&gt;
* Add &amp;lt;code&amp;gt;BRIDGEDEV=&amp;quot;BRIDGE NAME&amp;quot;&amp;lt;/code&amp;gt; (a bridge name to which the VE veth interface should be added)&lt;br /&gt;
&lt;br /&gt;
An example:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Network customization section&lt;br /&gt;
CONFIG_CUSTOMIZED=&amp;quot;yes&amp;quot;&lt;br /&gt;
VETH_IP_ADDRESS=&amp;quot;85.86.87.195/26&amp;quot;&lt;br /&gt;
VE_DEFAULT_GATEWAY=&amp;quot;85.86.87.193&amp;quot;&lt;br /&gt;
BRIDGEDEV=&amp;quot;br0&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
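&lt;br /&gt;
For reference, the custom network configuration script in the next section extracts interface names from the NETIF variable that vzctl stores in the VE configuration file. Here is a standalone sketch of that parsing (the sample NETIF value is hypothetical):&lt;br /&gt;
```shell
#!/bin/bash
# Hypothetical NETIF value as vzctl saves it in $VEID.conf
NETIF="ifname=eth0,mac=00:18:51:00:00:01,host_ifname=veth101.0,host_mac=00:18:51:00:00:02"

# Split the comma-separated option string and pick out the two names we need
for str in ${NETIF//,/ }; do
    case "$str" in
        ifname=*)      VEIFNAME=${str#*=} ;;
        host_ifname=*) VZHOSTIF=${str#*=} ;;
    esac
done

# prints: VE interface: eth0, host interface: veth101.0
echo "VE interface: $VEIFNAME, host interface: $VZHOSTIF"
```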
&lt;br /&gt;
==== Create a custom network configuration script ====&lt;br /&gt;
which should be called each time a VE is started (e.g. &amp;lt;code&amp;gt;/usr/sbin/vznetcfg.custom&amp;lt;/code&amp;gt;):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
# /usr/sbin/vznetcfg.custom&lt;br /&gt;
# a script to bring up bridged network interfaces (veth's) in a VE&lt;br /&gt;
&lt;br /&gt;
GLOBALCONFIGFILE=/etc/vz/vz.conf&lt;br /&gt;
VECONFIGFILE=/etc/vz/conf/$VEID.conf&lt;br /&gt;
vzctl=/usr/sbin/vzctl&lt;br /&gt;
brctl=/usr/sbin/brctl&lt;br /&gt;
ip=/sbin/ip&lt;br /&gt;
ifconfig=/sbin/ifconfig&lt;br /&gt;
. $GLOBALCONFIGFILE&lt;br /&gt;
. $VECONFIGFILE&lt;br /&gt;
&lt;br /&gt;
NETIF_OPTIONS=`echo $NETIF | sed 's/,/\n/g'`&lt;br /&gt;
for str in $NETIF_OPTIONS; do \&lt;br /&gt;
        # getting 'ifname' parameter value&lt;br /&gt;
        if [[ &amp;quot;$str&amp;quot; == ifname=* ]]; then&lt;br /&gt;
                # remove the parameter name from the string (along with '=')&lt;br /&gt;
                VEIFNAME=${str#*=};&lt;br /&gt;
        fi&lt;br /&gt;
        # getting 'host_ifname' parameter value&lt;br /&gt;
        if [[ &amp;quot;$str&amp;quot; == host_ifname=* ]]; then&lt;br /&gt;
                # remove the parameter name from the string (along with '=')&lt;br /&gt;
                VZHOSTIF=${str#*=};&lt;br /&gt;
        fi&lt;br /&gt;
done&lt;br /&gt;
&lt;br /&gt;
if [ ! -n &amp;quot;$VETH_IP_ADDRESS&amp;quot; ]; then&lt;br /&gt;
   echo &amp;quot;According to $VECONFIGFILE VE$VEID has no veth IPs configured.&amp;quot;&lt;br /&gt;
   exit 1&lt;br /&gt;
fi&lt;br /&gt;
&lt;br /&gt;
if [ ! -n &amp;quot;$VZHOSTIF&amp;quot; ]; then&lt;br /&gt;
   echo &amp;quot;According to $VECONFIGFILE VE$VEID has no veth interface configured.&amp;quot;&lt;br /&gt;
   exit 1&lt;br /&gt;
fi&lt;br /&gt;
&lt;br /&gt;
if [ ! -n &amp;quot;$VEIFNAME&amp;quot; ]; then&lt;br /&gt;
   echo &amp;quot;Corrupted $VECONFIGFILE: no 'ifname' defined for host_ifname $VZHOSTIF.&amp;quot;&lt;br /&gt;
   exit 1&lt;br /&gt;
fi&lt;br /&gt;
&lt;br /&gt;
echo &amp;quot;Initializing interface $VZHOSTIF for VE$VEID.&amp;quot;&lt;br /&gt;
$ifconfig $VZHOSTIF 0&lt;br /&gt;
&lt;br /&gt;
VEROUTEDEV=$VZHOSTIF&lt;br /&gt;
&lt;br /&gt;
if [ -n &amp;quot;$BRIDGEDEV&amp;quot; ]; then&lt;br /&gt;
   echo &amp;quot;Adding interface $VZHOSTIF to the bridge $BRIDGEDEV.&amp;quot;&lt;br /&gt;
   VEROUTEDEV=$BRIDGEDEV&lt;br /&gt;
   $brctl addif $BRIDGEDEV $VZHOSTIF&lt;br /&gt;
fi&lt;br /&gt;
&lt;br /&gt;
# Up the interface $VEIFNAME link in VE$VEID&lt;br /&gt;
$vzctl exec $VEID $ip link set $VEIFNAME up&lt;br /&gt;
&lt;br /&gt;
for IP in $VETH_IP_ADDRESS; do&lt;br /&gt;
   echo &amp;quot;Adding an IP $IP to the $VEIFNAME for VE$VEID.&amp;quot;&lt;br /&gt;
   $vzctl exec $VEID $ip address add $IP dev $VEIFNAME&lt;br /&gt;
&lt;br /&gt;
   # removing the netmask&lt;br /&gt;
   IP_STRIP=${IP%%/*};&lt;br /&gt;
&lt;br /&gt;
   echo &amp;quot;Adding a route from VE0 to VE$VEID.&amp;quot;&lt;br /&gt;
   $ip route add $IP_STRIP dev $VEROUTEDEV&lt;br /&gt;
done&lt;br /&gt;
&lt;br /&gt;
if [ -n &amp;quot;$VE0_IP&amp;quot; ]; then&lt;br /&gt;
   echo &amp;quot;Adding a route from VE$VEID to VE0.&amp;quot;&lt;br /&gt;
   $vzctl exec $VEID $ip route add $VE0_IP dev $VEIFNAME&lt;br /&gt;
fi&lt;br /&gt;
&lt;br /&gt;
if [ -n &amp;quot;$VE_DEFAULT_GATEWAY&amp;quot; ]; then&lt;br /&gt;
   echo &amp;quot;Setting $VE_DEFAULT_GATEWAY as a default gateway for VE$VEID.&amp;quot;&lt;br /&gt;
   $vzctl exec $VEID \&lt;br /&gt;
        $ip route add default via $VE_DEFAULT_GATEWAY dev $VEIFNAME&lt;br /&gt;
fi&lt;br /&gt;
&lt;br /&gt;
exit 0&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Make the script run on VE start ====&lt;br /&gt;
In order to run the above script on VE start, create the file &amp;lt;code&amp;gt;/etc/vz/vznet.conf&amp;lt;/code&amp;gt; with the following contents:&lt;br /&gt;
&lt;br /&gt;
 EXTERNAL_SCRIPT=&amp;quot;/usr/sbin/vznetcfg.custom&amp;quot;&lt;br /&gt;
&lt;br /&gt;
{{Note|&amp;lt;code&amp;gt;/usr/sbin/vznetcfg.custom&amp;lt;/code&amp;gt; should be executable (chmod +x /usr/sbin/vznetcfg.custom).}}&lt;br /&gt;
&lt;br /&gt;
==== Setting the route VE → HN ====&lt;br /&gt;
To set up a route from the VE to the HN, the custom script has to get a HN IP (the $VE0_IP variable in the script). There are several ways to specify it:&lt;br /&gt;
&lt;br /&gt;
# Add an entry VE0_IP=&amp;quot;VE0 IP&amp;quot; to the &amp;lt;code&amp;gt;$VEID.conf&amp;lt;/code&amp;gt;&lt;br /&gt;
# Add an entry VE0_IP=&amp;quot;VE0 IP&amp;quot; to the &amp;lt;code&amp;gt;/etc/vz/vz.conf&amp;lt;/code&amp;gt; (the global configuration file)&lt;br /&gt;
# Implement some smart algorithm to determine the VE0 IP right in the custom network configuration script&lt;br /&gt;
&lt;br /&gt;
Each variant has its pros and cons; nevertheless, for a static HN IP configuration, variant 2 seems to be acceptable (and the simplest).&lt;br /&gt;
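&lt;br /&gt;
A minimal sketch of variant 3: deriving the VE0 IP from the bridge device at run time. The sample text below is a hypothetical, abridged output; on a real Node you would feed the function the output of 'ip -4 addr show br0' instead:&lt;br /&gt;
```shell
#!/bin/bash
# Extract the first IPv4 address from 'ip -4 addr show' style output
parse_first_inet() {
    sed -n 's|.*inet \([0-9.]*\)/.*|\1|p' | head -n 1
}

# Hypothetical (abridged) output of 'ip -4 addr show br0' for this article's topology;
# on a real Node use: VE0_IP=$(ip -4 addr show br0 | parse_first_inet)
sample='4: br0: mtu 1500 qdisc noqueue
    inet 10.0.0.2/24 brd 10.0.0.255 scope global br0'

VE0_IP=$(printf '%s\n' "$sample" | parse_first_inet)
# prints: 10.0.0.2
echo "$VE0_IP"
```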
&lt;br /&gt;
== An OpenVZ Hardware Node has two Ethernet interfaces ==&lt;br /&gt;
Assume you have two interfaces, eth0 and eth1, and want to separate local traffic (10.0.0.0/24) from external traffic.&lt;br /&gt;
Let's assign eth0 to the external traffic and eth1 to the local one.&lt;br /&gt;
&lt;br /&gt;
If there is no need to make the VE accessible from the HN and vice versa, it's enough to replace 'br0' with 'eth1' in the following steps of the above configuration:&lt;br /&gt;
* Hardware Node configuration → [[Using_private_IPs_for_Hardware_Nodes#Assign_the_IP_to_the_bridge|Assign the IP to the bridge]]&lt;br /&gt;
* Hardware Node configuration → [[Using_private_IPs_for_Hardware_Nodes#Resurrect_the_default_routing|Resurrect the default routing]]&lt;br /&gt;
&lt;br /&gt;
Otherwise, it is necessary to set a local IP for 'br0' to ensure VE ↔ HN connection availability.&lt;br /&gt;
&lt;br /&gt;
== Putting VEs to different subnetworks ==&lt;br /&gt;
It's enough to set up the correct $VETH_IP_ADDRESS and $VE_DEFAULT_GATEWAY values in the &lt;br /&gt;
[[Using_private_IPs_for_Hardware_Nodes#Edit_the_VE.27s_configuration|above configuration]].&lt;br /&gt;
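&lt;br /&gt;
For example, a second Container placed in another subnet could use values like these in its &amp;lt;code&amp;gt;$VEID.conf&amp;lt;/code&amp;gt; (the addresses are illustrative only):&lt;br /&gt;
```shell
# Network customization section (illustrative addresses for a second subnet)
CONFIG_CUSTOMIZED="yes"
VETH_IP_ADDRESS="192.0.2.10/24"
VE_DEFAULT_GATEWAY="192.0.2.1"
BRIDGEDEV="br0"
```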
&lt;br /&gt;
== See also ==&lt;br /&gt;
* [[Virtual network device]]&lt;br /&gt;
* [[Differences between venet and veth]]&lt;br /&gt;
&lt;br /&gt;
[[Category: HOWTO]]&lt;br /&gt;
[[Category: Networking]]&lt;/div&gt;</summary>
		<author><name>Finist</name></author>
		
	</entry>
	<entry>
		<id>https://wiki.openvz.org/index.php?title=Quick_installation_(legacy)&amp;diff=3652</id>
		<title>Quick installation (legacy)</title>
		<link rel="alternate" type="text/html" href="https://wiki.openvz.org/index.php?title=Quick_installation_(legacy)&amp;diff=3652"/>
		<updated>2007-11-21T12:33:10Z</updated>

		<summary type="html">&lt;p&gt;Finist: Make link on the first usage of &amp;quot;HN&amp;quot;.&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This document briefly describes the steps needed to install OpenVZ on your machine.&lt;br /&gt;
&lt;br /&gt;
This document is also available in the following languages: [http://forum.openvz.org/index.php?t=tree&amp;amp;amp;goto=35&amp;amp;amp;#msg_35 French],  [http://forum.openvz.org/index.php?t=tree&amp;amp;amp;goto=1805&amp;amp;amp;#msg_1805 German],&lt;br /&gt;
[http://wiki.openvz.jp Japanese],&lt;br /&gt;
[[Quick_installation_(Spanish)|Spanish]].&lt;br /&gt;
&lt;br /&gt;
OpenVZ consists of a kernel, user-level tools, and VE templates. This guide tells how to install the kernel and the tools.&lt;br /&gt;
&lt;br /&gt;
== Requirements ==&lt;br /&gt;
This guide assumes you are running a recent release of Fedora Core (like FC5) or RHEL/CentOS 4. Currently, the OpenVZ kernel tries to support the same hardware that Red Hat kernels support. For the full hardware compatibility list, see [http://www.swsoft.com/en/products/virtuozzo/hcl/ Virtuozzo HCL].&lt;br /&gt;
&lt;br /&gt;
=== Filesystems ===&lt;br /&gt;
It is recommended to use a separate partition for VEs' private directories (by default /vz/private/&amp;lt;veid&amp;gt;). The reason is that if you wish to use OpenVZ per-VE disk quota, you won't be able to use usual Linux disk quotas on the same partition. Bear in mind that per-VE quota in this context includes not only pure per-VE quota, but also the usual Linux disk quota used inside a VE, not on the [[HN]].&lt;br /&gt;
&lt;br /&gt;
At the very least, try to avoid using the root partition for VEs, because the root user of a VE will be able to overcome the 5% disk space barrier in some situations. This way the HN root partition can be completely filled, which will break the system.&lt;br /&gt;
&lt;br /&gt;
OpenVZ per-VE disk quota is supported only for ext2/ext3 filesystems. So use one of these filesystems (ext3 is recommended) if you need per-VE disk quota.&lt;br /&gt;
&lt;br /&gt;
=== rpm or yum? ===&lt;br /&gt;
&lt;br /&gt;
In case you have the yum utility available on your system, you may want to use it to install and update OpenVZ packages. In case you don't have yum, or don't want to use it, you can use plain old rpm. Instructions for both rpm and yum are provided below.&lt;br /&gt;
&lt;br /&gt;
=== yum pre-setup ===&lt;br /&gt;
If you want to use yum, you should set up OpenVZ yum repository first.&lt;br /&gt;
&lt;br /&gt;
Download the [http://download.openvz.org/openvz.repo openvz.repo] file and put it into your &amp;lt;code&amp;gt;/etc/yum.repos.d/&amp;lt;/code&amp;gt; directory. This can be achieved by the following commands, run as root:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# cd /etc/yum.repos.d&lt;br /&gt;
# wget http://download.openvz.org/openvz.repo&lt;br /&gt;
# rpm --import  http://download.openvz.org/RPM-GPG-Key-OpenVZ&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In case you cannot cd to /etc/yum.repos.d, it means either yum is not installed on your system, or your yum version is too old. In that case, just stick to the rpm installation method.&lt;br /&gt;
&lt;br /&gt;
== Kernel installation ==&lt;br /&gt;
&lt;br /&gt;
{{Note|In case you want to recompile the kernel yourself rather than use the one provided by OpenVZ, see [[kernel build]].}}&lt;br /&gt;
&lt;br /&gt;
First, you need to choose what “flavor” of the kernel you want to install. Please refer to [[Kernel flavors]] for more information.&lt;br /&gt;
&lt;br /&gt;
=== Using yum ===&lt;br /&gt;
Run the following command&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# yum install ovzkernel[-flavor]&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Here &amp;lt;code&amp;gt;[-flavor]&amp;lt;/code&amp;gt; is optional, and can be &amp;lt;code&amp;gt;-smp&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;-enterprise&amp;lt;/code&amp;gt;. Refer to [[kernel flavors]] for more info.&lt;br /&gt;
&lt;br /&gt;
=== Using rpm ===&lt;br /&gt;
Get the kernel binary RPM from the [http://openvz.org/download/kernel/ Download » Kernel] page, or directly from [http://download.openvz.org/kernel/ download.openvz.org/kernel], or from one of its [[Download mirrors|mirrors]]. You need only one kernel RPM so please [[Kernel flavors|choose the appropriate one]] depending on your hardware.&lt;br /&gt;
&lt;br /&gt;
Next, install the kernel RPM you chose:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# rpm -ihv ovzkernel[-flavor]*.rpm&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Here &amp;lt;code&amp;gt;[-flavor]&amp;lt;/code&amp;gt; is optional, and can be &amp;lt;code&amp;gt;-smp&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;-enterprise&amp;lt;/code&amp;gt;. Refer to [[kernel flavors]] for more info.&lt;br /&gt;
&lt;br /&gt;
{{Note|&amp;lt;tt&amp;gt;rpm -U&amp;lt;/tt&amp;gt; (where &amp;lt;tt&amp;gt;-U&amp;lt;/tt&amp;gt; stands for ''upgrade'') should '''not''' be used, otherwise all currently installed kernels will be uninstalled.}}&lt;br /&gt;
&lt;br /&gt;
== Configuring the bootloader ==&lt;br /&gt;
&lt;br /&gt;
In case GRUB is used as the boot loader, it will be configured automatically: lines similar to these will be added to the &amp;lt;tt&amp;gt;/boot/grub/grub.conf&amp;lt;/tt&amp;gt; file:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
title Fedora Core (2.6.8-022stab029.1)&lt;br /&gt;
       root (hd0,0)&lt;br /&gt;
       kernel /vmlinuz-2.6.8-022stab029.1 ro root=/dev/sda5 quiet rhgb vga=0x31B&lt;br /&gt;
       initrd /initrd-2.6.8-022stab029.1.img&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Change &amp;lt;tt&amp;gt;Fedora Core&amp;lt;/tt&amp;gt; to &amp;lt;tt&amp;gt;OpenVZ&amp;lt;/tt&amp;gt; (just for clarity, so the OpenVZ kernels will not be mixed up with non-OpenVZ ones). Remove extra arguments from the kernel line, leaving only the &amp;lt;tt&amp;gt;root=...&amp;lt;/tt&amp;gt; parameter. The modified portion of &amp;lt;tt&amp;gt;/boot/grub/grub.conf&amp;lt;/tt&amp;gt; should look like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
title OpenVZ (2.6.8-022stab029.1)&lt;br /&gt;
        root (hd0,0)&lt;br /&gt;
        kernel /vmlinuz-2.6.8-022stab029.1 ro root=/dev/sda5&lt;br /&gt;
        initrd /initrd-2.6.8-022stab029.1.img&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Configuring ==&lt;br /&gt;
&lt;br /&gt;
Please make sure the following steps are performed before rebooting into OpenVZ kernel.&lt;br /&gt;
&lt;br /&gt;
=== sysctl ===&lt;br /&gt;
&lt;br /&gt;
There are a number of kernel parameters that should be set for OpenVZ to work correctly. These parameters are stored in the &amp;lt;tt&amp;gt;/etc/sysctl.conf&amp;lt;/tt&amp;gt; file. Here is the relevant part of the file; please edit it accordingly.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# On Hardware Node we generally need&lt;br /&gt;
# packet forwarding enabled and proxy arp disabled&lt;br /&gt;
net.ipv4.ip_forward = 1&lt;br /&gt;
net.ipv4.conf.default.proxy_arp = 0&lt;br /&gt;
# Enables source route verification&lt;br /&gt;
net.ipv4.conf.all.rp_filter = 1&lt;br /&gt;
# Enables the magic-sysrq key&lt;br /&gt;
kernel.sysrq = 1&lt;br /&gt;
# TCP Explicit Congestion Notification&lt;br /&gt;
#net.ipv4.tcp_ecn = 0&lt;br /&gt;
# we do not want all our interfaces to send redirects&lt;br /&gt;
net.ipv4.conf.default.send_redirects = 1&lt;br /&gt;
net.ipv4.conf.all.send_redirects = 0&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
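To apply the new settings without a reboot, you can run:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# sysctl -p&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;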
&lt;br /&gt;
=== SELinux ===&lt;br /&gt;
&lt;br /&gt;
SELinux should be disabled. To do so, put the following line into &amp;lt;code&amp;gt;/etc/sysconfig/selinux&amp;lt;/code&amp;gt;:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
SELINUX=disabled&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
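This setting takes effect on the next reboot; to switch SELinux to permissive mode immediately, you can additionally run:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# setenforce 0&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;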
&lt;br /&gt;
=== Conntracks ===&lt;br /&gt;
&lt;br /&gt;
In the stable OpenVZ kernels (those that are 2.6.8-based) netfilter connection tracking for [[VE0]] is disabled by default. If you have a stateful firewall enabled on the host node (it is there by default) you should either disable it, or enable connection tracking for [[VE0]].&lt;br /&gt;
&lt;br /&gt;
To enable conntracks for VE0, add the following line to &amp;lt;code&amp;gt;/etc/modprobe.conf&amp;lt;/code&amp;gt; file:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
options ip_conntrack ip_conntrack_enable_ve0=1&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
{{Note|In kernels later than 2.6.8, connection tracking is enabled by default.}}&lt;br /&gt;
&lt;br /&gt;
== Rebooting into OpenVZ kernel ==&lt;br /&gt;
&lt;br /&gt;
Now reboot the machine and choose &amp;quot;OpenVZ&amp;quot; on the boot loader menu. If the OpenVZ kernel has been booted successfully, proceed to installing the user-level tools for OpenVZ.&lt;br /&gt;
&lt;br /&gt;
== Installing the utilities ==&lt;br /&gt;
&lt;br /&gt;
OpenVZ needs some user-level tools installed. Those are:&lt;br /&gt;
&lt;br /&gt;
; vzctl&lt;br /&gt;
:    A utility to control OpenVZ VPSs (create, destroy, start, stop, set parameters etc.)&lt;br /&gt;
; vzquota&lt;br /&gt;
:    A utility to manage quotas for VPSs. Mostly used indirectly (by vzctl).&lt;br /&gt;
&lt;br /&gt;
=== Using yum ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# yum install vzctl vzquota&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Using rpm ===&lt;br /&gt;
&lt;br /&gt;
Download the binary RPMs of these utilities from [http://openvz.org/download/utils/ Download » Utils], or directly from [http://download.openvz.org/utils/ download.openvz.org/utils], or from one of its [[Download mirrors|mirrors]]. Install them:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# rpm -Uhv vzctl*.rpm vzquota*.rpm&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If rpm complains about unresolved dependencies, you'll have to satisfy them first, then repeat the installation.&lt;br /&gt;
&lt;br /&gt;
When all the tools are installed, start the OpenVZ subsystem.&lt;br /&gt;
&lt;br /&gt;
== Starting OpenVZ ==&lt;br /&gt;
&lt;br /&gt;
As root, execute the following command:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# /sbin/service vz start&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This will load all the needed OpenVZ kernel modules. This script should also start all the VPSs marked to be auto-started on machine boot (there aren't any yet).&lt;br /&gt;
&lt;br /&gt;
During the next reboot, this script should be executed automatically.&lt;br /&gt;
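You can verify that the vz service is enabled for autostart with a command like:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# /sbin/chkconfig --list vz&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;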
&lt;br /&gt;
== Next steps ==&lt;br /&gt;
&lt;br /&gt;
OpenVZ is now set up on your machine. To load the OpenVZ kernel by default, edit the &amp;lt;tt&amp;gt;default&amp;lt;/tt&amp;gt; line in the /boot/grub/grub.conf file to point to the OpenVZ kernel. For example, if the OpenVZ kernel is the first kernel mentioned in the file, set &amp;lt;tt&amp;gt;default 0&amp;lt;/tt&amp;gt;. See man grub.conf for more details.&lt;br /&gt;
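For example, assuming the OpenVZ entry added above is the first one, the top of &lt;tt&gt;/boot/grub/grub.conf&lt;/tt&gt; might look like this:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
default 0&lt;br /&gt;
title OpenVZ (2.6.8-022stab029.1)&lt;br /&gt;
        root (hd0,0)&lt;br /&gt;
        kernel /vmlinuz-2.6.8-022stab029.1 ro root=/dev/sda5&lt;br /&gt;
        initrd /initrd-2.6.8-022stab029.1.img&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;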
&lt;br /&gt;
The next step is to prepare the [[OS template]]: please continue to [[OS template cache preparation]] document.&lt;br /&gt;
&lt;br /&gt;
[[Category: Installation]]&lt;br /&gt;
[[Category: HOWTO]]&lt;/div&gt;</summary>
		<author><name>Finist</name></author>
		
	</entry>
	<entry>
		<id>https://wiki.openvz.org/index.php?title=Using_private_IPs_for_Hardware_Nodes&amp;diff=3456</id>
		<title>Using private IPs for Hardware Nodes</title>
		<link rel="alternate" type="text/html" href="https://wiki.openvz.org/index.php?title=Using_private_IPs_for_Hardware_Nodes&amp;diff=3456"/>
		<updated>2007-09-13T13:57:39Z</updated>

		<summary type="html">&lt;p&gt;Finist: /etc/vz/vznet.conf doesn't hasve to be executable&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This article describes how to assign public IPs to VEs running on OVZ Hardware Nodes in case you have the following network topology:&lt;br /&gt;
[[Image:PrivateIPs_fig1.gif|An initial network topology]]&lt;br /&gt;
&lt;br /&gt;
== Prerequisites ==&lt;br /&gt;
This configuration was tested on a RHEL5 OVZ Hardware Node and a VE based on a Fedora Core 5 template.&lt;br /&gt;
Other host OSes and templates might require some configuration changes; please add the corresponding OS-specific changes if you face any.&amp;lt;br&amp;gt;&lt;br /&gt;
This article assumes the presence of the 'brctl', 'ip' and 'ifconfig' utilities, and thus might require installation of missing packages such as 'bridge-utils'/'iproute'/'net-tools' or others which contain those utilities.&lt;br /&gt;
&lt;br /&gt;
This article assumes you have already [[Quick installation|installed OpenVZ]], prepared the [[OS template cache]](s) and have [[Basic_operations_in_OpenVZ_environment|VE(s) created]]. If not, follow the links to perform the steps needed.&lt;br /&gt;
{{Note|don't assign an IP after VE creation.}}&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== (1) An OVZ Hardware Node has only one ethernet interface ==&lt;br /&gt;
(assume eth0)&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;u&amp;gt;Hardware Node configuration&amp;lt;/u&amp;gt; ===&lt;br /&gt;
&lt;br /&gt;
==== Create a bridge device ====&lt;br /&gt;
&amp;lt;pre&amp;gt;[HN]# brctl addbr br0&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Remove an IP from eth0 interface ====&lt;br /&gt;
&amp;lt;pre&amp;gt;[HN]# ifconfig eth0 0&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Add eth0 interface into the bridge ====&lt;br /&gt;
&amp;lt;pre&amp;gt;[HN]# brctl addif br0 eth0&amp;lt;/pre&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
==== Assign the IP to the bridge ====&lt;br /&gt;
(the same that was assigned on eth0 earlier)&lt;br /&gt;
&amp;lt;pre&amp;gt;[HN]# ifconfig br0 10.0.0.2/24&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Resurrect the default routing ====&lt;br /&gt;
&amp;lt;pre&amp;gt;[HN]# ip route add default via 10.0.0.1 dev br0&amp;lt;/pre&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
{{Note|if you are '''configuring''' the node '''remotely''' you '''must''' prepare a '''script''' with the above commands and run it in the background with redirected output, or you'll '''lose access''' to the Node.}}&lt;br /&gt;
&lt;br /&gt;
==== A script example ====&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[HN]# cat /tmp/br_add &lt;br /&gt;
#!/bin/bash&lt;br /&gt;
&lt;br /&gt;
brctl addbr br0&lt;br /&gt;
ifconfig eth0 0 &lt;br /&gt;
brctl addif br0 eth0 &lt;br /&gt;
ifconfig br0 10.0.0.2/24 &lt;br /&gt;
ip route add default via 10.0.0.1 dev br0&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;[HN]# /tmp/br_add &amp;gt;/dev/null 2&amp;gt;&amp;amp;1 &amp;amp;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
=== &amp;lt;u&amp;gt;VE configuration&amp;lt;/u&amp;gt; ===&lt;br /&gt;
&lt;br /&gt;
==== Start a VE ====&lt;br /&gt;
&amp;lt;pre&amp;gt;[HN]# vzctl start 101&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Add a [[Virtual_Ethernet_device|veth interface]] to the VE ====&lt;br /&gt;
&amp;lt;pre&amp;gt;[HN]# vzctl set 101 --netif_add eth0 --save&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Set up an IP to the newly created VE's veth interface ====&lt;br /&gt;
&amp;lt;pre&amp;gt;[HN]# vzctl exec 101 ifconfig eth0 85.86.87.195/26&amp;lt;/pre&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
==== Add the VE's veth interface to the bridge ====&lt;br /&gt;
&amp;lt;pre&amp;gt;[HN]# brctl addif br0 veth101.0&amp;lt;/pre&amp;gt;&lt;br /&gt;
{{Note|the veth interface will work as expected only after the bridge has put it into the forwarding state, which happens after some delay.&lt;br /&gt;
In the 2.6.18 kernel it is 15 sec by default.&lt;br /&gt;
&amp;lt;!-- /sys/class/net/$BR_NAME/bridge/forward_delay in SEC*USER_HZ --&amp;gt;}}&lt;br /&gt;
&lt;br /&gt;
==== Set up the default route for the VE ====&lt;br /&gt;
&amp;lt;pre&amp;gt;[HN]# vzctl exec 101 ip route add default via 85.86.87.193 dev eth0&amp;lt;/pre&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
==== (Optional) Add routes VE &amp;lt;-&amp;gt; HN ====&lt;br /&gt;
The configuration above makes the following connections available:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
VE X &amp;lt;-&amp;gt; VE Y (where VE X and VE Y can reside on any OVZ HN)&lt;br /&gt;
VE   &amp;lt;-&amp;gt; Internet&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
* VE accessibility from the HN depends on whether the local gateway provides NAT (probably yes).&lt;br /&gt;
* HN accessibility from a VE depends on whether the ISP gateway is aware of the local network addresses (most probably no).&lt;br /&gt;
&lt;br /&gt;
So, to provide VE &amp;lt;-&amp;gt; HN accessibility regardless of the gateways' configuration, you can add the following route rules:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[HN]# ip route add 85.86.87.195 dev br0&lt;br /&gt;
[HN]# vzctl exec 101 ip route add 10.0.0.2 dev eth0&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;u&amp;gt;The resulting OVZ Node configuration&amp;lt;/u&amp;gt; ===&lt;br /&gt;
[[Image:PrivateIPs_fig2.gif|The resulting OVZ Node configuration]]&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;u&amp;gt;Making the configuration persistent&amp;lt;/u&amp;gt; ===&lt;br /&gt;
&lt;br /&gt;
==== Set up a bridge on a HN ====&lt;br /&gt;
This can be done by configuring &amp;lt;code&amp;gt;ifcfg-*&amp;lt;/code&amp;gt; files located in &amp;lt;code&amp;gt;/etc/sysconfig/network-scripts/&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Assuming you had a configuration file (e.g. &amp;lt;code&amp;gt;ifcfg-eth0&amp;lt;/code&amp;gt;) like:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
DEVICE=eth0&lt;br /&gt;
ONBOOT=yes&lt;br /&gt;
IPADDR=10.0.0.2&lt;br /&gt;
NETMASK=255.255.255.0&lt;br /&gt;
GATEWAY=10.0.0.1&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
To have the bridge &amp;lt;code&amp;gt;br0&amp;lt;/code&amp;gt; created automatically, you can create &amp;lt;code&amp;gt;ifcfg-br0&amp;lt;/code&amp;gt;:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
DEVICE=br0&lt;br /&gt;
TYPE=Bridge&lt;br /&gt;
ONBOOT=yes&lt;br /&gt;
IPADDR=10.0.0.2&lt;br /&gt;
NETMASK=255.255.255.0&lt;br /&gt;
GATEWAY=10.0.0.1&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and edit &amp;lt;code&amp;gt;ifcfg-eth0&amp;lt;/code&amp;gt; file to add &amp;lt;code&amp;gt;eth0&amp;lt;/code&amp;gt; interface into the bridge &amp;lt;code&amp;gt;br0&amp;lt;/code&amp;gt;:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
DEVICE=eth0&lt;br /&gt;
ONBOOT=yes&lt;br /&gt;
BRIDGE=br0&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Edit the VE's configuration ====&lt;br /&gt;
Add some parameters to the &amp;lt;code&amp;gt;/etc/vz/conf/$VEID.conf&amp;lt;/code&amp;gt; which will be used during the network configuration:&lt;br /&gt;
* Add/change CONFIG_CUSTOMIZED=&amp;quot;yes&amp;quot; (indicates that a custom script should be run on a VE start)&lt;br /&gt;
* Add VETH_IP_ADDRESS=&amp;quot;&amp;lt;VE IP&amp;gt;/&amp;lt;MASK&amp;gt;&amp;quot; (a VE can have multiple IPs separated by spaces)&lt;br /&gt;
* Add VE_DEFAULT_GATEWAY=&amp;quot;&amp;lt;VE DEFAULT GATEWAY&amp;gt;&amp;quot;&lt;br /&gt;
* Add BRIDGEDEV=&amp;quot;&amp;lt;BRIDGE NAME&amp;gt;&amp;quot; (a bridge name to which the VE veth interface should be added)&lt;br /&gt;
&lt;br /&gt;
An example:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Network customization section&lt;br /&gt;
CONFIG_CUSTOMIZED=&amp;quot;yes&amp;quot;&lt;br /&gt;
VETH_IP_ADDRESS=&amp;quot;85.86.87.195/26&amp;quot;&lt;br /&gt;
VE_DEFAULT_GATEWAY=&amp;quot;85.86.87.193&amp;quot;&lt;br /&gt;
BRIDGEDEV=&amp;quot;br0&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Create a custom network configuration script ====&lt;br /&gt;
which should be called each time a VE is started (e.g. &amp;lt;code&amp;gt;/usr/sbin/vznetcfg.custom&amp;lt;/code&amp;gt;):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
# /usr/sbin/vznetcfg.custom&lt;br /&gt;
# a script to bring up bridged network interfaces (veth's) in a VE&lt;br /&gt;
&lt;br /&gt;
GLOBALCONFIGFILE=/etc/vz/vz.conf&lt;br /&gt;
VECONFIGFILE=/etc/vz/conf/$VEID.conf&lt;br /&gt;
vzctl=/usr/sbin/vzctl&lt;br /&gt;
brctl=/usr/sbin/brctl&lt;br /&gt;
ip=/sbin/ip&lt;br /&gt;
ifconfig=/sbin/ifconfig&lt;br /&gt;
. $GLOBALCONFIGFILE&lt;br /&gt;
. $VECONFIGFILE&lt;br /&gt;
&lt;br /&gt;
NETIF_OPTIONS=`echo $NETIF | sed 's/,/\n/g'`&lt;br /&gt;
for str in $NETIF_OPTIONS; do \&lt;br /&gt;
        # getting 'ifname' parameter value&lt;br /&gt;
        if [[ &amp;quot;$str&amp;quot; =~ &amp;quot;^ifname=&amp;quot; ]]; then&lt;br /&gt;
                # remove the parameter name from the string (along with '=')&lt;br /&gt;
                VEIFNAME=${str#*=};&lt;br /&gt;
        fi&lt;br /&gt;
        # getting 'host_ifname' parameter value&lt;br /&gt;
        if [[ &amp;quot;$str&amp;quot; =~ &amp;quot;^host_ifname=&amp;quot; ]]; then&lt;br /&gt;
                # remove the parameter name from the string (along with '=')&lt;br /&gt;
                VZHOSTIF=${str#*=};&lt;br /&gt;
        fi&lt;br /&gt;
done&lt;br /&gt;
&lt;br /&gt;
if [ ! -n &amp;quot;$VETH_IP_ADDRESS&amp;quot; ]; then&lt;br /&gt;
   echo &amp;quot;According to $VECONFIGFILE VE$VEID has no veth IPs configured.&amp;quot;&lt;br /&gt;
   exit 1&lt;br /&gt;
fi&lt;br /&gt;
&lt;br /&gt;
if [ ! -n &amp;quot;$VZHOSTIF&amp;quot; ]; then&lt;br /&gt;
   echo &amp;quot;According to $VECONFIGFILE VE$VEID has no veth interface configured.&amp;quot;&lt;br /&gt;
   exit 1&lt;br /&gt;
fi&lt;br /&gt;
&lt;br /&gt;
if [ ! -n &amp;quot;$VEIFNAME&amp;quot; ]; then&lt;br /&gt;
   echo &amp;quot;Corrupted $VECONFIGFILE: no 'ifname' defined for host_ifname $VZHOSTIF.&amp;quot;&lt;br /&gt;
   exit 1&lt;br /&gt;
fi&lt;br /&gt;
&lt;br /&gt;
echo &amp;quot;Initializing interface $VZHOSTIF for VE$VEID.&amp;quot;&lt;br /&gt;
$ifconfig $VZHOSTIF 0&lt;br /&gt;
&lt;br /&gt;
VEROUTEDEV=$VZHOSTIF&lt;br /&gt;
&lt;br /&gt;
if [ -n &amp;quot;$BRIDGEDEV&amp;quot; ]; then&lt;br /&gt;
   echo &amp;quot;Adding interface $VZHOSTIF to the bridge $BRIDGEDEV.&amp;quot;&lt;br /&gt;
   VEROUTEDEV=$BRIDGEDEV&lt;br /&gt;
   $brctl addif $BRIDGEDEV $VZHOSTIF&lt;br /&gt;
fi&lt;br /&gt;
&lt;br /&gt;
# Up the interface $VEIFNAME link in VE$VEID&lt;br /&gt;
$vzctl exec $VEID $ip link set $VEIFNAME up&lt;br /&gt;
&lt;br /&gt;
for IP in $VETH_IP_ADDRESS; do&lt;br /&gt;
   echo &amp;quot;Adding an IP $IP to the $VEIFNAME for VE$VEID.&amp;quot;&lt;br /&gt;
   $vzctl exec $VEID $ip address add $IP dev $VEIFNAME&lt;br /&gt;
&lt;br /&gt;
   # removing the netmask&lt;br /&gt;
   IP_STRIP=${IP%%/*};&lt;br /&gt;
&lt;br /&gt;
   echo &amp;quot;Adding a route from VE0 to VE$VEID.&amp;quot;&lt;br /&gt;
   $ip route add $IP_STRIP dev $VEROUTEDEV&lt;br /&gt;
done&lt;br /&gt;
&lt;br /&gt;
if [ -n &amp;quot;$VE0_IP&amp;quot; ]; then&lt;br /&gt;
   echo &amp;quot;Adding a route from VE$VEID to VE0.&amp;quot;&lt;br /&gt;
   $vzctl exec $VEID $ip route add $VE0_IP dev $VEIFNAME&lt;br /&gt;
fi&lt;br /&gt;
&lt;br /&gt;
if [ -n &amp;quot;$VE_DEFAULT_GATEWAY&amp;quot; ]; then&lt;br /&gt;
   echo &amp;quot;Setting $VE_DEFAULT_GATEWAY as a default gateway for VE$VEID.&amp;quot;&lt;br /&gt;
   $vzctl exec $VEID \&lt;br /&gt;
        $ip route add default via $VE_DEFAULT_GATEWAY dev $VEIFNAME&lt;br /&gt;
fi&lt;br /&gt;
&lt;br /&gt;
exit 0&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Make the script to be run on a VE start ====&lt;br /&gt;
In order to run above script on a VE start create the following &amp;lt;code&amp;gt;/etc/vz/vznet.conf&amp;lt;/code&amp;gt; file:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
EXTERNAL_SCRIPT=&amp;quot;/usr/sbin/vznetcfg.custom&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
{{Note|&amp;lt;code&amp;gt;/usr/sbin/vznetcfg.custom&amp;lt;/code&amp;gt; should be executable.}}&lt;br /&gt;
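To make the script executable, run:&lt;br /&gt;
&amp;lt;pre&amp;gt;[HN]# chmod a+x /usr/sbin/vznetcfg.custom&amp;lt;/pre&amp;gt;&lt;br /&gt;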
&lt;br /&gt;
==== Setting the route VE -&amp;gt; HN ====&lt;br /&gt;
To set up a route from a VE to the HN, the custom script has to know the HN IP (the $VE0_IP variable in the script). There are different approaches to specify it:&lt;br /&gt;
# Add an entry VE0_IP=&amp;quot;VE0 IP&amp;quot; to the &amp;lt;code&amp;gt;$VEID.conf&amp;lt;/code&amp;gt;&lt;br /&gt;
# Add an entry VE0_IP=&amp;quot;VE0 IP&amp;quot; to the &amp;lt;code&amp;gt;/etc/vz/vz.conf&amp;lt;/code&amp;gt; (the global configuration config file)&lt;br /&gt;
# Implement some smart algorithm to determine the VE0 IP right in the custom network configuration script&lt;br /&gt;
Every variant has its pros and cons; nevertheless, for a static HN IP configuration, variant 2 seems acceptable (and the simplest).&lt;br /&gt;
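For variant 2, using the HN IP from this article as an example (adjust to your Node), the line added to &amp;lt;code&amp;gt;/etc/vz/vz.conf&amp;lt;/code&amp;gt; would look like:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
VE0_IP=&amp;quot;10.0.0.2&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;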
&lt;br /&gt;
== (2) An OVZ Hardware Node has two ethernet interfaces ==&lt;br /&gt;
Assume you have 2 interfaces eth0 and eth1 and want to separate local traffic (10.0.0.0/24) from the external traffic.&lt;br /&gt;
Let's assign eth0 for the external traffic and eth1 for the local one.&lt;br /&gt;
&lt;br /&gt;
If there is no need to make a VE accessible from the HN and vice versa, it's enough to replace 'br0' with 'eth1' in the following steps of the above configuration:&lt;br /&gt;
* Hardware Node configuration -&amp;gt; [[Using_private_IPs_for_Hardware_Nodes#Assign_the_IP_to_the_bridge|Assign the IP to the bridge]]&lt;br /&gt;
* Hardware Node configuration -&amp;gt; [[Using_private_IPs_for_Hardware_Nodes#Resurrect_the_default_routing|Resurrect the default routing]]&lt;br /&gt;
&lt;br /&gt;
For VE &amp;lt;-&amp;gt; HN connection availability it is necessary to set a local IP on 'br0'.&lt;br /&gt;
&lt;br /&gt;
== (3) Putting VEs to different subnetworks ==&lt;br /&gt;
It's enough to set up the correct $VETH_IP_ADDRESS and $VE_DEFAULT_GATEWAY values in the &lt;br /&gt;
[[Using_private_IPs_for_Hardware_Nodes#Edit_the_VE.27s_configuration|above configuration]].&lt;br /&gt;
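For example, a hypothetical VE in another subnet could use values like the following (the addresses below are illustrative only; substitute your own):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
VETH_IP_ADDRESS=&amp;quot;192.168.1.10/24&amp;quot;&lt;br /&gt;
VE_DEFAULT_GATEWAY=&amp;quot;192.168.1.1&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;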
&lt;br /&gt;
== See also ==&lt;br /&gt;
* [[Virtual network device]]&lt;br /&gt;
* [[Differences between venet and veth]]&lt;br /&gt;
&lt;br /&gt;
[[Category: HOWTO]]&lt;br /&gt;
[[Category: Networking]]&lt;/div&gt;</summary>
		<author><name>Finist</name></author>
		
	</entry>
	<entry>
		<id>https://wiki.openvz.org/index.php?title=Using_private_IPs_for_Hardware_Nodes&amp;diff=3455</id>
		<title>Using private IPs for Hardware Nodes</title>
		<link rel="alternate" type="text/html" href="https://wiki.openvz.org/index.php?title=Using_private_IPs_for_Hardware_Nodes&amp;diff=3455"/>
		<updated>2007-09-13T13:49:36Z</updated>

		<summary type="html">&lt;p&gt;Finist: Initialize a veth interface on HN only once.&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This article describes how to assign public IPs to VEs running on OVZ Hardware Nodes in case you have the following network topology:&lt;br /&gt;
[[Image:PrivateIPs_fig1.gif|An initial network topology]]&lt;br /&gt;
&lt;br /&gt;
== Prerequisites ==&lt;br /&gt;
This configuration was tested on a RHEL5 OVZ Hardware Node and a VE based on a Fedora Core 5 template.&lt;br /&gt;
Other host OSes and templates might require some configuration changes; please add the corresponding OS-specific changes if you face any.&amp;lt;br&amp;gt;&lt;br /&gt;
This article assumes the presence of the 'brctl', 'ip' and 'ifconfig' utilities, and thus might require installation of missing packages such as 'bridge-utils'/'iproute'/'net-tools' or others which contain those utilities.&lt;br /&gt;
&lt;br /&gt;
This article assumes you have already [[Quick installation|installed OpenVZ]], prepared the [[OS template cache]](s) and have [[Basic_operations_in_OpenVZ_environment|VE(s) created]]. If not, follow the links to perform the steps needed.&lt;br /&gt;
{{Note|don't assign an IP after VE creation.}}&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== (1) An OVZ Hardware Node has only one ethernet interface ==&lt;br /&gt;
(assume eth0)&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;u&amp;gt;Hardware Node configuration&amp;lt;/u&amp;gt; ===&lt;br /&gt;
&lt;br /&gt;
==== Create a bridge device ====&lt;br /&gt;
&amp;lt;pre&amp;gt;[HN]# brctl addbr br0&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Remove an IP from eth0 interface ====&lt;br /&gt;
&amp;lt;pre&amp;gt;[HN]# ifconfig eth0 0&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Add eth0 interface into the bridge ====&lt;br /&gt;
&amp;lt;pre&amp;gt;[HN]# brctl addif br0 eth0&amp;lt;/pre&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
==== Assign the IP to the bridge ====&lt;br /&gt;
(the same that was assigned on eth0 earlier)&lt;br /&gt;
&amp;lt;pre&amp;gt;[HN]# ifconfig br0 10.0.0.2/24&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Resurrect the default routing ====&lt;br /&gt;
&amp;lt;pre&amp;gt;[HN]# ip route add default via 10.0.0.1 dev br0&amp;lt;/pre&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
{{Note|if you are '''configuring''' the node '''remotely''' you '''must''' prepare a '''script''' with the above commands and run it in the background with redirected output, or you'll '''lose access''' to the Node.}}&lt;br /&gt;
&lt;br /&gt;
==== A script example ====&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[HN]# cat /tmp/br_add &lt;br /&gt;
#!/bin/bash&lt;br /&gt;
&lt;br /&gt;
brctl addbr br0&lt;br /&gt;
ifconfig eth0 0 &lt;br /&gt;
brctl addif br0 eth0 &lt;br /&gt;
ifconfig br0 10.0.0.2/24 &lt;br /&gt;
ip route add default via 10.0.0.1 dev br0&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;[HN]# /tmp/br_add &amp;gt;/dev/null 2&amp;gt;&amp;amp;1 &amp;amp;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
=== &amp;lt;u&amp;gt;VE configuration&amp;lt;/u&amp;gt; ===&lt;br /&gt;
&lt;br /&gt;
==== Start a VE ====&lt;br /&gt;
&amp;lt;pre&amp;gt;[HN]# vzctl start 101&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Add a [[Virtual_Ethernet_device|veth interface]] to the VE ====&lt;br /&gt;
&amp;lt;pre&amp;gt;[HN]# vzctl set 101 --netif_add eth0 --save&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Set up an IP to the newly created VE's veth interface ====&lt;br /&gt;
&amp;lt;pre&amp;gt;[HN]# vzctl exec 101 ifconfig eth0 85.86.87.195/26&amp;lt;/pre&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
==== Add the VE's veth interface to the bridge ====&lt;br /&gt;
&amp;lt;pre&amp;gt;[HN]# brctl addif br0 veth101.0&amp;lt;/pre&amp;gt;&lt;br /&gt;
{{Note|the veth interface will work as expected only after the bridge has put it into the forwarding state, which happens after some delay.&lt;br /&gt;
In the 2.6.18 kernel it is 15 sec by default.&lt;br /&gt;
&amp;lt;!-- /sys/class/net/$BR_NAME/bridge/forward_delay in SEC*USER_HZ --&amp;gt;}}&lt;br /&gt;
&lt;br /&gt;
==== Set up the default route for the VE ====&lt;br /&gt;
&amp;lt;pre&amp;gt;[HN]# vzctl exec 101 ip route add default via 85.86.87.193 dev eth0&amp;lt;/pre&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
==== (Optional) Add routes VE &amp;lt;-&amp;gt; HN ====&lt;br /&gt;
The configuration above makes the following connections available:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
VE X &amp;lt;-&amp;gt; VE Y (where VE X and VE Y can reside on any OVZ HN)&lt;br /&gt;
VE   &amp;lt;-&amp;gt; Internet&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
* VE accessibility from the HN depends on whether the local gateway provides NAT (probably yes).&lt;br /&gt;
* HN accessibility from a VE depends on whether the ISP gateway is aware of the local network addresses (most probably no).&lt;br /&gt;
&lt;br /&gt;
So, to provide VE &amp;lt;-&amp;gt; HN accessibility regardless of the gateways' configuration, you can add the following route rules:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[HN]# ip route add 85.86.87.195 dev br0&lt;br /&gt;
[HN]# vzctl exec 101 ip route add 10.0.0.2 dev eth0&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;u&amp;gt;The resulting OVZ Node configuration&amp;lt;/u&amp;gt; ===&lt;br /&gt;
[[Image:PrivateIPs_fig2.gif|The resulting OVZ Node configuration]]&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;u&amp;gt;Making the configuration persistent&amp;lt;/u&amp;gt; ===&lt;br /&gt;
&lt;br /&gt;
==== Set up a bridge on a HN ====&lt;br /&gt;
This can be done by configuring &amp;lt;code&amp;gt;ifcfg-*&amp;lt;/code&amp;gt; files located in &amp;lt;code&amp;gt;/etc/sysconfig/network-scripts/&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Assuming you had a configuration file (e.g. &amp;lt;code&amp;gt;ifcfg-eth0&amp;lt;/code&amp;gt;) like:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
DEVICE=eth0&lt;br /&gt;
ONBOOT=yes&lt;br /&gt;
IPADDR=10.0.0.2&lt;br /&gt;
NETMASK=255.255.255.0&lt;br /&gt;
GATEWAY=10.0.0.1&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
To have the bridge &amp;lt;code&amp;gt;br0&amp;lt;/code&amp;gt; created automatically, you can create &amp;lt;code&amp;gt;ifcfg-br0&amp;lt;/code&amp;gt;:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
DEVICE=br0&lt;br /&gt;
TYPE=Bridge&lt;br /&gt;
ONBOOT=yes&lt;br /&gt;
IPADDR=10.0.0.2&lt;br /&gt;
NETMASK=255.255.255.0&lt;br /&gt;
GATEWAY=10.0.0.1&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and edit &amp;lt;code&amp;gt;ifcfg-eth0&amp;lt;/code&amp;gt; file to add &amp;lt;code&amp;gt;eth0&amp;lt;/code&amp;gt; interface into the bridge &amp;lt;code&amp;gt;br0&amp;lt;/code&amp;gt;:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
DEVICE=eth0&lt;br /&gt;
ONBOOT=yes&lt;br /&gt;
BRIDGE=br0&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Edit the VE's configuration ====&lt;br /&gt;
Add some parameters to the &amp;lt;code&amp;gt;/etc/vz/conf/$VEID.conf&amp;lt;/code&amp;gt; which will be used during the network configuration:&lt;br /&gt;
* Add/change CONFIG_CUSTOMIZED=&amp;quot;yes&amp;quot; (indicates that a custom script should be run on a VE start)&lt;br /&gt;
* Add VETH_IP_ADDRESS=&amp;quot;&amp;lt;VE IP&amp;gt;/&amp;lt;MASK&amp;gt;&amp;quot; (a VE can have multiple IPs separated by spaces)&lt;br /&gt;
* Add VE_DEFAULT_GATEWAY=&amp;quot;&amp;lt;VE DEFAULT GATEWAY&amp;gt;&amp;quot;&lt;br /&gt;
* Add BRIDGEDEV=&amp;quot;&amp;lt;BRIDGE NAME&amp;gt;&amp;quot; (a bridge name to which the VE veth interface should be added)&lt;br /&gt;
&lt;br /&gt;
An example:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Network customization section&lt;br /&gt;
CONFIG_CUSTOMIZED=&amp;quot;yes&amp;quot;&lt;br /&gt;
VETH_IP_ADDRESS=&amp;quot;85.86.87.195/26&amp;quot;&lt;br /&gt;
VE_DEFAULT_GATEWAY=&amp;quot;85.86.87.193&amp;quot;&lt;br /&gt;
BRIDGEDEV=&amp;quot;br0&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Create a custom network configuration script ====&lt;br /&gt;
which will be called each time a VE is started (e.g. &amp;lt;code&amp;gt;/usr/sbin/vznetcfg.custom&amp;lt;/code&amp;gt;):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
# /usr/sbin/vznetcfg.custom&lt;br /&gt;
# a script to bring up bridged network interfaces (veth's) in a VE&lt;br /&gt;
&lt;br /&gt;
GLOBALCONFIGFILE=/etc/vz/vz.conf&lt;br /&gt;
VECONFIGFILE=/etc/vz/conf/$VEID.conf&lt;br /&gt;
vzctl=/usr/sbin/vzctl&lt;br /&gt;
brctl=/usr/sbin/brctl&lt;br /&gt;
ip=/sbin/ip&lt;br /&gt;
ifconfig=/sbin/ifconfig&lt;br /&gt;
. $GLOBALCONFIGFILE&lt;br /&gt;
. $VECONFIGFILE&lt;br /&gt;
&lt;br /&gt;
NETIF_OPTIONS=`echo $NETIF | sed 's/,/\n/g'`&lt;br /&gt;
for str in $NETIF_OPTIONS; do \&lt;br /&gt;
        # getting 'ifname' parameter value&lt;br /&gt;
        if [[ &amp;quot;$str&amp;quot; =~ ^ifname= ]]; then&lt;br /&gt;
                # remove the parameter name from the string (along with '=')&lt;br /&gt;
                VEIFNAME=${str#*=};&lt;br /&gt;
        fi&lt;br /&gt;
        # getting 'host_ifname' parameter value&lt;br /&gt;
        if [[ &amp;quot;$str&amp;quot; =~ ^host_ifname= ]]; then&lt;br /&gt;
                # remove the parameter name from the string (along with '=')&lt;br /&gt;
                VZHOSTIF=${str#*=};&lt;br /&gt;
        fi&lt;br /&gt;
done&lt;br /&gt;
&lt;br /&gt;
if [ ! -n &amp;quot;$VETH_IP_ADDRESS&amp;quot; ]; then&lt;br /&gt;
   echo &amp;quot;According to $VECONFIGFILE VE$VEID has no veth IPs configured.&amp;quot;&lt;br /&gt;
   exit 1&lt;br /&gt;
fi&lt;br /&gt;
&lt;br /&gt;
if [ ! -n &amp;quot;$VZHOSTIF&amp;quot; ]; then&lt;br /&gt;
   echo &amp;quot;According to $VECONFIGFILE VE$VEID has no veth interface configured.&amp;quot;&lt;br /&gt;
   exit 1&lt;br /&gt;
fi&lt;br /&gt;
&lt;br /&gt;
if [ ! -n &amp;quot;$VEIFNAME&amp;quot; ]; then&lt;br /&gt;
   echo &amp;quot;Corrupted $VECONFIGFILE: no 'ifname' defined for host_ifname $VZHOSTIF.&amp;quot;&lt;br /&gt;
   exit 1&lt;br /&gt;
fi&lt;br /&gt;
&lt;br /&gt;
echo &amp;quot;Initializing interface $VZHOSTIF for VE$VEID.&amp;quot;&lt;br /&gt;
$ifconfig $VZHOSTIF 0&lt;br /&gt;
&lt;br /&gt;
VEROUTEDEV=$VZHOSTIF&lt;br /&gt;
&lt;br /&gt;
if [ -n &amp;quot;$BRIDGEDEV&amp;quot; ]; then&lt;br /&gt;
   echo &amp;quot;Adding interface $VZHOSTIF to the bridge $BRIDGEDEV.&amp;quot;&lt;br /&gt;
   VEROUTEDEV=$BRIDGEDEV&lt;br /&gt;
   $brctl addif $BRIDGEDEV $VZHOSTIF&lt;br /&gt;
fi&lt;br /&gt;
&lt;br /&gt;
# Up the interface $VEIFNAME link in VE$VEID&lt;br /&gt;
$vzctl exec $VEID $ip link set $VEIFNAME up&lt;br /&gt;
&lt;br /&gt;
for IP in $VETH_IP_ADDRESS; do&lt;br /&gt;
   echo &amp;quot;Adding an IP $IP to the $VEIFNAME for VE$VEID.&amp;quot;&lt;br /&gt;
   $vzctl exec $VEID $ip address add $IP dev $VEIFNAME&lt;br /&gt;
&lt;br /&gt;
   # removing the netmask&lt;br /&gt;
   IP_STRIP=${IP%%/*};&lt;br /&gt;
&lt;br /&gt;
   echo &amp;quot;Adding a route from VE0 to VE$VEID.&amp;quot;&lt;br /&gt;
   $ip route add $IP_STRIP dev $VEROUTEDEV&lt;br /&gt;
done&lt;br /&gt;
&lt;br /&gt;
if [ -n &amp;quot;$VE0_IP&amp;quot; ]; then&lt;br /&gt;
   echo &amp;quot;Adding a route from VE$VEID to VE0.&amp;quot;&lt;br /&gt;
   $vzctl exec $VEID $ip route add $VE0_IP dev $VEIFNAME&lt;br /&gt;
fi&lt;br /&gt;
&lt;br /&gt;
if [ -n &amp;quot;$VE_DEFAULT_GATEWAY&amp;quot; ]; then&lt;br /&gt;
   echo &amp;quot;Setting $VE_DEFAULT_GATEWAY as a default gateway for VE$VEID.&amp;quot;&lt;br /&gt;
   $vzctl exec $VEID \&lt;br /&gt;
        $ip route add default via $VE_DEFAULT_GATEWAY dev $VEIFNAME&lt;br /&gt;
fi&lt;br /&gt;
&lt;br /&gt;
exit 0&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Make the script run on VE start ====&lt;br /&gt;
In order to run the above script on a VE start, create the following &amp;lt;code&amp;gt;/etc/vz/vznet.conf&amp;lt;/code&amp;gt; file:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
EXTERNAL_SCRIPT=&amp;quot;/usr/sbin/vznetcfg.custom&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
{{Note|both &amp;lt;code&amp;gt;/etc/vz/vznet.conf&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;/usr/sbin/vznetcfg.custom&amp;lt;/code&amp;gt; should be executable files.}}&lt;br /&gt;
&lt;br /&gt;
==== Setting the route VE -&amp;gt; HN ====&lt;br /&gt;
To set up a route from a VE to the HN, the custom script has to know the HN IP (the $VE0_IP variable in the script). There are several approaches to specifying it:&lt;br /&gt;
# Add an entry VE0_IP=&amp;quot;VE0 IP&amp;quot; to the &amp;lt;code&amp;gt;$VEID.conf&amp;lt;/code&amp;gt;&lt;br /&gt;
# Add an entry VE0_IP=&amp;quot;VE0 IP&amp;quot; to the &amp;lt;code&amp;gt;/etc/vz/vz.conf&amp;lt;/code&amp;gt; (the global configuration config file)&lt;br /&gt;
# Implement some smart algorithm to determine the VE0 IP right in the custom network configuration script&lt;br /&gt;
Every variant has its pros and cons; nevertheless, for a static HN IP configuration, variant 2 seems acceptable (and is the simplest).&lt;br /&gt;
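As a minimal sketch of variant 2 (illustrative only; a temporary file stands in for the real &amp;lt;code&amp;gt;/etc/vz/vz.conf&amp;lt;/code&amp;gt;), the custom script picks up $VE0_IP simply by sourcing the global config:&lt;br /&gt;

```shell
# Illustrative only: a temp file stands in for /etc/vz/vz.conf.
conf=$(mktemp)
echo 'VE0_IP="10.0.0.2"' >> "$conf"   # the HN (VE0) IP from the article

# vznetcfg.custom sources the global config, so $VE0_IP becomes visible:
. "$conf"
echo "VE0 IP is $VE0_IP"              # prints: VE0 IP is 10.0.0.2
rm -f "$conf"
```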
&lt;br /&gt;
== (2) An OVZ Hardware Node has two ethernet interfaces ==&lt;br /&gt;
Assume you have two interfaces, eth0 and eth1, and want to separate the local traffic (10.0.0.0/24) from the external traffic.&lt;br /&gt;
Let's assign eth0 to the external traffic and eth1 to the local one.&lt;br /&gt;
&lt;br /&gt;
If there is no need to make a VE accessible from the HN and vice versa, it's enough to replace 'br0' with 'eth1' in the following steps of the above configuration:&lt;br /&gt;
* Hardware Node configuration -&amp;gt; [[Using_private_IPs_for_Hardware_Nodes#Assign_the_IP_to_the_bridge|Assign the IP to the bridge]]&lt;br /&gt;
* Hardware Node configuration -&amp;gt; [[Using_private_IPs_for_Hardware_Nodes#Resurrect_the_default_routing|Resurrect the default routing]]&lt;br /&gt;
&lt;br /&gt;
For VE &amp;lt;-&amp;gt; HN connections to be available, it's necessary to assign a (local) IP to 'br0'.&lt;br /&gt;
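For the persistent variant of this two-interface setup, the same &amp;lt;code&amp;gt;ifcfg-*&amp;lt;/code&amp;gt; approach applies. A sketch of a hypothetical &amp;lt;code&amp;gt;ifcfg-eth1&amp;lt;/code&amp;gt;, reusing the local addresses from the example above (adjust to your network):&lt;br /&gt;

```shell
# Sketch: /etc/sysconfig/network-scripts/ifcfg-eth1
# (hypothetical; eth1 carries the local 10.0.0.0/24 traffic)
DEVICE=eth1
ONBOOT=yes
IPADDR=10.0.0.2
NETMASK=255.255.255.0
GATEWAY=10.0.0.1
```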
&lt;br /&gt;
== (3) Putting VEs into different subnetworks ==&lt;br /&gt;
It's enough to set up the correct $VETH_IP_ADDRESS and $VE_DEFAULT_GATEWAY values in the &lt;br /&gt;
[[Using_private_IPs_for_Hardware_Nodes#Edit_the_VE.27s_configuration|above configuration]].&lt;br /&gt;
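For example, a sketch of the relevant &amp;lt;code&amp;gt;$VEID.conf&amp;lt;/code&amp;gt; values for putting a VE into a different subnet (the addresses here are hypothetical):&lt;br /&gt;

```shell
# Sketch of $VEID.conf values for a VE in 192.168.10.0/24 (hypothetical)
CONFIG_CUSTOMIZED="yes"
VETH_IP_ADDRESS="192.168.10.5/24"
VE_DEFAULT_GATEWAY="192.168.10.1"
BRIDGEDEV="br0"
```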
&lt;br /&gt;
== See also ==&lt;br /&gt;
* [[Virtual network device]]&lt;br /&gt;
* [[Differences between venet and veth]]&lt;br /&gt;
&lt;br /&gt;
[[Category: HOWTO]]&lt;br /&gt;
[[Category: Networking]]&lt;/div&gt;</summary>
		<author><name>Finist</name></author>
		
	</entry>
	<entry>
		<id>https://wiki.openvz.org/index.php?title=Using_private_IPs_for_Hardware_Nodes&amp;diff=3454</id>
		<title>Using private IPs for Hardware Nodes</title>
		<link rel="alternate" type="text/html" href="https://wiki.openvz.org/index.php?title=Using_private_IPs_for_Hardware_Nodes&amp;diff=3454"/>
		<updated>2007-09-13T12:36:58Z</updated>

		<summary type="html">&lt;p&gt;Finist: A device newly added to the bridge starts working only after some delay.&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This article describes how to assign public IPs to VEs running on OVZ Hardware Nodes in case you have the following network topology:&lt;br /&gt;
[[Image:PrivateIPs_fig1.gif|An initial network topology]]&lt;br /&gt;
&lt;br /&gt;
== Prerequisites ==&lt;br /&gt;
This configuration was tested on a RHEL5 OVZ Hardware Node and a VE based on a Fedora Core 5 template.&lt;br /&gt;
Other host OSes and templates might require some configuration changes; please add the corresponding OS-specific changes if you've faced any.&amp;lt;br&amp;gt;&lt;br /&gt;
This article assumes the presence of the 'brctl', 'ip' and 'ifconfig' utilities, and thus might require the installation of missing packages such as 'bridge-utils', 'iproute' or 'net-tools', which contain those utilities.&lt;br /&gt;
&lt;br /&gt;
This article assumes you have already [[Quick installation|installed OpenVZ]], prepared the [[OS template cache]](s) and have [[Basic_operations_in_OpenVZ_environment|VE(s) created]]. If not, follow the links to perform the steps needed.&lt;br /&gt;
{{Note|don't assign an IP after VE creation.}}&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== (1) An OVZ Hardware Node has only one ethernet interface ==&lt;br /&gt;
(assume eth0)&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;u&amp;gt;Hardware Node configuration&amp;lt;/u&amp;gt; ===&lt;br /&gt;
&lt;br /&gt;
==== Create a bridge device ====&lt;br /&gt;
&amp;lt;pre&amp;gt;[HN]# brctl addbr br0&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Remove an IP from eth0 interface ====&lt;br /&gt;
&amp;lt;pre&amp;gt;[HN]# ifconfig eth0 0&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Add eth0 interface into the bridge ====&lt;br /&gt;
&amp;lt;pre&amp;gt;[HN]# brctl addif br0 eth0&amp;lt;/pre&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
==== Assign the IP to the bridge ====&lt;br /&gt;
(the same one that was assigned to eth0 earlier)&lt;br /&gt;
&amp;lt;pre&amp;gt;[HN]# ifconfig br0 10.0.0.2/24&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Resurrect the default routing ====&lt;br /&gt;
&amp;lt;pre&amp;gt;[HN]# ip route add default via 10.0.0.1 dev br0&amp;lt;/pre&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
{{Note|if you are '''configuring''' the node '''remotely''' you '''must''' prepare a '''script''' with the above commands and run it in background with the redirected output or you'll '''lose the access''' to the Node.}}&lt;br /&gt;
&lt;br /&gt;
==== A script example ====&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[HN]# cat /tmp/br_add &lt;br /&gt;
#!/bin/bash&lt;br /&gt;
&lt;br /&gt;
brctl addbr br0&lt;br /&gt;
ifconfig eth0 0 &lt;br /&gt;
brctl addif br0 eth0 &lt;br /&gt;
ifconfig br0 10.0.0.2/24 &lt;br /&gt;
ip route add default via 10.0.0.1 dev br0&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;[HN]# /tmp/br_add &amp;gt;/dev/null 2&amp;gt;&amp;amp;1 &amp;amp;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
=== &amp;lt;u&amp;gt;VE configuration&amp;lt;/u&amp;gt; ===&lt;br /&gt;
&lt;br /&gt;
==== Start a VE ====&lt;br /&gt;
&amp;lt;pre&amp;gt;[HN]# vzctl start 101&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Add a [[Virtual_Ethernet_device|veth interface]] to the VE ====&lt;br /&gt;
&amp;lt;pre&amp;gt;[HN]# vzctl set 101 --netif_add eth0 --save&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Set up an IP to the newly created VE's veth interface ====&lt;br /&gt;
&amp;lt;pre&amp;gt;[HN]# vzctl exec 101 ifconfig eth0 85.86.87.195/26&amp;lt;/pre&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
==== Add the VE's veth interface to the bridge ====&lt;br /&gt;
&amp;lt;pre&amp;gt;[HN]# brctl addif br0 veth101.0&amp;lt;/pre&amp;gt;&lt;br /&gt;
{{Note|the veth interface will work as expected only after the bridge turns it into the forwarding state, which happens after some delay.&lt;br /&gt;
In the 2.6.18 kernel this delay is 15 seconds by default.&lt;br /&gt;
&amp;lt;!-- /sys/class/net/$BR_NAME/bridge/forward_delay in SEC*USER_HZ --&amp;gt;}}&lt;br /&gt;
&lt;br /&gt;
==== Set up the default route for the VE ====&lt;br /&gt;
&amp;lt;pre&amp;gt;[HN]# vzctl exec 101 ip route add default via 85.86.87.193 dev eth0&amp;lt;/pre&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
==== (Optional) Add routes VE &amp;lt;-&amp;gt; HN ====&lt;br /&gt;
The configuration above makes the following connections available:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
VE X &amp;lt;-&amp;gt; VE Y (where VE X and VE Y can locate on any OVZ HN)&lt;br /&gt;
VE   &amp;lt;-&amp;gt; Internet&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
* VE accessibility from the HN depends on whether the local gateway provides NAT (probably yes).&lt;br /&gt;
* HN accessibility from a VE depends on whether the ISP gateway knows about the local network addresses (most probably no).&lt;br /&gt;
&lt;br /&gt;
So, to provide VE &amp;lt;-&amp;gt; HN accessibility regardless of the gateways' configuration, you can add the following route rules:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[HN]# ip route add 85.86.87.195 dev br0&lt;br /&gt;
[HN]# vzctl exec 101 ip route add 10.0.0.2 dev eth0&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;u&amp;gt;The resulting OVZ Node configuration&amp;lt;/u&amp;gt; ===&lt;br /&gt;
[[Image:PrivateIPs_fig2.gif|The resulting OVZ Node configuration]]&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;u&amp;gt;Making the configuration persistent&amp;lt;/u&amp;gt; ===&lt;br /&gt;
&lt;br /&gt;
==== Set up a bridge on a HN ====&lt;br /&gt;
This can be done by configuring &amp;lt;code&amp;gt;ifcfg-*&amp;lt;/code&amp;gt; files located in &amp;lt;code&amp;gt;/etc/sysconfig/network-scripts/&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Assuming you had a configuration file (e.g. &amp;lt;code&amp;gt;ifcfg-eth0&amp;lt;/code&amp;gt;) like:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
DEVICE=eth0&lt;br /&gt;
ONBOOT=yes&lt;br /&gt;
IPADDR=10.0.0.2&lt;br /&gt;
NETMASK=255.255.255.0&lt;br /&gt;
GATEWAY=10.0.0.1&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
To have the bridge &amp;lt;code&amp;gt;br0&amp;lt;/code&amp;gt; created automatically, create &amp;lt;code&amp;gt;ifcfg-br0&amp;lt;/code&amp;gt;:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
DEVICE=br0&lt;br /&gt;
TYPE=Bridge&lt;br /&gt;
ONBOOT=yes&lt;br /&gt;
IPADDR=10.0.0.2&lt;br /&gt;
NETMASK=255.255.255.0&lt;br /&gt;
GATEWAY=10.0.0.1&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and edit &amp;lt;code&amp;gt;ifcfg-eth0&amp;lt;/code&amp;gt; file to add &amp;lt;code&amp;gt;eth0&amp;lt;/code&amp;gt; interface into the bridge &amp;lt;code&amp;gt;br0&amp;lt;/code&amp;gt;:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
DEVICE=eth0&lt;br /&gt;
ONBOOT=yes&lt;br /&gt;
BRIDGE=br0&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Edit the VE's configuration ====&lt;br /&gt;
Add the following parameters to &amp;lt;code&amp;gt;/etc/vz/conf/$VEID.conf&amp;lt;/code&amp;gt;; they will be used during network configuration:&lt;br /&gt;
* Add/change CONFIG_CUSTOMIZED=&amp;quot;yes&amp;quot; (indicates that a custom script should be run on a VE start)&lt;br /&gt;
* Add VETH_IP_ADDRESS=&amp;quot;&amp;lt;VE IP&amp;gt;/&amp;lt;MASK&amp;gt;&amp;quot; (a VE can have multiple IPs separated by spaces)&lt;br /&gt;
* Add VE_DEFAULT_GATEWAY=&amp;quot;&amp;lt;VE DEFAULT GATEWAY&amp;gt;&amp;quot;&lt;br /&gt;
* Add BRIDGEDEV=&amp;quot;&amp;lt;BRIDGE NAME&amp;gt;&amp;quot; (a bridge name to which the VE veth interface should be added)&lt;br /&gt;
&lt;br /&gt;
An example:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Network customization section&lt;br /&gt;
CONFIG_CUSTOMIZED=&amp;quot;yes&amp;quot;&lt;br /&gt;
VETH_IP_ADDRESS=&amp;quot;85.86.87.195/26&amp;quot;&lt;br /&gt;
VE_DEFAULT_GATEWAY=&amp;quot;85.86.87.193&amp;quot;&lt;br /&gt;
BRIDGEDEV=&amp;quot;br0&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Create a custom network configuration script ====&lt;br /&gt;
which will be called each time a VE is started (e.g. &amp;lt;code&amp;gt;/usr/sbin/vznetcfg.custom&amp;lt;/code&amp;gt;):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
# /usr/sbin/vznetcfg.custom&lt;br /&gt;
# a script to bring up bridged network interfaces (veth's) in a VE&lt;br /&gt;
&lt;br /&gt;
GLOBALCONFIGFILE=/etc/vz/vz.conf&lt;br /&gt;
VECONFIGFILE=/etc/vz/conf/$VEID.conf&lt;br /&gt;
vzctl=/usr/sbin/vzctl&lt;br /&gt;
ip=/sbin/ip&lt;br /&gt;
. $GLOBALCONFIGFILE&lt;br /&gt;
. $VECONFIGFILE&lt;br /&gt;
&lt;br /&gt;
NETIF_OPTIONS=`echo $NETIF | sed 's/,/\n/g'`&lt;br /&gt;
for str in $NETIF_OPTIONS; do \&lt;br /&gt;
        # getting 'ifname' parameter value&lt;br /&gt;
        if [[ &amp;quot;$str&amp;quot; =~ ^ifname= ]]; then&lt;br /&gt;
                # remove the parameter name from the string (along with '=')&lt;br /&gt;
                VEIFNAME=${str#*=};&lt;br /&gt;
        fi&lt;br /&gt;
        # getting 'host_ifname' parameter value&lt;br /&gt;
        if [[ &amp;quot;$str&amp;quot; =~ ^host_ifname= ]]; then&lt;br /&gt;
                # remove the parameter name from the string (along with '=')&lt;br /&gt;
                VZHOSTIF=${str#*=};&lt;br /&gt;
        fi&lt;br /&gt;
done&lt;br /&gt;
&lt;br /&gt;
if [ ! -n &amp;quot;$VETH_IP_ADDRESS&amp;quot; ]; then&lt;br /&gt;
   echo &amp;quot;According to $VECONFIGFILE VE$VEID has no veth IPs configured.&amp;quot;&lt;br /&gt;
   exit 1&lt;br /&gt;
fi&lt;br /&gt;
&lt;br /&gt;
if [ ! -n &amp;quot;$VZHOSTIF&amp;quot; ]; then&lt;br /&gt;
   echo &amp;quot;According to $VECONFIGFILE VE$VEID has no veth interface configured.&amp;quot;&lt;br /&gt;
   exit 1&lt;br /&gt;
fi&lt;br /&gt;
&lt;br /&gt;
if [ ! -n &amp;quot;$VEIFNAME&amp;quot; ]; then&lt;br /&gt;
   echo &amp;quot;Corrupted $VECONFIGFILE: no 'ifname' defined for host_ifname $VZHOSTIF.&amp;quot;&lt;br /&gt;
   exit 1&lt;br /&gt;
fi&lt;br /&gt;
&lt;br /&gt;
echo &amp;quot;Initializing interface $VZHOSTIF for VE$VEID.&amp;quot;&lt;br /&gt;
/sbin/ifconfig $VZHOSTIF 0&lt;br /&gt;
&lt;br /&gt;
VEROUTEDEV=$VZHOSTIF&lt;br /&gt;
&lt;br /&gt;
if [ -n &amp;quot;$BRIDGEDEV&amp;quot; ]; then&lt;br /&gt;
   echo &amp;quot;Adding interface $VZHOSTIF to the bridge $BRIDGEDEV.&amp;quot;&lt;br /&gt;
   VEROUTEDEV=$BRIDGEDEV&lt;br /&gt;
   /usr/sbin/brctl addif $BRIDGEDEV $VZHOSTIF&lt;br /&gt;
fi&lt;br /&gt;
&lt;br /&gt;
# Up the interface $VEIFNAME link in VE$VEID&lt;br /&gt;
$vzctl exec $VEID $ip link set $VEIFNAME up&lt;br /&gt;
&lt;br /&gt;
for IP in $VETH_IP_ADDRESS; do&lt;br /&gt;
   echo &amp;quot;Adding an IP $IP to the $VEIFNAME for VE$VEID.&amp;quot;&lt;br /&gt;
   $vzctl exec $VEID $ip address add $IP dev $VEIFNAME&lt;br /&gt;
&lt;br /&gt;
   # removing the netmask&lt;br /&gt;
   IP_STRIP=${IP%%/*};&lt;br /&gt;
&lt;br /&gt;
   echo &amp;quot;Adding a route from VE0 to VE$VEID.&amp;quot;&lt;br /&gt;
   $ip route add $IP_STRIP dev $VEROUTEDEV&lt;br /&gt;
done&lt;br /&gt;
&lt;br /&gt;
if [ -n &amp;quot;$VE0_IP&amp;quot; ]; then&lt;br /&gt;
   echo &amp;quot;Adding a route from VE$VEID to VE0.&amp;quot;&lt;br /&gt;
   $vzctl exec $VEID $ip route add $VE0_IP dev $VEIFNAME&lt;br /&gt;
fi&lt;br /&gt;
&lt;br /&gt;
if [ -n &amp;quot;$VE_DEFAULT_GATEWAY&amp;quot; ]; then&lt;br /&gt;
   echo &amp;quot;Setting $VE_DEFAULT_GATEWAY as a default gateway for VE$VEID.&amp;quot;&lt;br /&gt;
   $vzctl exec $VEID \&lt;br /&gt;
        $ip route add default via $VE_DEFAULT_GATEWAY dev $VEIFNAME&lt;br /&gt;
fi&lt;br /&gt;
&lt;br /&gt;
exit 0&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Make the script run on VE start ====&lt;br /&gt;
In order to run the above script on a VE start, create the following &amp;lt;code&amp;gt;/etc/vz/vznet.conf&amp;lt;/code&amp;gt; file:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
EXTERNAL_SCRIPT=&amp;quot;/usr/sbin/vznetcfg.custom&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
{{Note|both &amp;lt;code&amp;gt;/etc/vz/vznet.conf&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;/usr/sbin/vznetcfg.custom&amp;lt;/code&amp;gt; should be executable files.}}&lt;br /&gt;
&lt;br /&gt;
==== Setting the route VE -&amp;gt; HN ====&lt;br /&gt;
To set up a route from a VE to the HN, the custom script has to know the HN IP (the $VE0_IP variable in the script). There are several approaches to specifying it:&lt;br /&gt;
# Add an entry VE0_IP=&amp;quot;VE0 IP&amp;quot; to the &amp;lt;code&amp;gt;$VEID.conf&amp;lt;/code&amp;gt;&lt;br /&gt;
# Add an entry VE0_IP=&amp;quot;VE0 IP&amp;quot; to the &amp;lt;code&amp;gt;/etc/vz/vz.conf&amp;lt;/code&amp;gt; (the global configuration config file)&lt;br /&gt;
# Implement some smart algorithm to determine the VE0 IP right in the custom network configuration script&lt;br /&gt;
Every variant has its pros and cons; nevertheless, for a static HN IP configuration, variant 2 seems acceptable (and is the simplest).&lt;br /&gt;
&lt;br /&gt;
== (2) An OVZ Hardware Node has two ethernet interfaces ==&lt;br /&gt;
Assume you have two interfaces, eth0 and eth1, and want to separate the local traffic (10.0.0.0/24) from the external traffic.&lt;br /&gt;
Let's assign eth0 to the external traffic and eth1 to the local one.&lt;br /&gt;
&lt;br /&gt;
If there is no need to make a VE accessible from the HN and vice versa, it's enough to replace 'br0' with 'eth1' in the following steps of the above configuration:&lt;br /&gt;
* Hardware Node configuration -&amp;gt; [[Using_private_IPs_for_Hardware_Nodes#Assign_the_IP_to_the_bridge|Assign the IP to the bridge]]&lt;br /&gt;
* Hardware Node configuration -&amp;gt; [[Using_private_IPs_for_Hardware_Nodes#Resurrect_the_default_routing|Resurrect the default routing]]&lt;br /&gt;
&lt;br /&gt;
For VE &amp;lt;-&amp;gt; HN connections to be available, it's necessary to assign a (local) IP to 'br0'.&lt;br /&gt;
&lt;br /&gt;
== (3) Putting VEs into different subnetworks ==&lt;br /&gt;
It's enough to set up the correct $VETH_IP_ADDRESS and $VE_DEFAULT_GATEWAY values in the &lt;br /&gt;
[[Using_private_IPs_for_Hardware_Nodes#Edit_the_VE.27s_configuration|above configuration]].&lt;br /&gt;
&lt;br /&gt;
== See also ==&lt;br /&gt;
* [[Virtual network device]]&lt;br /&gt;
* [[Differences between venet and veth]]&lt;br /&gt;
&lt;br /&gt;
[[Category: HOWTO]]&lt;br /&gt;
[[Category: Networking]]&lt;/div&gt;</summary>
		<author><name>Finist</name></author>
		
	</entry>
	<entry>
		<id>https://wiki.openvz.org/index.php?title=Virtual_Ethernet_device&amp;diff=3439</id>
		<title>Virtual Ethernet device</title>
		<link rel="alternate" type="text/html" href="https://wiki.openvz.org/index.php?title=Virtual_Ethernet_device&amp;diff=3439"/>
		<updated>2007-09-06T14:33:50Z</updated>

		<summary type="html">&lt;p&gt;Finist: 'bridged'-&amp;gt;'virtual': confusing definition of veth.&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;'''Virtual ethernet device''' is an ethernet-like device which can be used inside a [[VE]]. Unlike&lt;br /&gt;
[[venet]] network device, a veth device has a MAC address. Due to this, it can be used in configurations where veth is bridged to ethX or another device and the VE user fully sets up his networking himself,&lt;br /&gt;
including IPs, gateways, etc.&lt;br /&gt;
&lt;br /&gt;
A virtual ethernet device consists of two ethernet devices: one in [[VE0]] and another one&lt;br /&gt;
in the VE. These devices are connected to each other, so if a packet goes into one&lt;br /&gt;
device it will come out of the other device.&lt;br /&gt;
&lt;br /&gt;
== Virtual ethernet device usage ==&lt;br /&gt;
&lt;br /&gt;
=== Kernel module ===&lt;br /&gt;
First of all, make sure the &amp;lt;code&amp;gt;vzethdev&amp;lt;/code&amp;gt; module is loaded:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# lsmod | grep vzeth&lt;br /&gt;
vzethdev                8224  0&lt;br /&gt;
vzmon                  35164  5 vzethdev,vznetdev,vzrst,vzcpt&lt;br /&gt;
vzdev                   3080  4 vzethdev,vznetdev,vzmon,vzdquota&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In case it is not loaded, load it:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# modprobe vzethdev&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You might want to add the module to the &amp;lt;code&amp;gt;/etc/init.d/vz&amp;lt;/code&amp;gt; script, so it will be loaded during startup.&lt;br /&gt;
&lt;br /&gt;
{{Note|since vzctl version 3.0.11, vzethdev is loaded by /etc/init.d/vz}}&lt;br /&gt;
&lt;br /&gt;
=== MAC addresses ===&lt;br /&gt;
In the commands below, you should use random MAC addresses. Do not use the MAC addresses of real eth devices, because this can lead to collisions.&lt;br /&gt;
&lt;br /&gt;
MAC addresses must be entered in XX:XX:XX:XX:XX:XX format.&lt;br /&gt;
&lt;br /&gt;
There is a utility script available for generating MAC addresses: http://www.easyvmx.com/software/easymac.sh. Use it like this:&lt;br /&gt;
&lt;br /&gt;
 chmod +x easymac.sh&lt;br /&gt;
 ./easymac.sh -R&lt;br /&gt;
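If the script is unavailable, a one-liner along these lines (an illustrative sketch, not part of the original article) can generate a random locally administered unicast MAC; the first octet 02 sets the locally-administered bit and keeps the multicast bit clear:&lt;br /&gt;

```shell
# Generate a random locally administered unicast MAC (02:xx:xx:xx:xx:xx).
printf '02:%02X:%02X:%02X:%02X:%02X\n' \
    $((RANDOM % 256)) $((RANDOM % 256)) $((RANDOM % 256)) \
    $((RANDOM % 256)) $((RANDOM % 256))
```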
&lt;br /&gt;
=== Adding veth to a VE ===&lt;br /&gt;
&lt;br /&gt;
==== syntax vzctl version &amp;lt; 3.0.14 ====&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
vzctl set &amp;lt;VEID&amp;gt; --veth_add &amp;lt;dev_name&amp;gt;,&amp;lt;dev_addr&amp;gt;,&amp;lt;ve_dev_name&amp;gt;,&amp;lt;ve_dev_addr&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Here &lt;br /&gt;
* &amp;lt;tt&amp;gt;dev_name&amp;lt;/tt&amp;gt; is the ethernet device name that you are creating on the [[VE0|host system]]&lt;br /&gt;
* &amp;lt;tt&amp;gt;dev_addr&amp;lt;/tt&amp;gt; is its MAC address&lt;br /&gt;
* &amp;lt;tt&amp;gt;ve_dev_name&amp;lt;/tt&amp;gt; is the corresponding ethernet device name you are creating on the VE&lt;br /&gt;
* &amp;lt;tt&amp;gt;ve_dev_addr&amp;lt;/tt&amp;gt; is its MAC address&lt;br /&gt;
&lt;br /&gt;
{{Note|this option is incremental, so devices are added to the already existing ones.}}&lt;br /&gt;
&lt;br /&gt;
Note that there are no spaces after the commas.&lt;br /&gt;
&lt;br /&gt;
Example:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
vzctl set 101 --veth_add veth101.0,00:12:34:56:78:9A,eth0,00:12:34:56:78:9B --save&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
After executing this command, a &amp;lt;tt&amp;gt;veth&amp;lt;/tt&amp;gt; device will be created for VE 101 and the veth configuration will be saved to the VE configuration file.&lt;br /&gt;
The host-side ethernet device will have the name &amp;lt;tt&amp;gt;veth101.0&amp;lt;/tt&amp;gt; and the MAC address &amp;lt;tt&amp;gt;00:12:34:56:78:9A&amp;lt;/tt&amp;gt;.&lt;br /&gt;
The VE-side ethernet device will have the name &amp;lt;tt&amp;gt;eth0&amp;lt;/tt&amp;gt; and the MAC address &amp;lt;tt&amp;gt;00:12:34:56:78:9B&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== syntax vzctl version &amp;gt;= 3.0.14 ====&lt;br /&gt;
&lt;br /&gt;
Read the update information about [http://openvz.org/news/updates/vzctl-3.0.14-1 vzctl 3.0.14].&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
vzctl set &amp;lt;VEID&amp;gt; --netif_add &amp;lt;ifname&amp;gt;[,&amp;lt;mac&amp;gt;,&amp;lt;host_ifname&amp;gt;,&amp;lt;host_mac&amp;gt;]&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Here&lt;br /&gt;
* &amp;lt;tt&amp;gt;ifname&amp;lt;/tt&amp;gt; is the ethernet device name in the VE&lt;br /&gt;
* &amp;lt;tt&amp;gt;mac&amp;lt;/tt&amp;gt; is its MAC address in the VE&lt;br /&gt;
* &amp;lt;tt&amp;gt;host_ifname&amp;lt;/tt&amp;gt;  is the ethernet device name on the host ([[VE0]])&lt;br /&gt;
* &amp;lt;tt&amp;gt;host_mac&amp;lt;/tt&amp;gt; is its MAC address on the host ([[VE0]])&lt;br /&gt;
&lt;br /&gt;
{{Note|All parameters except ifname are optional and are automatically generated if not specified.}}&lt;br /&gt;
&lt;br /&gt;
Example:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
vzctl set 101 --netif_add eth0,00:12:34:56:78:9A,veth101.0,00:12:34:56:78:9B --save&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Removing veth from a VE ===&lt;br /&gt;
&lt;br /&gt;
==== syntax vzctl version &amp;lt; 3.0.14 ====&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
vzctl set &amp;lt;VEID&amp;gt; --veth_del &amp;lt;dev_name&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Here &amp;lt;tt&amp;gt;dev_name&amp;lt;/tt&amp;gt; is the ethernet device name in the [[VE0|host system]].&lt;br /&gt;
&lt;br /&gt;
Example:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
vzctl set 101 --veth_del veth101.0 --save&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
After executing this command, the veth device whose host-side name is veth101.0 will be removed from VE 101 and the veth configuration will be updated in the VE config file.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== syntax vzctl version &amp;gt;= 3.0.14 ====&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
vzctl set &amp;lt;VEID&amp;gt; --netif_del &amp;lt;dev_name&amp;gt;|all&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Here&lt;br /&gt;
* &amp;lt;code&amp;gt;dev_name&amp;lt;/code&amp;gt; is the ethernet device name in the [[VE]].&lt;br /&gt;
&lt;br /&gt;
{{Note|If you want to remove all ethernet devices in VE, use &amp;lt;code&amp;gt;all&amp;lt;/code&amp;gt;.}}&lt;br /&gt;
&lt;br /&gt;
Example:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
vzctl set 101 --netif_del eth0 --save&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Common configurations with virtual ethernet devices ==&lt;br /&gt;
Module &amp;lt;tt&amp;gt;vzethdev&amp;lt;/tt&amp;gt; must be loaded to operate with veth devices.&lt;br /&gt;
&lt;br /&gt;
=== Simple configuration with virtual ethernet device ===&lt;br /&gt;
&lt;br /&gt;
==== Start a VE ====&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[host-node]# vzctl start 101&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Add veth device to VE ====&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[host-node]# vzctl set 101 --veth_add veth101.0,00:12:34:56:78:9A,eth0,00:12:34:56:78:9B --save&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Configure devices in VE0 ====&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[host-node]# ifconfig veth101.0 0&lt;br /&gt;
[host-node]# echo 1 &amp;gt; /proc/sys/net/ipv4/conf/veth101.0/forwarding&lt;br /&gt;
[host-node]# echo 1 &amp;gt; /proc/sys/net/ipv4/conf/veth101.0/proxy_arp&lt;br /&gt;
[host-node]# echo 1 &amp;gt; /proc/sys/net/ipv4/conf/eth0/forwarding&lt;br /&gt;
[host-node]# echo 1 &amp;gt; /proc/sys/net/ipv4/conf/eth0/proxy_arp&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Configure device in VE ====&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[host-node]# vzctl enter 101&lt;br /&gt;
[ve-101]# /sbin/ifconfig eth0 0&lt;br /&gt;
[ve-101]# /sbin/ip addr add 192.168.0.101 dev eth0&lt;br /&gt;
[ve-101]# /sbin/ip route add default dev eth0&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Add route in [[VE0]] ====&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[host-node]# ip route add 192.168.0.101 dev veth101.0&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Virtual ethernet device with IPv6 ===&lt;br /&gt;
&lt;br /&gt;
==== Start [[VE]] ====&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[host-node]# vzctl start 101&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Add veth device to [[VE]] ====&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[host-node]# vzctl set 101 --veth_add veth101.0,00:12:34:56:78:9A,eth0,00:12:34:56:78:9B --save&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Configure devices in [[VE0]] ====&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[host-node]# ifconfig veth101.0 0&lt;br /&gt;
[host-node]# echo 1 &amp;gt; /proc/sys/net/ipv6/conf/veth101.0/forwarding&lt;br /&gt;
[host-node]# echo 1 &amp;gt; /proc/sys/net/ipv6/conf/eth0/forwarding&lt;br /&gt;
[host-node]# echo 1 &amp;gt; /proc/sys/net/ipv6/conf/all/forwarding&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Configure device in [[VE]] ====&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[host-node]# vzctl enter 101&lt;br /&gt;
[ve-101]# /sbin/ifconfig eth0 0&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Start router advertisement daemon (radvd) for IPv6 in VE0 ====&lt;br /&gt;
First you need to edit radvd configuration file. Here is a simple example of &amp;lt;tt&amp;gt;/etc/radv.conf&amp;lt;/tt&amp;gt;:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
interface veth101.0&lt;br /&gt;
{&lt;br /&gt;
        AdvSendAdvert on;&lt;br /&gt;
        MinRtrAdvInterval 3;&lt;br /&gt;
        MaxRtrAdvInterval 10;&lt;br /&gt;
        AdvHomeAgentFlag off;&lt;br /&gt;
&lt;br /&gt;
        prefix 3ffe:2400:0:0::/64&lt;br /&gt;
        {&lt;br /&gt;
                AdvOnLink on;&lt;br /&gt;
                AdvAutonomous on;&lt;br /&gt;
                AdvRouterAddr off;&lt;br /&gt;
        };&lt;br /&gt;
};&lt;br /&gt;
&lt;br /&gt;
interface eth0&lt;br /&gt;
{&lt;br /&gt;
        AdvSendAdvert on;&lt;br /&gt;
        MinRtrAdvInterval 3;&lt;br /&gt;
        MaxRtrAdvInterval 10;&lt;br /&gt;
        AdvHomeAgentFlag off;&lt;br /&gt;
&lt;br /&gt;
        prefix 3ffe:0302:0011:0002::/64&lt;br /&gt;
        {&lt;br /&gt;
                AdvOnLink on;&lt;br /&gt;
                AdvAutonomous on;&lt;br /&gt;
                AdvRouterAddr off;&lt;br /&gt;
        };&lt;br /&gt;
};&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Then, start radvd:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[host-node]# /etc/init.d/radvd start&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Add IPv6 addresses to devices in [[VE0]] ====&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[host-node]# ip addr add dev veth101.0 3ffe:2400::212:34ff:fe56:789a/64&lt;br /&gt;
[host-node]# ip addr add dev eth0 3ffe:0302:0011:0002:211:22ff:fe33:4455/64&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Virtual ethernet devices can be joined in one bridge ===&lt;br /&gt;
Perform steps 1 - 4 from the Simple configuration chapter for several VEs and/or veth devices.&lt;br /&gt;
&lt;br /&gt;
==== Create bridge device ====&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[host-node]# brctl addbr vzbr0&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Add veth devices to bridge ====&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[host-node]# brctl addif vzbr0 veth101.0&lt;br /&gt;
...&lt;br /&gt;
[host-node]# brctl addif vzbr0 veth101.n&lt;br /&gt;
[host-node]# brctl addif vzbr0 veth102.0&lt;br /&gt;
...&lt;br /&gt;
...&lt;br /&gt;
[host-node]# brctl addif vzbr0 vethXXX.N&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Configure bridge device ====&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[host-node]# ifconfig vzbr0 0&lt;br /&gt;
[host-node]# echo 1 &amp;gt; /proc/sys/net/ipv4/conf/vzbr0/forwarding&lt;br /&gt;
[host-node]# echo 1 &amp;gt; /proc/sys/net/ipv4/conf/vzbr0/proxy_arp&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Add routes in [[VE0]] ====&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[host-node]# ip route add 192.168.101.1 dev vzbr0&lt;br /&gt;
...&lt;br /&gt;
[host-node]# ip route add 192.168.101.n dev vzbr0&lt;br /&gt;
[host-node]# ip route add 192.168.102.1 dev vzbr0&lt;br /&gt;
...&lt;br /&gt;
...&lt;br /&gt;
[host-node]# ip route add 192.168.XXX.N dev vzbr0&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Thus you'll have a more convenient configuration: all routes to the VEs go through this bridge, and the VEs can communicate with each other even without these routes.&lt;br /&gt;
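&lt;br /&gt;
To verify the setup you can check the bridge membership and the routes (a quick sanity check, using the device names from the examples above):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[host-node]# brctl show&lt;br /&gt;
[host-node]# ip route show | grep vzbr0&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;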
&lt;br /&gt;
=== Making a veth-device persistent ===&lt;br /&gt;
&lt;br /&gt;
At the moment, it is not possible to have the commands needed for a persistent veth run automatically by vzctl. A bug report ( http://bugzilla.openvz.org/show_bug.cgi?id=301 ) has already been filed. Until it is resolved, here's a way to make the above steps persistent.&lt;br /&gt;
&lt;br /&gt;
1. First, edit the VE's configuration to specify what the veth's IP address(es) should be, and to indicate that a custom script should be run when starting up a VE.&lt;br /&gt;
* Open up /etc/vz/conf/VEID.conf&lt;br /&gt;
* Comment out any IP_ADDRESS entries to prevent a VENET-device from being created in the VE&lt;br /&gt;
* Add or change the entry CONFIG_CUSTOMIZED=&amp;quot;yes&amp;quot;&lt;br /&gt;
* Add an entry VETH_IP_ADDRESS=&amp;quot;&amp;lt;VE IP&amp;gt;&amp;quot;. The VE can have multiple IPs, separated by spaces&lt;br /&gt;
&lt;br /&gt;
2. Now to create that &amp;quot;custom script&amp;quot;. The following helper script will check the configuration file for IP addresses and for the veth interface, and configure the IP routing accordingly. Create the script /usr/sbin/vznetaddroute to have the following, and then &amp;lt;code&amp;gt;chmod 0500 /usr/sbin/vznetaddroute&amp;lt;/code&amp;gt; to make it executable.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
# /usr/sbin/vznetaddroute&lt;br /&gt;
# a script to bring up virtual network interfaces (veth's) in a VE&lt;br /&gt;
&lt;br /&gt;
CONFIGFILE=/etc/vz/conf/$VEID.conf&lt;br /&gt;
. $CONFIGFILE&lt;br /&gt;
VZHOSTIF=`echo $NETIF |sed 's/^.*host_ifname=\(.*\),.*$/\1/g'`&lt;br /&gt;
&lt;br /&gt;
if [ ! -n &amp;quot;$VETH_IP_ADDRESS&amp;quot; ]; then&lt;br /&gt;
   echo &amp;quot;According to $CONFIGFILE VE$VEID has no veth IPs configured.&amp;quot;&lt;br /&gt;
   exit 1&lt;br /&gt;
fi&lt;br /&gt;
&lt;br /&gt;
if [ ! -n &amp;quot;$VZHOSTIF&amp;quot; ]; then&lt;br /&gt;
   echo &amp;quot;According to $CONFIGFILE VE$VEID has no veth interface configured.&amp;quot;&lt;br /&gt;
   exit 1&lt;br /&gt;
fi&lt;br /&gt;
&lt;br /&gt;
for IP in $VETH_IP_ADDRESS; do&lt;br /&gt;
   echo &amp;quot;Adding interface $VZHOSTIF and route $IP for VE$VEID to VE0&amp;quot;&lt;br /&gt;
   /sbin/ifconfig $VZHOSTIF 0&lt;br /&gt;
   echo 1 &amp;gt; /proc/sys/net/ipv4/conf/$VZHOSTIF/proxy_arp&lt;br /&gt;
   echo 1 &amp;gt; /proc/sys/net/ipv4/conf/$VZHOSTIF/forwarding&lt;br /&gt;
   /sbin/ip route add $IP dev $VZHOSTIF&lt;br /&gt;
done&lt;br /&gt;
&lt;br /&gt;
exit 0&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
3. Now create /etc/vz/vznet.conf containing the following. This defines the &amp;quot;custom script&amp;quot; to be the vznetaddroute script you just created.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
EXTERNAL_SCRIPT=&amp;quot;/usr/sbin/vznetaddroute&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
4. Of course, the VE's operating system will need to be configured with those IP address(es) as well. Consult the manual for your VE's OS for details.&lt;br /&gt;
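&lt;br /&gt;
For example, on a Fedora/RHEL-based VE this could be done with an &amp;lt;code&amp;gt;/etc/sysconfig/network-scripts/ifcfg-eth0&amp;lt;/code&amp;gt; file along these lines (the address is the one used in the examples above; adjust the address and netmask to your network):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
DEVICE=eth0&lt;br /&gt;
ONBOOT=yes&lt;br /&gt;
IPADDR=192.168.0.101&lt;br /&gt;
NETMASK=255.255.255.0&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;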
&lt;br /&gt;
That's it! At this point, when you restart the VE you should see a new line in the output, indicating that the interface is being configured and a new route added. You should then be able to ping the host, enter the VE, and use the network.&lt;br /&gt;
&lt;br /&gt;
=== Virtual ethernet devices + VLAN ===&lt;br /&gt;
This configuration can be set up by adding a VLAN device to the previous configuration.&lt;br /&gt;
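&lt;br /&gt;
For example (a sketch, assuming VLAN id 100 on eth0 and the 8021q kernel module available), the VLAN device can be created and added to the bridge like this:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[host-node]# modprobe 8021q&lt;br /&gt;
[host-node]# ip link add link eth0 name eth0.100 type vlan id 100&lt;br /&gt;
[host-node]# ip link set eth0.100 up&lt;br /&gt;
[host-node]# brctl addif vzbr0 eth0.100&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;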
&lt;br /&gt;
== See also ==&lt;br /&gt;
* [[Virtual network device]]&lt;br /&gt;
* [[Differences between venet and veth]]&lt;br /&gt;
&lt;br /&gt;
== External links ==&lt;br /&gt;
* [http://www.tldp.org/HOWTO/Linux+IPv6-HOWTO/hints-daemons-radvd.html Linux IPv6 HOWTO, a chapter about radvd]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category: Networking]]&lt;br /&gt;
[[Category: HOWTO]]&lt;/div&gt;</summary>
		<author><name>Finist</name></author>
		
	</entry>
	<entry>
		<id>https://wiki.openvz.org/index.php?title=Using_private_IPs_for_Hardware_Nodes&amp;diff=3417</id>
		<title>Using private IPs for Hardware Nodes</title>
		<link rel="alternate" type="text/html" href="https://wiki.openvz.org/index.php?title=Using_private_IPs_for_Hardware_Nodes&amp;diff=3417"/>
		<updated>2007-08-31T14:36:45Z</updated>

		<summary type="html">&lt;p&gt;Finist: misprint eth0 -&amp;gt; eth1&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This article describes how to assign public IPs to VEs running on OVZ Hardware Nodes in case you have the following network topology:&lt;br /&gt;
[[Image:PrivateIPs_fig1.gif|An initial network topology]]&lt;br /&gt;
&lt;br /&gt;
== Prerequisites ==&lt;br /&gt;
This configuration was tested on a RHEL5 OVZ Hardware Node and a VE based on a Fedora Core 5 template.&lt;br /&gt;
Other host OSes and templates might require some configuration changes; please add the corresponding OS-specific changes if you encounter any.&amp;lt;br&amp;gt;&lt;br /&gt;
This article assumes the presence of the 'brctl', 'ip' and 'ifconfig' utilities, and thus might require installation of missing packages ('bridge-utils', 'iproute', 'net-tools' or others) which contain them.&lt;br /&gt;
&lt;br /&gt;
This article assumes you have already [[Quick installation|installed OpenVZ]], prepared the [[OS template cache]](s) and have [[Basic_operations_in_OpenVZ_environment|VE(s) created]]. If not, follow the links to perform the steps needed.&lt;br /&gt;
{{Note|don't assign an IP after VE creation.}}&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== (1) An OVZ Hardware Node has only one ethernet interface ==&lt;br /&gt;
(assume eth0)&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;u&amp;gt;Hardware Node configuration&amp;lt;/u&amp;gt; ===&lt;br /&gt;
&lt;br /&gt;
==== Create a bridge device ====&lt;br /&gt;
&amp;lt;pre&amp;gt;[HN]# brctl addbr br0&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Remove an IP from eth0 interface ====&lt;br /&gt;
&amp;lt;pre&amp;gt;[HN]# ifconfig eth0 0&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Add eth0 interface into the bridge ====&lt;br /&gt;
&amp;lt;pre&amp;gt;[HN]# brctl addif br0 eth0&amp;lt;/pre&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
==== Assign the IP to the bridge ====&lt;br /&gt;
(the same one that was assigned to eth0 earlier)&lt;br /&gt;
&amp;lt;pre&amp;gt;[HN]# ifconfig br0 10.0.0.2/24&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Resurrect the default routing ====&lt;br /&gt;
&amp;lt;pre&amp;gt;[HN]# ip route add default via 10.0.0.1 dev br0&amp;lt;/pre&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
{{Note|if you are '''configuring''' the node '''remotely''' you '''must''' prepare a '''script''' with the above commands and run it in the background with redirected output, or you'll '''lose access''' to the Node.}}&lt;br /&gt;
&lt;br /&gt;
==== A script example ====&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[HN]# cat /tmp/br_add &lt;br /&gt;
#!/bin/bash&lt;br /&gt;
&lt;br /&gt;
brctl addbr br0&lt;br /&gt;
ifconfig eth0 0 &lt;br /&gt;
brctl addif br0 eth0 &lt;br /&gt;
ifconfig br0 10.0.0.2/24 &lt;br /&gt;
ip route add default via 10.0.0.1 dev br0&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;[HN]# /tmp/br_add &amp;gt;/dev/null 2&amp;gt;&amp;amp;1 &amp;amp;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
=== &amp;lt;u&amp;gt;VE configuration&amp;lt;/u&amp;gt; ===&lt;br /&gt;
&lt;br /&gt;
==== Start a VE ====&lt;br /&gt;
&amp;lt;pre&amp;gt;[HN]# vzctl start 101&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Add a [[Virtual_Ethernet_device|veth interface]] to the VE ====&lt;br /&gt;
&amp;lt;pre&amp;gt;[HN]# vzctl set 101 --netif_add eth0 --save&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Set up an IP to the newly created VE's veth interface ====&lt;br /&gt;
&amp;lt;pre&amp;gt;[HN]# vzctl exec 101 ifconfig eth0 85.86.87.195/26&amp;lt;/pre&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
==== Add the VE's veth interface to the bridge ====&lt;br /&gt;
&amp;lt;pre&amp;gt;[HN]# brctl addif br0 veth101.0&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Set up the default route for the VE ====&lt;br /&gt;
&amp;lt;pre&amp;gt;[HN]# vzctl exec 101 ip route add default via 85.86.87.193 dev eth0&amp;lt;/pre&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
==== (Optional) Add routes VE &amp;lt;-&amp;gt; HN ====&lt;br /&gt;
The configuration above makes the following connections available:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
VE X &amp;lt;-&amp;gt; VE Y (where VE X and VE Y can be located on any OVZ HN)&lt;br /&gt;
VE   &amp;lt;-&amp;gt; Internet&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
* Accessibility of a VE from the HN depends on whether the local gateway provides NAT (probably yes).&lt;br /&gt;
* Accessibility of the HN from a VE depends on whether the ISP gateway is aware of the local network addresses (most probably no).&lt;br /&gt;
&lt;br /&gt;
So to provide VE &amp;lt;-&amp;gt; HN accessibility regardless of the gateways' configuration, you can add the following route rules:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[HN]# ip route add 85.86.87.195 dev br0&lt;br /&gt;
[HN]# vzctl exec 101 ip route add 10.0.0.2 dev eth0&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;u&amp;gt;The resulting OVZ Node configuration&amp;lt;/u&amp;gt; ===&lt;br /&gt;
[[Image:PrivateIPs_fig2.gif|The resulting OVZ Node configuration]]&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;u&amp;gt;Making the configuration persistent&amp;lt;/u&amp;gt; ===&lt;br /&gt;
&lt;br /&gt;
==== Set up a bridge on a HN ====&lt;br /&gt;
This can be done by configuring &amp;lt;code&amp;gt;ifcfg-*&amp;lt;/code&amp;gt; files located in &amp;lt;code&amp;gt;/etc/sysconfig/network-scripts/&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Assuming you had a configuration file (e.g. &amp;lt;code&amp;gt;ifcfg-eth0&amp;lt;/code&amp;gt;) like:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
DEVICE=eth0&lt;br /&gt;
ONBOOT=yes&lt;br /&gt;
IPADDR=10.0.0.2&lt;br /&gt;
NETMASK=255.255.255.0&lt;br /&gt;
GATEWAY=10.0.0.1&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
To have the bridge &amp;lt;code&amp;gt;br0&amp;lt;/code&amp;gt; created automatically, you can create &amp;lt;code&amp;gt;ifcfg-br0&amp;lt;/code&amp;gt;:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
DEVICE=br0&lt;br /&gt;
TYPE=Bridge&lt;br /&gt;
ONBOOT=yes&lt;br /&gt;
IPADDR=10.0.0.2&lt;br /&gt;
NETMASK=255.255.255.0&lt;br /&gt;
GATEWAY=10.0.0.1&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and edit &amp;lt;code&amp;gt;ifcfg-eth0&amp;lt;/code&amp;gt; file to add &amp;lt;code&amp;gt;eth0&amp;lt;/code&amp;gt; interface into the bridge &amp;lt;code&amp;gt;br0&amp;lt;/code&amp;gt;:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
DEVICE=eth0&lt;br /&gt;
ONBOOT=yes&lt;br /&gt;
BRIDGE=br0&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Edit the VE's configuration ====&lt;br /&gt;
Add some parameters to the &amp;lt;code&amp;gt;/etc/vz/conf/$VEID.conf&amp;lt;/code&amp;gt; which will be used during the network configuration:&lt;br /&gt;
* Add/change CONFIG_CUSTOMIZED=&amp;quot;yes&amp;quot; (indicates that a custom script should be run on VE start)&lt;br /&gt;
* Add VETH_IP_ADDRESS=&amp;quot;&amp;lt;VE IP&amp;gt;/&amp;lt;MASK&amp;gt;&amp;quot; (a VE can have multiple IPs separated by spaces)&lt;br /&gt;
* Add VE_DEFAULT_GATEWAY=&amp;quot;&amp;lt;VE DEFAULT GATEWAY&amp;gt;&amp;quot;&lt;br /&gt;
* Add BRIDGEDEV=&amp;quot;&amp;lt;BRIDGE NAME&amp;gt;&amp;quot; (a bridge name to which the VE veth interface should be added)&lt;br /&gt;
&lt;br /&gt;
An example:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Network customization section&lt;br /&gt;
CONFIG_CUSTOMIZED=&amp;quot;yes&amp;quot;&lt;br /&gt;
VETH_IP_ADDRESS=&amp;quot;85.86.87.195/26&amp;quot;&lt;br /&gt;
VE_DEFAULT_GATEWAY=&amp;quot;85.86.87.193&amp;quot;&lt;br /&gt;
BRIDGEDEV=&amp;quot;br0&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Create a custom network configuration script ====&lt;br /&gt;
which should be called each time a VE is started (e.g. &amp;lt;code&amp;gt;/usr/sbin/vznetcfg.custom&amp;lt;/code&amp;gt;):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
# /usr/sbin/vznetcfg.custom&lt;br /&gt;
# a script to bring up bridged network interfaces (veth's) in a VE&lt;br /&gt;
&lt;br /&gt;
GLOBALCONFIGFILE=/etc/vz/vz.conf&lt;br /&gt;
VECONFIGFILE=/etc/vz/conf/$VEID.conf&lt;br /&gt;
vzctl=/usr/sbin/vzctl&lt;br /&gt;
ip=/sbin/ip&lt;br /&gt;
. $GLOBALCONFIGFILE&lt;br /&gt;
. $VECONFIGFILE&lt;br /&gt;
&lt;br /&gt;
NETIF_OPTIONS=`echo $NETIF | sed 's/,/\n/g'`&lt;br /&gt;
for str in $NETIF_OPTIONS; do \&lt;br /&gt;
        # getting 'ifname' parameter value&lt;br /&gt;
        if [[ &amp;quot;$str&amp;quot; =~ &amp;quot;^ifname=&amp;quot; ]]; then&lt;br /&gt;
                # remove the parameter name from the string (along with '=')&lt;br /&gt;
                VEIFNAME=${str#*=};&lt;br /&gt;
        fi&lt;br /&gt;
        # getting 'host_ifname' parameter value&lt;br /&gt;
        if [[ &amp;quot;$str&amp;quot; =~ &amp;quot;^host_ifname=&amp;quot; ]]; then&lt;br /&gt;
                # remove the parameter name from the string (along with '=')&lt;br /&gt;
                VZHOSTIF=${str#*=};&lt;br /&gt;
        fi&lt;br /&gt;
done&lt;br /&gt;
&lt;br /&gt;
if [ ! -n &amp;quot;$VETH_IP_ADDRESS&amp;quot; ]; then&lt;br /&gt;
   echo &amp;quot;According to $VECONFIGFILE VE$VEID has no veth IPs configured.&amp;quot;&lt;br /&gt;
   exit 1&lt;br /&gt;
fi&lt;br /&gt;
&lt;br /&gt;
if [ ! -n &amp;quot;$VZHOSTIF&amp;quot; ]; then&lt;br /&gt;
   echo &amp;quot;According to $VECONFIGFILE VE$VEID has no veth interface configured.&amp;quot;&lt;br /&gt;
   exit 1&lt;br /&gt;
fi&lt;br /&gt;
&lt;br /&gt;
if [ ! -n &amp;quot;$VEIFNAME&amp;quot; ]; then&lt;br /&gt;
   echo &amp;quot;Corrupted $VECONFIGFILE: no 'ifname' defined for host_ifname $VZHOSTIF.&amp;quot;&lt;br /&gt;
   exit 1&lt;br /&gt;
fi&lt;br /&gt;
&lt;br /&gt;
for IP in $VETH_IP_ADDRESS; do&lt;br /&gt;
   echo &amp;quot;Initializing interface $VZHOSTIF for VE$VEID.&amp;quot;&lt;br /&gt;
   /sbin/ifconfig $VZHOSTIF 0&lt;br /&gt;
done&lt;br /&gt;
&lt;br /&gt;
VEROUTEDEV=$VZHOSTIF&lt;br /&gt;
&lt;br /&gt;
if [ -n &amp;quot;$BRIDGEDEV&amp;quot; ]; then&lt;br /&gt;
   echo &amp;quot;Adding interface $VZHOSTIF to the bridge $BRIDGEDEV.&amp;quot;&lt;br /&gt;
   VEROUTEDEV=$BRIDGEDEV&lt;br /&gt;
   /usr/sbin/brctl addif $BRIDGEDEV $VZHOSTIF&lt;br /&gt;
fi&lt;br /&gt;
&lt;br /&gt;
# Bring the $VEIFNAME link up in VE$VEID&lt;br /&gt;
$vzctl exec $VEID $ip link set $VEIFNAME up&lt;br /&gt;
&lt;br /&gt;
for IP in $VETH_IP_ADDRESS; do&lt;br /&gt;
   echo &amp;quot;Adding an IP $IP to the $VEIFNAME for VE$VEID.&amp;quot;&lt;br /&gt;
   $vzctl exec $VEID $ip address add $IP dev $VEIFNAME&lt;br /&gt;
&lt;br /&gt;
   # removing the netmask&lt;br /&gt;
   IP_STRIP=${IP%%/*};&lt;br /&gt;
&lt;br /&gt;
   echo &amp;quot;Adding a route from VE0 to VE$VEID.&amp;quot;&lt;br /&gt;
   $ip route add $IP_STRIP dev $VEROUTEDEV&lt;br /&gt;
done&lt;br /&gt;
&lt;br /&gt;
if [ -n &amp;quot;$VE0_IP&amp;quot; ]; then&lt;br /&gt;
   echo &amp;quot;Adding a route from VE$VEID to VE0.&amp;quot;&lt;br /&gt;
   $vzctl exec $VEID $ip route add $VE0_IP dev $VEIFNAME&lt;br /&gt;
fi&lt;br /&gt;
&lt;br /&gt;
if [ -n &amp;quot;$VE_DEFAULT_GATEWAY&amp;quot; ]; then&lt;br /&gt;
   echo &amp;quot;Setting $VE_DEFAULT_GATEWAY as a default gateway for VE$VEID.&amp;quot;&lt;br /&gt;
   $vzctl exec $VEID \&lt;br /&gt;
        $ip route add default via $VE_DEFAULT_GATEWAY dev $VEIFNAME&lt;br /&gt;
fi&lt;br /&gt;
&lt;br /&gt;
exit 0&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Make the script run on VE start ====&lt;br /&gt;
In order to run the above script on VE start, create the following &amp;lt;code&amp;gt;/etc/vz/vznet.conf&amp;lt;/code&amp;gt; file:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
EXTERNAL_SCRIPT=&amp;quot;/usr/sbin/vznetcfg.custom&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
{{Note|both &amp;lt;code&amp;gt;/etc/vz/vznet.conf&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;/usr/sbin/vznetcfg.custom&amp;lt;/code&amp;gt; should be executable files.}}&lt;br /&gt;
&lt;br /&gt;
==== Setting the route VE -&amp;gt; HN ====&lt;br /&gt;
To set up a route from the VE to the HN, the custom script has to know the HN IP (the $VE0_IP variable in the script). There are different ways to specify it:&lt;br /&gt;
# Add an entry VE0_IP=&amp;quot;VE0 IP&amp;quot; to the &amp;lt;code&amp;gt;$VEID.conf&amp;lt;/code&amp;gt;&lt;br /&gt;
# Add an entry VE0_IP=&amp;quot;VE0 IP&amp;quot; to the &amp;lt;code&amp;gt;/etc/vz/vz.conf&amp;lt;/code&amp;gt; (the global configuration config file)&lt;br /&gt;
# Implement some smart algorithm to determine the VE0 IP right in the custom network configuration script&lt;br /&gt;
Each variant has its pros and cons; nevertheless, for a static HN IP configuration, variant 2 seems acceptable (and the simplest).&lt;br /&gt;
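&lt;br /&gt;
For example, with variant 2 the entry added to &amp;lt;code&amp;gt;/etc/vz/vz.conf&amp;lt;/code&amp;gt; for the Node from this article would be:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
VE0_IP=&amp;quot;10.0.0.2&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;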
&lt;br /&gt;
== (2) An OVZ Hardware Node has two ethernet interfaces ==&lt;br /&gt;
Assume you have 2 interfaces eth0 and eth1 and want to separate local traffic (10.0.0.0/24) from the external traffic.&lt;br /&gt;
Let's assign eth0 for the external traffic and eth1 for the local one.&lt;br /&gt;
&lt;br /&gt;
If you do not need the VE to be accessible from the HN and vice versa, it is enough to replace 'br0' with 'eth1' in the following steps of the above configuration:&lt;br /&gt;
* Hardware Node configuration -&amp;gt; [[Using_private_IPs_for_Hardware_Nodes#Assign_the_IP_to_the_bridge|Assign the IP to the bridge]]&lt;br /&gt;
* Hardware Node configuration -&amp;gt; [[Using_private_IPs_for_Hardware_Nodes#Resurrect_the_default_routing|Resurrect the default routing]]&lt;br /&gt;
&lt;br /&gt;
For VE &amp;lt;-&amp;gt; HN connectivity it is necessary to set a (local) IP on 'br0'.&lt;br /&gt;
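&lt;br /&gt;
For example, reusing the local address from section (1) (assuming 'br0' carries only the local traffic in this setup):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[HN]# ifconfig br0 10.0.0.2/24&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;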
&lt;br /&gt;
== (3) Putting VEs to different subnetworks ==&lt;br /&gt;
It's enough to set up the correct $VETH_IP_ADDRESS and $VE_DEFAULT_GATEWAY values in the &lt;br /&gt;
[[Using_private_IPs_for_Hardware_Nodes#Edit_the_VE.27s_configuration|above configuration]].&lt;br /&gt;
&lt;br /&gt;
== See also ==&lt;br /&gt;
* [[Virtual network device]]&lt;br /&gt;
* [[Differences between venet and veth]]&lt;br /&gt;
&lt;br /&gt;
[[Category: HOWTO]]&lt;br /&gt;
[[Category: Networking]]&lt;/div&gt;</summary>
		<author><name>Finist</name></author>
		
	</entry>
</feed>