<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://wiki.openvz.org/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Toutoune25</id>
	<title>OpenVZ Virtuozzo Containers Wiki - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://wiki.openvz.org/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Toutoune25"/>
	<link rel="alternate" type="text/html" href="https://wiki.openvz.org/Special:Contributions/Toutoune25"/>
	<updated>2026-05-02T12:41:09Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.31.1</generator>
	<entry>
		<id>https://wiki.openvz.org/index.php?title=Migration_from_Linux-VServer_to_OpenVZ&amp;diff=5843</id>
		<title>Migration from Linux-VServer to OpenVZ</title>
		<link rel="alternate" type="text/html" href="https://wiki.openvz.org/index.php?title=Migration_from_Linux-VServer_to_OpenVZ&amp;diff=5843"/>
		<updated>2008-04-28T10:46:13Z</updated>

		<summary type="html">&lt;p&gt;Toutoune25: wikif&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Roughstub}}&lt;br /&gt;
&lt;br /&gt;
This document describes migration from a Linux-VServer based virtualization solution to OpenVZ.&lt;br /&gt;
&lt;br /&gt;
Description of the challenge:&lt;br /&gt;
&lt;br /&gt;
The challenge is to migrate from Linux-VServer to OpenVZ by booting the OpenVZ kernel and updating the existing&lt;br /&gt;
utility-level configuration so that the existing guest OSes work on the OpenVZ kernel.&lt;br /&gt;
&lt;br /&gt;
The migration process, step by step:&lt;br /&gt;
&lt;br /&gt;
1. Initial conditions: the following Linux-VServer based setup was used for the experiment:&lt;br /&gt;
&lt;br /&gt;
* Kernel linux-2.6.17.13 was patched with patch-2.6.17.13-vs2.0.2.1.diff and rebuilt;&lt;br /&gt;
* The util-vserver-0.30.211 tools were used to create containers;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
  # vserver-info&lt;br /&gt;
  Versions:&lt;br /&gt;
  Kernel: 2.6.17.13-vs2.0.2.1&lt;br /&gt;
  VS-API: 0x00020002&lt;br /&gt;
  util-vserver: 0.30.211; Dec  5 2006, 17:10:21&lt;br /&gt;
 &lt;br /&gt;
  Features:&lt;br /&gt;
  CC: gcc, gcc (GCC) 3.4.3 20041212 (Red Hat 3.4.3-9.EL4)&lt;br /&gt;
  CXX: g++, g++ (GCC) 3.4.3 20041212 (Red Hat 3.4.3-9.EL4)&lt;br /&gt;
  CPPFLAGS: ''&lt;br /&gt;
  CFLAGS: '-g -O2 -std=c99 -Wall -pedantic -W -funit-at-a-time'&lt;br /&gt;
  CXXFLAGS: '-g -O2 -ansi -Wall -pedantic -W -fmessage-length=0 -funit-at-a-time'&lt;br /&gt;
  build/host: i686-pc-linux-gnu/i686-pc-linux-gnu&lt;br /&gt;
  Use dietlibc: yes&lt;br /&gt;
  Build C++ programs: yes&lt;br /&gt;
  Build C99 programs: yes&lt;br /&gt;
  Available APIs: v13,net&lt;br /&gt;
  ext2fs Source: kernel&lt;br /&gt;
  syscall(2) invocation: alternative&lt;br /&gt;
  vserver(2) syscall#: 273/glibc&lt;br /&gt;
 &lt;br /&gt;
  Paths:&lt;br /&gt;
  prefix: /usr/local&lt;br /&gt;
  sysconf-Directory: ${prefix}/etc&lt;br /&gt;
  cfg-Directory: ${prefix}/etc/vservers&lt;br /&gt;
  initrd-Directory: $(sysconfdir)/init.d&lt;br /&gt;
  pkgstate-Directory: ${prefix}/var/run/vservers&lt;br /&gt;
  vserver-Rootdir: /vservers&lt;br /&gt;
  #&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
VServer v345 was built using the vserver vX build utility and populated from a tarballed Fedora Core 4 template, as sketched below.&lt;br /&gt;
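 &lt;br /&gt;
Such a build might have been invoked roughly as follows; this is only a sketch, since the build method, flags, and template path shown here are assumptions that vary between util-vserver versions:&lt;br /&gt;
 &lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
  # vserver v345 build -m template --context 345 \&lt;br /&gt;
      --hostname test345.my.org --interface eth0:192.168.0.145/24 \&lt;br /&gt;
      -- -d fc4 -t /tmp/fedora-core-4.tar.gz&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;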
 &lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
  # vserver v345 start&lt;br /&gt;
  Starting system logger:                                    [  OK  ]&lt;br /&gt;
  Initializing random number generator:                      [  OK  ]&lt;br /&gt;
  Starting crond:                                            [  OK  ]&lt;br /&gt;
  Starting atd:                                              [  OK  ]&lt;br /&gt;
  # vserver v345 enter&lt;br /&gt;
  [/]# ls -l&lt;br /&gt;
  total 44&lt;br /&gt;
  drwxr-xr-x    2 root     root         4096 Oct 26  2004 bin&lt;br /&gt;
  drwxr-xr-x    3 root     root         4096 Dec  8 17:16 dev&lt;br /&gt;
  drwxr-xr-x   27 root     root         4096 Dec  8 15:21 etc&lt;br /&gt;
  -rw-r--r--    1 root     root            0 Dec  8 15:33 halt&lt;br /&gt;
  drwxr-xr-x    2 root     root         4096 Jan 24  2003 home&lt;br /&gt;
  drwxr-xr-x    7 root     root         4096 Oct 26  2004 lib&lt;br /&gt;
  drwxr-xr-x    2 root     root         4096 Jan 24  2003 mnt&lt;br /&gt;
  drwxr-xr-x    3 root     root         4096 Oct 26  2004 opt&lt;br /&gt;
  -rw-r--r--    1 root     root            0 Dec  7 20:17 poweroff&lt;br /&gt;
  dr-xr-xr-x   80 root     root            0 Dec  8 11:38 proc&lt;br /&gt;
  drwxr-x---    2 root     root         4096 Dec  7 20:17 root&lt;br /&gt;
  drwxr-xr-x    2 root     root         4096 Oct 26  2004 sbin&lt;br /&gt;
  drwxrwxrwt    2 root     root           40 Dec  8 17:16 tmp&lt;br /&gt;
  drwxr-xr-x   15 root     root         4096 Jul 27  2004 usr&lt;br /&gt;
  drwxr-xr-x   17 root     root         4096 Oct 26  2004 var&lt;br /&gt;
  [/]# sh&lt;br /&gt;
  sh-2.05b#&lt;br /&gt;
  .........&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
As a result, we obtain the running virtual environment v345:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
  # vserver-stat&lt;br /&gt;
 &lt;br /&gt;
  CTX   PROC    VSZ    RSS  userTIME   sysTIME    UPTIME NAME&lt;br /&gt;
  0       51  90.9M  26.3M   0m58s75   2m42s57  33m45s93 root server&lt;br /&gt;
  49153    4  10.2M   2.8M   0m00s00   0m00s11  21m45s42 v345&lt;br /&gt;
 &lt;br /&gt;
  # &lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
2. Starting the migration to OpenVZ: downloading and installing the stable OpenVZ kernel.&lt;br /&gt;
&lt;br /&gt;
Install the OpenVZ kernel, as described in [[quick installation]]. &lt;br /&gt;
&lt;br /&gt;
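For example, with a precompiled binary kernel this boils down to a single rpm command; the package name and version below are assumptions, so use whatever the [[quick installation]] page offers:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
  # rpm -ihv ovzkernel-2.6.18-028stab053.5.i686.rpm&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;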
After the kernel is installed, reboot the machine. After rebooting and logging in, you will see the following reply to a vserver-stat call:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
  # vserver-stat&lt;br /&gt;
  can not change context: migrate kernel feature missing and 'compat' API disabled: Function not implemented&lt;br /&gt;
  #&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Naturally, virtual environment v345 is now unavailable. The following steps are devoted to making it&lt;br /&gt;
work on the OpenVZ kernel.&lt;br /&gt;
&lt;br /&gt;
3. Downloading and installing the vzctl and vzquota packages&lt;br /&gt;
&lt;br /&gt;
OpenVZ requires a set of tools: the vzctl and vzquota packages. Download and install them, as described in [[quick installation]].&lt;br /&gt;
&lt;br /&gt;
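For example (the package versions below are assumptions):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
  # rpm -Uhv vzctl-3.0.11-1.i386.rpm vzquota-3.0.8-1.i386.rpm&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;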
If rpm complains about unresolved dependencies, you'll have to satisfy them first and then repeat the installation.&lt;br /&gt;
Then start OpenVZ:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
  # /sbin/service vz start&lt;br /&gt;
  Starting OpenVZ:                                           [  OK  ]&lt;br /&gt;
  Bringing up interface venet0:                              [  OK  ]&lt;br /&gt;
  Configuring interface venet0:                              [  OK  ]&lt;br /&gt;
  #&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
At this point the vzlist utility is unable to find any containers:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
  # vzlist&lt;br /&gt;
  Containers not found&lt;br /&gt;
  #&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
4. Updating configurations to make the existing templates work&lt;br /&gt;
&lt;br /&gt;
Move the existing guest OS templates to the right place:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
  # cd /vz&lt;br /&gt;
  # mkdir private&lt;br /&gt;
  # mv /vservers/v345 /vz/private/345&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now it is time to create a configuration file for the OpenVZ container. Use the basic sample&lt;br /&gt;
configuration from the /etc/sysconfig/vz-scripts/ve-vps.basic.conf-sample file:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
  # cd /etc/sysconfig/vz-scripts&lt;br /&gt;
  # cp ve-vps.basic.conf-sample 345.conf&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Update the ONBOOT line in the 345.conf file:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
  .....&lt;br /&gt;
  ONBOOT=&amp;quot;yes&amp;quot;&lt;br /&gt;
  .....&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
to make the container start when the node boots, and add a few lines specific to&lt;br /&gt;
container 345:&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
  .....&lt;br /&gt;
  VE_ROOT=&amp;quot;/vz/root/345&amp;quot;&lt;br /&gt;
  VE_PRIVATE=&amp;quot;/vz/private/345&amp;quot;&lt;br /&gt;
  ORIGIN_SAMPLE=&amp;quot;vps.basic&amp;quot;&lt;br /&gt;
  HOSTNAME=&amp;quot;test345.my.org&amp;quot;&lt;br /&gt;
  IP_ADDRESS=&amp;quot;192.168.0.145&amp;quot;&lt;br /&gt;
  .....&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
And reboot the machine:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
  # reboot&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
5. Testing that the guest OSes work on OpenVZ. For reference, see the OpenVZ User's Guide (vzctl).&lt;br /&gt;
&lt;br /&gt;
After rebooting you will be able to see the running container 345 that has been&lt;br /&gt;
migrated from VServer:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
  # vzlist -a&lt;br /&gt;
  CTID      NPROC  STATUS  IP_ADDR         HOSTNAME&lt;br /&gt;
  345          5   running 192.168.0.145   test345.my.org&lt;br /&gt;
  #&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
And you can run commands in it:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
  # vzctl exec 345 ls -l&lt;br /&gt;
  total 48&lt;br /&gt;
  drwxr-xr-x    2 root     root         4096 Oct 26  2004 bin&lt;br /&gt;
  drwxr-xr-x    3 root     root         4096 Dec 11 12:42 dev&lt;br /&gt;
  drwxr-xr-x   27 root     root         4096 Dec 11 12:44 etc&lt;br /&gt;
  -rw-r--r--    1 root     root            0 Dec 11 12:13 fastboot&lt;br /&gt;
  -rw-r--r--    1 root     root            0 Dec  8 15:33 halt&lt;br /&gt;
  drwxr-xr-x    2 root     root         4096 Jan 24  2003 home&lt;br /&gt;
  drwxr-xr-x    7 root     root         4096 Oct 26  2004 lib&lt;br /&gt;
  drwxr-xr-x    2 root     root         4096 Jan 24  2003 mnt&lt;br /&gt;
  drwxr-xr-x    3 root     root         4096 Oct 26  2004 opt&lt;br /&gt;
  -rw-r--r--    1 root     root            0 Dec  7 20:17 poweroff&lt;br /&gt;
  dr-xr-xr-x   70 root     root            0 Dec 11 12:42 proc&lt;br /&gt;
  drwxr-x---    2 root     root         4096 Dec  7 20:17 root&lt;br /&gt;
  drwxr-xr-x    2 root     root         4096 Dec 11 12:13 sbin&lt;br /&gt;
  drwxrwxrwt    2 root     root         4096 Dec  8 12:40 tmp&lt;br /&gt;
  drwxr-xr-x   15 root     root         4096 Jul 27  2004 usr&lt;br /&gt;
  drwxr-xr-x   17 root     root         4096 Oct 26  2004 var&lt;br /&gt;
  #&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Potential issues:&lt;br /&gt;
This work is in progress; some issues may arise with tuning the network for containers. To be continued.&lt;br /&gt;
&lt;br /&gt;
[[Category:HOWTO]]&lt;/div&gt;</summary>
		<author><name>Toutoune25</name></author>
		
	</entry>
	<entry>
		<id>https://wiki.openvz.org/index.php?title=Features&amp;diff=5841</id>
		<title>Features</title>
		<link rel="alternate" type="text/html" href="https://wiki.openvz.org/index.php?title=Features&amp;diff=5841"/>
		<updated>2008-04-28T08:32:34Z</updated>

		<summary type="html">&lt;p&gt;Toutoune25: typo&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The architecture of OpenVZ differs from that of traditional virtual machines in that it always runs the same OS kernel as the host system (while still allowing multiple Linux distributions in individual [[container]]s). This single-kernel implementation enables running [[container]]s with near-zero overhead. Thus, OpenVZ offers an order of magnitude higher efficiency and manageability than traditional virtualization technologies.&lt;br /&gt;
&lt;br /&gt;
== OS Virtualization ==&lt;br /&gt;
From the point of view of applications and [[container]] users, each container is an independent system. This independence is provided by a virtualization layer in the kernel of the host OS. Note that only a negligible part of the CPU resources is spent on virtualization (around 1-2%). The main features of the virtualization layer implemented in OpenVZ are the following:&lt;br /&gt;
&lt;br /&gt;
* A [[container]] looks and behaves like a regular Linux system. It has standard startup scripts; software from vendors can run inside a container without OpenVZ-specific modifications or adjustment;&lt;br /&gt;
* A user can change any configuration file and install additional software;&lt;br /&gt;
* [[Containers]] are completely isolated from each other (file system, processes, Inter Process Communication (IPC), sysctl variables), as the sketch after this list illustrates;&lt;br /&gt;
* Processes belonging to a container are scheduled for execution on all available CPUs. Consequently, [[CT]]s are not bound to only one CPU and can use all available CPU power.&lt;br /&gt;
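&lt;br /&gt;
As a minimal sketch of process isolation (the container ID 345 is hypothetical), listing processes from the host via vzctl shows only that container's processes, not the host's:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
  # vzctl exec 345 ps ax&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;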
&lt;br /&gt;
== Network virtualization ==&lt;br /&gt;
&lt;br /&gt;
The OpenVZ network virtualization layer is designed to isolate [[CT]]s from each other and from the physical network:&lt;br /&gt;
&lt;br /&gt;
* Each [[CT]] has its own IP address; multiple IP addresses per CT are allowed (see the sketch after this list);&lt;br /&gt;
* Network traffic of a CT is isolated from that of the other CTs. In other words, containers are protected from each other in a way that makes traffic snooping impossible;&lt;br /&gt;
* Firewalling may be used inside a CT (the user can create rules limiting access to some services using the canonical iptables tool inside a CT). In other words, it is possible to set up firewall rules from inside a CT;&lt;br /&gt;
* Routing table manipulations and advanced routing features are supported for individual containers. For example, setting different maximum transmission units (MTUs) for different destinations, specifying different source addresses for different destinations, and so on.&lt;br /&gt;
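&lt;br /&gt;
As a sketch (the CT ID and the addresses are hypothetical), IP addresses are assigned to a container from the host with vzctl:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
  # vzctl set 345 --ipadd 192.168.0.145 --save&lt;br /&gt;
  # vzctl set 345 --ipadd 10.0.0.45 --save&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;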
&lt;br /&gt;
== Resource Management ==&lt;br /&gt;
&lt;br /&gt;
OpenVZ [[resource management]] controls the amount of resources available for containers. The controlled resources include such parameters as CPU power, disk space, a set of memory-related parameters, etc. Resource management allows OpenVZ to:&lt;br /&gt;
&lt;br /&gt;
* Effectively share available [[host system]] resources among CTs&lt;br /&gt;
* Guarantee Quality-of-Service (QoS)&lt;br /&gt;
* Provide performance and resource isolation and protect from denial-of-service attacks&lt;br /&gt;
* Collect usage information for system health monitoring&lt;br /&gt;
&lt;br /&gt;
Resource management is much more important for OpenVZ than for a standalone computer, since resource utilization in an OpenVZ-based system is considerably higher than in a typical system.&lt;br /&gt;
As all the CTs share the same kernel, resource management is of paramount importance: each CT must stay within its boundaries and not affect other CTs in any way, and this is exactly what resource management does.&lt;br /&gt;
&lt;br /&gt;
OpenVZ resource management consists of four main components: two-level disk quota, fair CPU scheduler, disk I/O scheduler, and user beancounters. Note that all these resources can be changed at CT runtime; there is no need to reboot. Say, if you want to give your CT less memory, you just change the appropriate parameter on the fly. This is either very hard or impossible to do with other virtualization approaches such as hypervisor-based virtual machines.&lt;br /&gt;
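&lt;br /&gt;
For instance, a memory-related parameter can be changed on a running container with a single command; in this sketch the CT ID and the barrier:limit values (in pages) are hypothetical:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
  # vzctl set 345 --privvmpages 49152:53575 --save&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;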
&lt;br /&gt;
=== Two-Level Disk Quota ===&lt;br /&gt;
The [[host system]] administrator ([[HW]] root) can set up per-container [[disk quota]]s in terms of disk blocks and inodes (roughly, the number of files). This is the first level of disk quota. In addition, a container administrator ([[CT]] root) can use the usual quota tools inside the CT to set standard UNIX per-user and per-group [[disk quota]]s.&lt;br /&gt;
&lt;br /&gt;
If you want to give a CT more disk space, you just increase its disk quota; there is no need to resize disk partitions.&lt;br /&gt;
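&lt;br /&gt;
For example (a sketch; the CT ID and the soft:hard values, in 1 KB blocks, are hypothetical):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
  # vzctl set 345 --diskspace 2097152:2306867 --save&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;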
&lt;br /&gt;
=== Fair CPU scheduler ===&lt;br /&gt;
The CPU scheduler in OpenVZ is a two-level implementation of a [[fair-share scheduling]] strategy.&lt;br /&gt;
&lt;br /&gt;
On the first level, the scheduler decides which CT to give a CPU time slice to, based on per-CT cpuunits values. On the second level, the standard Linux scheduler decides which process to run in that container, using standard Linux process priorities.&lt;br /&gt;
&lt;br /&gt;
The OpenVZ administrator can set different values of &amp;lt;code&amp;gt;cpuunits&amp;lt;/code&amp;gt; for different containers, and CPU time will be given to them in proportion to those values.&lt;br /&gt;
&lt;br /&gt;
There is also a way to limit CPU time, e.g. to state that a given container may use at most 10% of the available CPU time.&lt;br /&gt;
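&lt;br /&gt;
For example, both the share and the limit are set with vzctl (a sketch; the CT ID and the values are hypothetical, and cpulimit is given in percent):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
  # vzctl set 345 --cpuunits 1500 --save&lt;br /&gt;
  # vzctl set 345 --cpulimit 10 --save&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;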
&lt;br /&gt;
=== I/O scheduler ===&lt;br /&gt;
Similar to the fair CPU scheduler described above, the I/O scheduler in OpenVZ is also two-level, utilizing Jens Axboe's CFQ I/O scheduler on its second level.&lt;br /&gt;
&lt;br /&gt;
Each container is assigned an I/O priority, and the I/O scheduler distributes the available I/O bandwidth according to those priorities. Thus no single container can saturate an I/O channel.&lt;br /&gt;
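&lt;br /&gt;
For example, a container's I/O priority (0 to 7) can be set with vzctl, provided the kernel and vzctl versions in use support it (the CT ID and the value are hypothetical):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
  # vzctl set 345 --ioprio 4 --save&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;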
&lt;br /&gt;
=== User Beancounters ===&lt;br /&gt;
&lt;br /&gt;
[[User beancounters]] is a set of per-CT counters, limits, and guarantees. There are about 20 parameters, carefully chosen to cover all aspects of CT operation, so that no single container can abuse any resource that is limited for the whole node and thus harm other CTs.&lt;br /&gt;
&lt;br /&gt;
The resources accounted and controlled are mainly memory and various in-kernel objects such as IPC shared memory segments, network buffers, etc. Each resource can be seen in &amp;lt;code&amp;gt;/proc/user_beancounters&amp;lt;/code&amp;gt; and has five values associated with it: current usage, maximum usage (for the lifetime of a container), barrier, limit, and fail counter. The meaning of barrier and limit is parameter-dependent; in short, they can be thought of as a soft limit and a hard limit. If any resource hits the limit, its fail counter is increased, so the CT administrator can see if something bad is happening by analyzing the output of &amp;lt;code&amp;gt;/proc/user_beancounters&amp;lt;/code&amp;gt; in her container.&lt;br /&gt;
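&lt;br /&gt;
For example, the counters can be inspected from the host and a parameter adjusted on the fly (a sketch; the CT ID and the barrier:limit values are hypothetical):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
  # vzctl exec 345 cat /proc/user_beancounters&lt;br /&gt;
  # vzctl set 345 --kmemsize 2211840:2359296 --save&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;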
&lt;br /&gt;
== Checkpointing and live migration ==&lt;br /&gt;
&lt;br /&gt;
{{Main|Checkpointing and live migration}}&lt;br /&gt;
&lt;br /&gt;
The live migration and checkpointing feature was released for OpenVZ in the middle of April 2006. It allows migrating a container from one physical server to another without the need to shut down or restart the container. The process is known as checkpointing: a CT is frozen and its whole state is saved to a file on disk. This file can then be transferred to another machine, where the CT can be unfrozen (restored). The delay is about a few seconds; it is not downtime, just a delay.&lt;br /&gt;
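&lt;br /&gt;
In terms of commands, a manual checkpoint/restore and an online migration look roughly like this (a sketch; the CT ID, dump file path, and destination host are hypothetical):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
  # vzctl chkpnt 345 --dumpfile /vz/dump/dump.345&lt;br /&gt;
  # vzctl restore 345 --dumpfile /vz/dump/dump.345&lt;br /&gt;
  # vzmigrate --online destination.my.org 345&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;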
&lt;br /&gt;
Since every piece of the container state, including open network connections, is saved, from the user's perspective the migration looks like a delay in response: say, one database transaction takes longer than usual, then everything continues as normal, and the user doesn't notice that the database is already running on another machine.&lt;br /&gt;
&lt;br /&gt;
This feature makes it possible, for example, to upgrade your server without any service interruption: if your database needs more memory or CPU resources, you just buy a newer, better server, live migrate your container to it, and then increase its limits. If you want to add more RAM to your server, you migrate all containers to another machine, shut the server down, add memory, start it again, and migrate all containers back.&lt;br /&gt;
&lt;br /&gt;
[[Category: Concepts]]&lt;br /&gt;
[[Category: Technology]]&lt;/div&gt;</summary>
		<author><name>Toutoune25</name></author>
		
	</entry>
</feed>