User Guide/Print version

Preface

About This Guide

This guide provides comprehensive information on OpenVZ, high-end server virtualization software for Linux-based computers. It covers the necessary theoretical concepts as well as the practical aspects of working with OpenVZ. The guide will familiarize you with creating and administering containers (sometimes also called Virtual Environments, or VEs) on OpenVZ-based Hardware Nodes and with employing the command-line interface for performing various tasks.

Who Should Read This Guide

The primary audience for this book is anyone responsible for administering one or more systems running OpenVZ. To fully understand the guide, you should have strong Linux system administration skills; attending Linux system administration training courses may also be helpful. Still, only a superficial knowledge of the Linux OS is required to comprehend the major OpenVZ notions and learn to perform the basic administrative operations.

Organization of This Guide

Chapter 2, OpenVZ Philosophy, is a must-read chapter that helps you grasp the general principles of OpenVZ operation. It outlines the OpenVZ architecture, the way OpenVZ stores and uses configuration information, the tasks you as administrator are expected to perform, and the common ways to perform them.

Chapter 3, Installation and Preliminary Operations, dwells on everything that must be done before you can begin administering OpenVZ proper. This includes a customized installation of Linux on a dedicated computer (a Hardware Node, in OpenVZ terminology), the OpenVZ installation itself, preparation of the Hardware Node for creating Virtual Private Servers on it, and so on.

Chapter 4, Operations on Virtual Private Servers, covers those operations that you may perform on a container as a single entity: creating and deleting Virtual Private Servers, starting and stopping them, and so on.

Chapter 5, Managing Resources, zeroes in on configuring and monitoring the resource control parameters for different containers. These parameters comprise disk quotas, disk I/O, CPU, and system resources. Common ways of optimizing your containers' configurations are suggested at the end of the chapter.

Chapter 6, Advanced Tasks, enumerates those tasks that are intended for advanced system administrators who would like to obtain deeper knowledge about OpenVZ capabilities.

Chapter 7, Troubleshooting, suggests ways to resolve common inconveniences should they occur during your work with the OpenVZ software.

Chapter 8, Reference, is a complete reference on all OpenVZ configuration files and Hardware Node command-line utilities. You should read this chapter if you do not understand a file format or are looking for an explanation of a particular configuration option, or if you need help with a particular command or are looking for a command to perform a certain task.

Documentation Conventions

Before you start using this guide, it is important to understand the documentation conventions used in it. For information on specialized terms used in the documentation, see the Glossary at the end of this document.

Typographical Conventions

The following kinds of formatting in the text identify special information.

Formatting convention   Type of information                                     Example

Italics                 Used to emphasize the importance of a point             Such servers are called Hardware Nodes.
                        or to introduce a term.

Monospace               The names of commands, files, and directories.          Use vzctl start to start a Container.

Preformatted            On-screen computer output in your command-line          Saved parameters for CT 101
                        sessions.

Preformatted bold       What you type, as contrasted with on-screen             rpm -q quota
                        computer output.

Shell Prompts in Command Examples

Command line examples throughout this guide presume that you are using the Bourne-again shell (bash). Whenever a command can be run as a regular user, we will display it with a dollar sign prompt. When a command is meant to be run as root, we will display it with a hash mark prompt:

Ordinary user shell prompt $
Root shell prompt #
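
For example, checking the kernel version can be done as a regular user, while starting a Container requires root privileges (a hypothetical session; it assumes a Container with ID 101 already exists):

$ uname -r
# vzctl start 101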

General Conventions

Be aware of the following conventions used in this book.

  • Chapters in this guide are divided into sections, which, in turn, are subdivided into subsections. For example, Documentation Conventions is a section, and General Conventions is a subsection.
  • When following steps or using examples, be sure to type double-quotes ("), left single-quotes (`), and right single-quotes (') exactly as shown.
  • The key referred to as RETURN is labeled ENTER on some keyboards.

The root path usually includes the /bin, /sbin, /usr/bin and /usr/sbin directories, so the steps in this book show the commands in these directories without absolute path names. Steps that use commands in other, less common, directories show the absolute paths in the examples.

Getting Help

In addition to this guide, there are a number of other resources available for OpenVZ that can help you use it more effectively, such as the OpenVZ wiki, the forum, and the mailing lists.

Feedback

If you spot a typo in this guide, or if you have thought of a way to make this guide better, we would love to hear from you!

If you have a suggestion for improving the documentation (or any other relevant comments), try to be as specific as possible when formulating it. If you have found an error, please include the chapter/section/subsection name and some of the surrounding text so we can find it easily.

Please submit a report by e-mail to userdocs@openvz.org.

OpenVZ Philosophy

About OpenVZ Software

What is OpenVZ

OpenVZ is a container-based virtualization solution for Linux. OpenVZ creates multiple isolated partitions, or containers (CTs), on a single physical server to utilize hardware, software, data center, and management effort with maximum efficiency. Each CT performs and executes exactly like a stand-alone server for its users and applications: it can be rebooted independently and has its own root access, users, IP addresses, memory, processes, files, applications, system libraries, and configuration files. The low overhead and efficient design of OpenVZ make it the right virtualization choice for production servers with live applications and real-life data.

The basic OpenVZ container capabilities are:

  • Dynamic Real-time Partitioning — Partition a physical server into tens of CTs, each with full dedicated server functionality.
  • Complete Isolation — Containers are secure and have full functional, fault, and performance isolation.
  • Dynamic Resource Allocation — CPU, memory, network, disk, and I/O resources can be changed without rebooting.
  • Mass Management — Manage a multitude of physical servers and containers in a unified way.

The OpenVZ container virtualization model is streamlined for the best performance, management, and efficiency, maximizing resource utilization.

OpenVZ Applications

OpenVZ provides a comprehensive solution that allows you to:

  • Have hundreds of users with their individual full-featured containers sharing a single physical server;
  • Provide each user with a guaranteed Quality of Service;
  • Transparently move users and their environments between servers, without any manual reconfiguration.

If you administer a number of Linux dedicated servers within an enterprise, each of which runs a specific service, you can use OpenVZ to consolidate all these servers onto a single computer without losing a bit of valuable information and without compromising performance. A container behaves just like an isolated stand-alone server:

  • Each Container has its own processes, users, files and provides full root shell access;
  • Each Container has its own IP addresses, port numbers, filtering and routing rules;
  • Each Container can have its own configuration for the system and application software, as well as its own versions of system libraries. It is possible to install or customize software packages inside a Container independently from other CTs or the host system, as the example after this list shows. Multiple versions of a package can run on one and the same Linux box.
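
For example, installing a package in one Container leaves all other Containers and the host system untouched. A minimal sketch, assuming Container 101 exists and its OS template provides the yum package manager:

# vzctl exec 101 yum install httpd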

In fact, hundreds of servers may be grouped together in this way. Besides the evident advantages of such consolidation (easier administration and the like), there are some you might not even have thought of, such as a dramatic cut in electricity bills.

OpenVZ proves invaluable for IT educational institutions that can now provide every student with a personal Linux server, which can be monitored and managed remotely. Software development companies may use Containers for testing purposes and the like.

Thus, OpenVZ can be efficiently applied in a wide range of areas: web hosting, enterprise server consolidation, software development and testing, user training, and so on.

Distinctive Features of OpenVZ

The concept of OpenVZ Containers is distinct from the concept of traditional virtual machines in that Containers always run the same OS kernel as the host system (Linux on Linux, Windows on Windows, and so on). This single-kernel implementation makes it possible to run Containers with near-zero overhead. Thus, OpenVZ offers an order of magnitude higher efficiency and manageability than traditional virtualization technologies.

Containers Virtualization

From the point of view of applications and Container users, each Container is an independent system. This independence is provided by a virtualization layer in the kernel of the host OS. Note that only a negligible part of the CPU resources is spent on virtualization (around 1–2%). The main features of the virtualization layer implemented in OpenVZ are the following:

  • A Container looks like a normal Linux system. It has standard startup scripts, and vendor software can run inside a Container without OpenVZ-specific modifications or adjustments;
  • A user can change any configuration file and install additional software;
  • Containers are fully isolated from each other (file system, processes, Inter Process Communication (IPC), sysctl variables);
  • Containers share dynamic libraries, which greatly saves memory;
  • Processes belonging to a Container are scheduled for execution on all available CPUs. Consequently, Containers are not bound to only one CPU and can use all available CPU power.

Network Virtualization

The OpenVZ network virtualization layer is designed to isolate Containers from each other and from the physical network:

  • Each Container has its own IP address; multiple IP addresses per Container are allowed (see the example after this list);
  • Network traffic of a Container is isolated from that of the other Containers; in other words, Containers are protected from each other in a way that makes traffic snooping impossible;
  • Firewalling may be used inside a Container: the user can set up rules limiting access to some services using the canonical iptables tool from inside the Container;
  • Routing table manipulations are allowed, so a Container can benefit from advanced routing features, for example, setting different maximum transmission units (MTUs) for different destinations, specifying different source addresses for different destinations, and so on.
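
For instance, an additional IP address can be assigned to a Container with the standard vzctl utility (a sketch; the Container ID 101 and the address are placeholders):

# vzctl set 101 --ipadd 10.0.0.2 --save

The --save flag stores the new address in the Container configuration file, so it survives a Container restart.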

Templates

An OS template in OpenVZ is basically a set of packages from some Linux distribution used to populate one or more Containers. With OpenVZ, different distributions can co-exist on the same hardware box, so multiple OS templates are available. An OS template consists of the system programs, libraries, and scripts needed to boot up and run the system (Container), as well as some very basic applications and utilities. Applications like a compiler or an SQL server are usually not included in an OS template. For detailed information on OpenVZ templates, see the Understanding Templates section.

Resource Management

OpenVZ resource management controls the amount of resources available to Containers. The controlled resources include CPU power, disk space, and a set of memory-related parameters. Resource management allows OpenVZ to:

  • Effectively share available Hardware Node resources among Containers;
  • Guarantee Quality-of-Service (QoS) in accordance with a service level agreement (SLA);
  • Provide performance and resource isolation and protect from denial-of-service attacks;
  • Simultaneously assign and control resources for a number of Containers.

Resource management is much more important for OpenVZ than for a standalone computer, since resource utilization in an OpenVZ-based system is considerably higher than that in a typical system.
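
As an illustration, the current resource consumption, limits, and failure counters of a Container can be inspected through the user beancounters interface (a sketch, assuming Container 101 is running):

# vzctl exec 101 cat /proc/user_beancounters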

Main Principles of OpenVZ Operation

Basics of OpenVZ Technology

This section gives you a reasonably precise idea of the way the OpenVZ software operates on your computer. Please see the figure.

Figure 1. OpenVZ technology

This figure presumes that you have a number of physical servers united into a network. In fact, you may have only one dedicated server to effectively use the OpenVZ software for the needs of your network. If you have more than one OpenVZ-based physical server, each one of the servers will have a similar architecture. In OpenVZ terminology, such servers are called Hardware Nodes (or just Nodes), because they represent hardware units within a network.

OpenVZ is installed on top of an already installed Linux system configured in a certain way. For example, such a customized configuration should include the creation of a /vz partition, which is the basic partition for hosting Containers and which must be considerably larger than the root partition. This and similar configuration issues are most easily resolved during the Linux installation on the Hardware Node. Detailed instructions on installing Linux (called the Host Operating System in the figure above) on the Hardware Node are provided in the Installation Guide.

OpenVZ is installed in such a way that you will be able to boot your computer either with OpenVZ support or without it. This support is usually presented as "openvz" in your boot loader menu and shown as OpenVZ Layer in the figure above.

However, at this point you are not yet able to create Containers. A Container is functionally identical to an isolated standalone server, having its own IP addresses, processes, files, users, its own configuration files, its own applications, system libraries, and so on. Containers share the same Hardware Node and the same OS kernel. However, they are isolated from each other. A Container is a kind of 'sandbox' for processes and users.

Different Containers can run different versions of Linux (for example, SUSE 10, Fedora 8, Gentoo, and many others). In this case we say that a Container is based on a certain OS template. OS templates are software packages available for download or created by you. Before you can create a Container, you should install the corresponding OS template. This is displayed as OpenVZ Templates in the figure above.

After you have installed at least one OS template, you can create any number of Containers with the help of standard OpenVZ utilities, configure their network and/or other settings, and work with these Containers as with fully functional Linux servers.

OpenVZ Configuration

The OpenVZ software allows you to flexibly configure various settings for the OpenVZ system in general as well as for each and every Container. Among these settings are disk and user quota, network parameters, default file locations and configuration sample files, and others.

OpenVZ stores the configuration information in two types of files: the global configuration file /etc/vz/vz.conf and the Container configuration files /etc/vz/conf/CTID.conf. The global configuration file defines global and default parameters for Container operation, for example, logging settings, enabling and disabling disk quota for Containers, the default configuration file and OS template on the basis of which a new Container is created, and so on. A Container configuration file, on the other hand, defines the parameters for a given particular Container, such as disk quota and allocated resource limits, IP address and host name, and so on. If a parameter is configured both in the global OpenVZ configuration file and in a Container configuration file, the Container configuration file takes precedence. For a list of parameters constituting the global configuration file and the Container configuration files, see the vz.conf(5) and ctid.conf(5) manual pages.
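
For illustration, a Container configuration file fragment might look as follows (a hypothetical example; the Container ID and all values are placeholders):

# /etc/vz/conf/101.conf (fragment)
OSTEMPLATE="centos-5-x86"
IP_ADDRESS="10.0.0.2"
HOSTNAME="ct101.example.com"
DISKSPACE="1048576:1153434"   # soft:hard limits, in 1 KB blocks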

The configuration files are read when the OpenVZ software and/or Containers are started. However, standard OpenVZ utilities such as vzctl allow you to change many configuration settings on the fly, either without modifying the corresponding configuration files or with their modification (if you want the changes to apply the next time the OpenVZ software and/or Containers are started).
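
For example, the following command changes a Container's host name on the fly; thanks to the --save flag, the change is also written to the Container configuration file and thus persists across restarts (101 and the name are placeholders):

# vzctl set 101 --hostname ct101.example.com --save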

Licensing

OpenVZ is free software. It consists of the OpenVZ kernel and user-level tools, which are licensed by means of two different open source licenses.

Hardware Node Availability Considerations

The availability of a Hardware Node is more critical than that of a typical PC server. Since a Hardware Node runs multiple Containers providing a number of critical services, its outage might be very costly; it can be as disastrous as the simultaneous outage of a number of servers running critical services.

In order to increase Hardware Node availability, we suggest you follow the recommendations below:

  • Use RAID storage for critical Container private areas. Prefer hardware RAID, but software mirroring RAID may also suit as a last resort.
  • Do not run software on the Hardware Node itself. Create special Containers where you can host necessary services such as BIND, FTPD, HTTPD, and so on. On the Hardware Node itself, you need only the SSH daemon, and preferably it should accept connections from a pre-defined set of IP addresses only (see the example after this list).
  • Do not create users on the Hardware Node itself. You can create as many users as you need in any Container. Remember that compromising the Hardware Node means compromising all Containers as well.
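
As one possible way to restrict SSH access, you can accept connections to the SSH port only from an administrative network by means of iptables (a sketch; 192.0.2.0/24 is a placeholder for your actual management network):

# iptables -A INPUT -p tcp --dport 22 -s 192.0.2.0/24 -j ACCEPT
# iptables -A INPUT -p tcp --dport 22 -j DROP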


Installation and Preliminary Operations

This chapter provides exhaustive information on the process of installing and deploying your OpenVZ system, including the prerequisites and the stages you will pass through.

Installation Requirements

After deciding on the structure of your OpenVZ system, you should make sure that all the Hardware Nodes where you are going to deploy OpenVZ for Linux meet the following system (hardware and software) and network requirements.

System Requirements

This section focuses on the hardware and software requirements for the OpenVZ for Linux software product.

Hardware Compatibility

The Hardware Node requirements for the standard 32-bit edition of OpenVZ are the following:

  • IBM PC-compatible computer;
  • Intel Celeron, Pentium II, Pentium III, Pentium 4, Xeon, or AMD Athlon CPU;
  • At least 128 MB of RAM;
  • Hard drive(s) with at least 4 GB of free disk space;
  • Network card (either Intel EtherExpress100 (i82557-, i82558- or i82559-based) or 3Com (3c905 or 3c905B or 3c595) or RTL8139-based are recommended).

The computer should satisfy the Red Hat Enterprise Linux or Fedora Core hardware requirements (see the hardware compatibility lists at www.redhat.com).

The exact computer configuration depends on how many Virtual Private Servers you are going to run on the computer and what load these VPSs are going to produce. Thus, in order to choose the right configuration, please follow the recommendations below:

  • CPUs. The more Virtual Private Servers you plan to run simultaneously, the more CPUs you need.
  • Memory. The more memory you have, the more Virtual Private Servers you can run. The exact figure depends on the number and nature of applications you are planning to run in your Virtual Private Servers. However, on the average, at least 1 GB of RAM is recommended for every 20-30 Virtual Private Servers;
  • Disk space. Each Virtual Private Server occupies 400–600 MB of hard disk space for system files in addition to the user data inside the Virtual Private Server (for example, web site content). You should consider it when planning disk partitioning and the number of Virtual Private Servers to run.

A typical 2–way Dell PowerEdge 1650 1u–mountable server with 1 GB of RAM and 36 GB of hard drives is suitable for hosting 30 Virtual Private Servers.
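
For example, 30 Virtual Private Servers at roughly 500 MB of system files each consume about 15 GB before any user data is taken into account, which fits comfortably into the 36 GB configuration above.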

Software Compatibility

The Hardware Node should run either Red Hat Enterprise Linux 6 or 5, or CentOS 6 or 5, or Scientific Linux 6 or 5. The detailed instructions on installing these operating systems for the best performance of OpenVZ are provided in the next sections.

This requirement does not restrict the ability of OpenVZ to provide other Linux versions as an operating system for Virtual Private Servers. The Linux distribution installed in a Virtual Private Server may differ from that of the host OS.

Network Requirements

The network prerequisites listed in this subsection will help you avoid delays and problems in getting OpenVZ for Linux up and running. You should take care of the following in advance:

  • Local Area Network (LAN) for the Hardware Node;
  • Internet connection for the Hardware Node;
  • Valid IP address for the Hardware Node as well as other IP parameters (default gateway, network mask, DNS configuration);
  • At least one valid IP address for each Virtual Private Server. The total number of addresses should be no less than the planned number of Virtual Private Servers. The addresses may be allocated in different IP networks;
  • If a firewall is deployed, check that IP addresses allocated for Virtual Private Servers are open for access from the outside.

Installing and Configuring Host Operating System on Hardware Node

This section explains how to install Fedora Core 4 on the Hardware Node and how to configure it for OpenVZ. If you are using another distribution, please consult the corresponding installation guides about the installation specifics.

Choosing System Type

Please follow the instructions in your Installation Guide when installing the OS on your Hardware Node. After the first several screens, you will be presented with a screen specifying the installation type. OpenVZ requires the Server system type to be installed, so select “Server” in the dialog shown in the figure below.

Figure 2: Fedora Core Installation - Choosing System Type

It is not recommended to install extra packages on the Hardware Node itself due to the all-importance of Hardware Node availability (see the Hardware Node Availability Considerations subsection in this chapter). You will be able to run any necessary services inside dedicated Virtual Private Servers.

Disk Partitioning

On the Disk Partitioning Setup screen, select manual partitioning with Disk Druid. Do not choose automatic partitioning, since it creates a disk layout intended for systems running multiple services. With OpenVZ, all your services should run inside Virtual Private Servers.

Figure 3: Fedora Core Installation - Choosing Manual Partitioning

Create the following partitions on the Hardware Node:

Partition   Description                                          Typical size
/           Root partition hosting the operating system files    4–12 GB
swap        Paging partition for the Linux operating system      2 times RAM, or RAM + 2 GB, depending on available disk space
/vz         Partition hosting OpenVZ templates and Virtual       all remaining space on the hard disk
            Private Servers

Many of the historical recommendations for partitioning are outdated now that all hard drives are well above 20 GB, so all minimums can be increased without any impact if you have plenty of drive space. It is suggested to use the ext3 or ext4 file system for the /vz partition. This partition holds all data of the Virtual Private Servers existing on the Hardware Node, so allocate as much disk space as possible to it. It is not recommended to use the reiserfs file system, as it has proved to be less stable than ext3, and stability is of paramount importance for OpenVZ-based computers.

The root partition will host the operating system files. A fresh CentOS 6 installation with basic server packages plus the OpenVZ kernel can occupy up to approximately 2 GB of disk space, so 4 GB is a good minimal size for the root partition. If you have plenty of drive space and think you may add software to the Node, such as monitoring software, consider using more. Historically, the recommended size of the swap partition has been two times the size of physical RAM. Now, with minimum server RAM often above 2 GB, a more reasonable specification might be RAM + 2 GB if RAM is above 2 GB and disk space is limited.
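
For reference, a corresponding /etc/fstab entry for the /vz partition might look like this (a hypothetical example; /dev/sda3 is a placeholder for your actual device):

# <device>  <mount point>  <fs type>  <options>  <dump>  <fsck order>
/dev/sda3   /vz            ext3       defaults   1       2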

Finishing OS Installation

After the proper partitioning of your hard drive(s), proceed in accordance with your OS Installation Guide. On the Network Configuration screen, make sure the Hardware Node's IP address, host name, DNS, and default gateway information are correct. If you are using DHCP, make sure that it is properly configured. If necessary, consult your network administrator. On the Firewall Configuration screen, choose No firewall, and set the Enable SELinux option to Disabled.

Figure 4: Fedora Core Installation - Disabling Firewall and SELinux

After finishing the installation and rebooting your computer, you are ready to install OpenVZ on your system.

Installing OpenVZ Software

Downloading and Installing OpenVZ Kernel

First of all, you should download the kernel binary RPM from http://openvz.org/download/kernel/. You need only one kernel RPM, so please choose the appropriate kernel binary depending on your hardware:

If you use Red Hat Enterprise Linux 5, CentOS 5, or Scientific Linux 5:

  • If there is more than one CPU available on your Hardware Node (or a CPU with hyperthreading), select the vzkernel-smp RPM.
  • If there is more than 4 GB of RAM available, select the vzkernel-enterprise RPM.
  • Otherwise, select the uniprocessor kernel RPM (vzkernel-version).

If you use Red Hat Enterprise Linux 6, CentOS 6, or Scientific Linux 6:

  • Select the uniprocessor kernel RPM (vzkernel-version).

Next, you shall install the kernel RPM of your choice on your Hardware Node by issuing the following command:

# rpm -ihv vzkernel-name*.rpm
Note: You should not use the rpm -U command (where -U stands for "upgrade"); otherwise, all the kernels currently installed on the Node will be removed.

Configuring Boot Loader

If you use the GRUB loader, it will be configured automatically. You should only make sure that the lines below are present in the /boot/grub/grub.conf file on the Node:

title Fedora Core (2.6.8-022stab029.1)
     root (hd0,0)
     kernel /vmlinuz-2.6.8-022stab029.1 ro root=/dev/sda5 quiet rhgb
     initrd /initrd-2.6.8-022stab029.1.img

However, we recommend that you configure this file in the following way:

  • Change Fedora Core to OpenVZ (just for clarity, so the OpenVZ kernels will not be mixed up with non-OpenVZ ones).
  • Remove all extra arguments from the kernel line, leaving only the root=... parameter.

At the end, the modified grub.conf file should look as follows:

title OpenVZ (2.6.8-022stab029.1)
     root (hd0,0)
     kernel /vmlinuz-2.6.8-022stab029.1 ro root=/dev/sda5
     initrd /initrd-2.6.8-022stab029.1.img

Setting sysctl parameters

There are a number of kernel limits that should be set for OpenVZ to work correctly. OpenVZ is shipped with a tuned /etc/sysctl.conf file. Below are the contents of the relevant part of /etc/sysctl.conf:

# On Hardware Node we generally need
# packet forwarding enabled and proxy arp disabled
net.ipv4.ip_forward = 1
net.ipv4.conf.default.proxy_arp = 0
# Enables source route verification
net.ipv4.conf.all.rp_filter = 1
# Enables the magic-sysrq key
kernel.sysrq = 1
# TCP Explicit Congestion Notification
net.ipv4.tcp_ecn = 0
# we do not want all our interfaces to send redirects
net.ipv4.conf.default.send_redirects = 1
net.ipv4.conf.all.send_redirects = 0

Please edit the file as described. To apply the changes, issue the following command:

# sysctl -p

Alternatively, the changes will be applied automatically at the next reboot.

It is also worth mentioning that you should normally have forwarding (net.ipv4.ip_forward) turned on, since the Hardware Node forwards packets destined to or originating from the Virtual Private Servers.
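
You can verify that forwarding is actually enabled by querying the parameter directly; the command should report a value of 1:

# sysctl net.ipv4.ip_forward
net.ipv4.ip_forward = 1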

After that, you should reboot your computer and choose "OpenVZ" on the boot loader menu.

Downloading and Installing OpenVZ Packages

After you have successfully installed and booted the OpenVZ kernel, you can proceed with installing the user-level tools for OpenVZ. You should install the following OpenVZ packages:

  • vzctl: this package is used to perform different tasks on the OpenVZ Virtual Private Servers (create, destroy, start, stop, set parameters etc.).
  • vzquota: this package is used to manage the VPS quotas.

You can download the corresponding binary RPMs from http://openvz.org/download/utils/.

Next, install these utilities by issuing the following command:

# rpm -Uhv vzctl*.rpm vzquota*.rpm
Note: During the package installation, you may be presented with a message telling you that rpm has found unresolved dependencies. In this case, you have to resolve these dependencies first and then repeat the installation.

Now you can launch OpenVZ. To do so, execute the following command:

# /etc/init.d/vz start

This will load all the needed OpenVZ kernel modules. During the next reboot, this script will be executed automatically.
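
To make sure the OpenVZ modules have indeed been loaded, you can list them (a sketch; the exact set of modules varies between kernel versions):

# lsmod | grep vz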

Installing OS Templates

A template is a set of package files to be installed into a Container. Operating system templates are used to create new Containers with a pre-installed operating system. Therefore, you need to download at least one OS template and put it into the /vz/template/cache/ directory on the Hardware Node.

Links to all available OS templates are listed at Download/template/precreated.

For example, this is how to download the CentOS 5 OS template:

# cd /vz/template/cache
# wget http://download.openvz.org/template/precreated/centos-5-x86.tar.gz
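
Once the template tarball is in place, a Container can be created from it, as described in detail in the following chapter. A minimal sketch (101 is an arbitrary Container ID):

# vzctl create 101 --ostemplate centos-5-x86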