User Guide/Operations on Virtual Private Servers

This chapter describes how to perform day-to-day operations on individual Containers as a whole.

{{Note|We assume that you have successfully installed, configured, and deployed your OpenVZ system. In case you have not, please turn to Chapter 3, which provides detailed information on all these operations.}}

Creating and Configuring New Container

This section guides you through the process of creating a Container. We assume that you have successfully installed OpenVZ and at least one OS template. If there are no OS templates installed on the Hardware Node, turn to the User Guide/Managing Templates chapter first.

Before you Begin

Before you start creating a Container, you should:

  • Check that the Hardware Node is visible on your network. You should be able to connect to/from other hosts. Otherwise, your Containers will not be accessible from other computers.
  • Check that you have at least one IP address per Container and the addresses belong to the same network as the Hardware Node or routing to the Containers has been set up via the Hardware Node.

To create a new Container, you have to:

  • choose the new Container ID;
  • choose the OS template to use for the Container;
  • create the Container itself.

Choosing Container ID

Every Container has a numeric ID, also known as CT ID, associated with it. The ID is a 32-bit integer starting from zero and unique for a given Hardware Node. When choosing an ID for your Container, please follow the simple guidelines below:

  • ID 0 is used for the Hardware Node itself. You cannot and should not try to create a Container with ID 0.
  • OpenVZ reserves the IDs ranging from 0 to 100. Though OpenVZ uses only ID 0, different versions might use additional Container IDs for internal needs. To facilitate upgrading, please do not create Containers with IDs below 101.

The only strict requirement for a CT ID is to be unique for a particular Hardware Node. However, if you are going to have several computers running OpenVZ, we recommend assigning different CT ID ranges to them. For example, on Hardware Node 1 you create Containers within the range of IDs from 101 to 1000; on Hardware Node 2 you use the range from 1001 to 2000, and so on. This approach makes it easier to remember on which Hardware Node a Container has been created, and eliminates the possibility of CT ID conflicts when a Container migrates from one Hardware Node to another.

Another approach to assigning CT IDs is to follow some pattern of CT IP addresses. For example, if you have a subnet with the 10.0.x.x address range, you may want to assign the 17015 ID to the CT with the 10.0.17.15 IP address, the 39108 ID to the CT with the 10.0.39.108 IP address, and so on. This makes it much easier to run a number of OpenVZ utilities, since you do not need to look up a CT IP address by its ID and vice versa. You can also think of your own patterns for assigning CT IDs depending on the configuration of your network and your specific needs.
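
As a minimal sketch of such a scheme (assuming the 10.0.x.y addressing described above and that the resulting ID stays above the reserved range), the ID can be derived from the last two octets of the IP address:

# IP=10.0.17.15
# printf "%d%03d\n" $(echo $IP | awk -F. '{print $3, $4}')
17015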

Before you decide on a new CT ID, you may want to make sure that no CT with this ID has yet been created on the Hardware Node. The easiest way to check whether the CT with the given ID exists is to issue the following command:

# vzlist -a 101
CT not found

This output shows that Container 101 does not exist on the particular Hardware Node; otherwise it would be present in the list.

Choosing OS Template

Next, decide which OS template you want to base the new CT on. There might be several OS templates installed on the Hardware Node; use the vzpkgls command to list the templates installed on your system:

# vzpkgls
fedora-core-3
fedora-core-4
centos-4

Creating Container

After the CT ID and the installed OS template have been chosen, you can create the CT private area with the vzctl create command. The private area is the directory containing the private files of the given CT. The private area is mounted to the /vz/root/CT_ID/ directory on the Hardware Node and provides CT users with a complete Linux file system tree.

The vzctl create command requires only the CT ID and the name of the OS template as arguments; however, in order to avoid setting all the CT resource control parameters after creating the private area, you can specify a sample configuration to be used for your new Container. The sample configuration files reside in the /etc/sysconfig/vz-scripts directory and have names with the following mask: ve-config_name.conf-sample. The most commonly used sample is the ve-vps.basic.conf-sample file; this sample file has resource control parameters suitable for most web site Containers.
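
To see which sample configurations are available on your Node, you can simply list that directory (the exact set of files depends on your installation; the output below is only an illustration):

# ls /etc/sysconfig/vz-scripts/*.conf-sample
/etc/sysconfig/vz-scripts/ve-light.conf-sample
/etc/sysconfig/vz-scripts/ve-vps.basic.conf-sample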

Thus, for example, you can create a new CT by typing the following string:

# vzctl create 101 --ostemplate fedora-core-4 --config vps.basic
Creating CT private area
CT private area was created

In this case, OpenVZ will create a Container with ID 101, the private area based on the fedora-core-4 OS template, and configuration parameters taken from the ve-vps.basic.conf-sample sample configuration file.

If you specify neither an OS template nor a sample configuration, vzctl will try to take the corresponding values from the global OpenVZ configuration file /etc/sysconfig/vz. So you can set the default values in this file using your favorite text file editor, for example:

DEF_OSTEMPLATE="fedora-core-4"
CONFIGFILE="vps.basic"

and do without specifying these parameters each time you create a new CT.

Now you can create a CT with ID 101 with the following command:

# vzctl create 101
Creating CT private area: /vz/private/101
CT is mounted
Postcreate action done
CT is unmounted
CT private area was created

In principle, you are now ready to start your newly created Container. However, you typically need to set its network IP address, host name, DNS server address, and root password before starting the Container for the first time. Please see the next subsection for information on how to perform these tasks.

Configuring Container

Configuring a Container consists of several tasks:

  • Setting Container startup parameters;
  • Setting Container network parameters;
  • Setting Container user passwords;
  • Configuring Quality of Service (Service Level) parameters.

For all these tasks, the vzctl set command is used. Using this command for setting CT startup parameters, network parameters, and user passwords is explained later in this subsection. Service Level Management configuration topics are covered in the Managing Resources chapter.

Setting Startup Parameters

The following options of the vzctl set command define the CT startup parameters: onboot and capability. To make the Container 101 automatically boot at Hardware Node startup, issue the following command:

# vzctl set 101 --onboot yes --save
Saved parameters for CT 101
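
The --capability option is used in a similar way. For example, to allow the Container to set the system time (a sketch; whether you actually need this capability depends on the applications running inside the CT), you might issue:

# vzctl set 101 --capability sys_time:on --save
Saved parameters for CT 101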

Setting Network Parameters

In order to be accessible from the network, a Container must be assigned a correct IP address and host name; the DNS server addresses must also be configured. The session below illustrates setting the Container 101 network parameters:

# vzctl set 101 --hostname test101.my.org --save 
Hostname for CT set: test101.my.org
Saved parameters for CT 101
# vzctl set 101 --ipadd 10.0.186.1 --save
Adding IP address(es): 10.0.186.1
Saved parameters for CT 101
# vzctl set 101 --nameserver 192.168.1.165 --save
File resolv.conf was modified
Saved parameters for CT 101

These commands assign CT 101 the IP address of 10.0.186.1 and the host name of test101.my.org, and set the DNS server address to 192.168.1.165. The --save flag saves all the parameters to the CT configuration file.

You can issue the above commands while the Container is running. In this case, if you do not want the applied values to persist, omit the --save option; the values will then be valid only until the Container is shut down.
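
For example, to add an extra IP address to the running Container for the current session only (the address below is arbitrary):

# vzctl set 101 --ipadd 10.0.186.2
Adding IP address(es): 10.0.186.2

The address is not written to the configuration file and disappears after the Container is stopped.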

To check whether SSH is running inside the Container, use vzctl exec, which allows executing any commands in the Container context.

# vzctl start 101
[This command starts CT 101, if it is not started yet]
# vzctl exec 101 service sshd status
sshd is stopped
# vzctl exec 101 service sshd start
Starting sshd: [  OK  ]
# vzctl exec 101 service sshd status
sshd (pid 16036) is running...

The above example assumes that CT 101 is created on the Fedora Core template. For other OS templates, please consult the corresponding OS documentation.

For more information on running commands inside a CT from the Hardware Node, see the #Running Commands in Container subsection.

Setting root Password for CT

By default, the root account is locked in a newly created CT, and you cannot log in. In order to log in to the CT, it is necessary to create a user account inside the Container and set a password for this account or unlock the root account. The easiest way of doing it is to run:

# vzctl start 101
[This command starts CT 101, if it is not started yet]
# vzctl set 101 --userpasswd root:test

In this example, we set the root password for CT 101 to “test”, and you can log in to the Container via SSH as root and administer it in the same way as you administer a standalone Linux computer: install additional software, add users, set up services, and so on. The password is set inside the CT in the /etc/shadow file in an encrypted form and is not stored in the CT configuration file. Therefore, if you forget the password, you have to reset it. Note that --userpasswd is the only option of the vzctl set command that never requires the --save switch; the password is always persistently set for the given Container.

While you can create users and set passwords for them using the vzctl exec or vzctl set commands, it is suggested that you delegate user management to the Container administrator advising him/her of the CT root account password.
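
For example, to create a regular user from the Hardware Node and set a password for it (a sketch; the user name and password below are arbitrary), you might run:

# vzctl exec 101 useradd webadmin
# vzctl set 101 --userpasswd webadmin:secret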

Starting, Stopping, Restarting, and Querying Status of Container

When a Container is created, it may be started up and shut down like an ordinary computer. To start Container 101, use the following command:

# vzctl start 101
Starting CT ...
CT is mounted
Adding IP address(es): 10.0.186.101
Hostname for CT 101 set: test.my.org
CT start in progress...

To check the status of a CT, use the vzctl status ctid command:

# vzctl status 101
CT 101 exist mounted running

Its output shows the following information:

  • Whether the CT private area exists;
  • Whether this private area is mounted;
  • Whether the Container is running.

In our case, vzctl reports that CT 101 exists, its private area is mounted, and the CT is running. Alternatively, you can make use of the vzlist utility:

# vzlist 101
CTID     NPROC STATUS  IP_ADDR         HOSTNAME       
 101         20 running 10.0.186.101    test.my.org

Still another way of getting the CT status is checking the /proc/vz/veinfo file. This file lists all the Containers currently running on the Hardware Node. Each line presents a running Container in the CT_ID reserved number_of_processes IP_address [IP_address ...] format:

# cat /proc/vz/veinfo
       101     0    20   10.0.186.1
         0     0    48

This output shows that CT 101 is running, there are 20 running processes inside the CT, and its IP address is 10.0.186.1. Note that the second field is reserved; it has no special meaning and should always be zero.

The last line corresponds to the CT with ID 0, which is the Hardware Node itself.

The following command is used to stop a Container:

# vzctl stop 101
Stopping CT ...
CT was stopped
CT is unmounted
# vzctl status 101
CT 101 exist unmounted down

vzctl has a two-minute timeout for the CT shutdown scripts to be executed. If the CT is not stopped in two minutes, the system forcibly kills all the processes in the Container. The Container will be stopped in any case, even if it is seriously damaged. To avoid waiting for two minutes in case of a Container that is known to be corrupt, you may use the --fast switch:

# vzctl stop 101 --fast
Stopping CT ...
CT was stopped
CT is unmounted

Make sure that you do not use the --fast switch with healthy CTs, unless necessary, as forcibly killing CT processes is potentially dangerous.

The vzctl start and vzctl stop commands initiate the normal Linux OS startup or shutdown sequences inside the Container. In case of a Red Hat-like distribution, System V initialization scripts will be executed just like on an ordinary computer. You can customize startup scripts inside the Container as needed.
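
For example, on a Red Hat-like CT you might enable a service at Container startup with chkconfig (a sketch; httpd is only an example, and the corresponding package must be installed inside the CT):

# vzctl exec 101 chkconfig httpd on
# vzctl exec 101 chkconfig --list httpd
httpd           0:off   1:off   2:on    3:on    4:on    5:on    6:off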

To restart a Container, you can also use the vzctl restart command:

# vzctl restart 101
Restarting CT
Stopping CT ...
CT was stopped
CT is unmounted
Starting CT ...
CT is mounted
Adding IP address(es): 10.0.186.101
CT start in progress...

Listing Containers

Very often you may want to get an overview of the Containers existing on the given Hardware Node together with additional information about them: their IP addresses, hostnames, current resource consumption, and so on. In the most general case, you may get a list of all CTs by issuing the following command:

# vzlist -a
     CTID      NPROC STATUS  IP_ADDR         HOSTNAME       
       101          8 running 10.101.66.1     vps101.my.org
       102          7 running 10.101.66.159   vps102.my.org
       103          - stopped 10.101.66.103   vps103.my.org

The -a switch tells the vzlist utility to output both running and stopped CTs. By default, only running CTs are shown. The default columns inform you of the CT IDs, the number of running processes inside CTs, their status, IP addresses, and hostnames. This output may be customized as desired by using vzlist command line switches. For example:

# vzlist -o veid,diskinodes.s -s diskinodes.s
     CTID DQINODES.S
         1     400000
       101     200000
       102     200000

This shows only running CTs with information about their IDs and their soft limit on disk inodes (see the User Guide/Managing Resources chapter for more information), with the list sorted by this soft limit. The full list of the vzlist command line switches and output and sorting options is available in the vzlist subsection of the User Guide/Reference chapter.

Deleting Container

You can delete a Container that is not needed anymore with the vzctl destroy CT_ID command. This command removes the Container private area completely and renames the CT configuration file and action scripts by appending the .destroyed suffix to them.

A running CT cannot be destroyed with the vzctl destroy command. The example below illustrates destroying CT 101:

# vzctl destroy 101
CT is currently mounted (umount first)
# vzctl stop 101
Stopping CT ...
CT was stopped
CT is unmounted
# vzctl destroy 101
Destroying CT private area: /vz/private/101
CT private area was destroyed
# ls /etc/sysconfig/vz-scripts/101.*
/etc/sysconfig/vz-scripts/101.conf.destroyed
/etc/sysconfig/vz-scripts/101.mount.destroyed
/etc/sysconfig/vz-scripts/101.umount.destroyed
# vzctl status 101
CT 101 deleted unmounted down

If you do not need the backup copy of the CT configuration files (with the .destroyed suffix), you may delete them manually.
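
For example, to remove all of them for CT 101 at once:

# rm -f /etc/sysconfig/vz-scripts/101.*.destroyed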

Running Commands in Container

Usually, a Container administrator logs in to the CT via the network and executes commands in the CT just as on any other Linux box. However, you might need to execute commands inside Containers bypassing the normal login sequence. This can happen if:

  • You do not know the Container login information, and you need to run some diagnosis commands inside the CT in order to verify that it is operational.
  • A Container has no network access. For example, the CT administrator might have accidentally applied incorrect firewall rules or stopped the SSH daemon.

OpenVZ allows you to execute commands in a Container in these cases. Use the vzctl exec CT_ID command for running a command inside the CT with the given ID. The session below illustrates the situation when SSH daemon is not started:

# vzctl exec 101 /etc/init.d/sshd status
sshd is stopped
# vzctl exec 101 /etc/init.d/sshd start
Starting sshd:[  OK  ]
# vzctl exec 101 /etc/init.d/sshd status
sshd (pid 26187) is running...

Now CT users can log in to the CT via SSH.

When executing commands inside a Container from shell scripts, use the vzctl exec2 command. It has the same syntax as vzctl exec but returns the exit code of the command being executed instead of the exit code of vzctl itself. You can check the exit code to find out whether the command has completed successfully.
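
A minimal sketch of such a script, assuming a Red Hat-like CT where sshd has an init script (the CT ID and the service are just examples):

#!/bin/sh
# Report whether sshd is running inside CT 101.
# vzctl exec2 returns the exit code of the command executed inside the CT,
# so it can be tested directly.
if vzctl exec2 101 /etc/init.d/sshd status >/dev/null 2>&1; then
    echo "sshd is running in CT 101"
else
    echo "sshd is not running in CT 101"
fi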

If you wish to execute a command in all running CTs, you can use the following script:

# for i in `vzlist -o veid -H`; do \
echo "CT $i"; vzctl exec $i <command>; done

where <command> is the command to be executed in all the running CTs. For example:

# for i in `vzlist -o veid -H`; do \
echo "CT $i"; vzctl exec $i uptime; done
CT 101
  2:26pm  up 6 days,  1:28,  0 users,  load average: 0.00, 0.00, 0.00
CT 102
  2:26pm  up 6 days,  1:39,  0 users,  load average: 0.00, 0.00, 0.00
[The rest of the output is skipped...]