vzctl for upstream kernel

From OpenVZ Virtuozzo Containers Wiki
Revision as of 14:23, 22 June 2015 by Sergey Bronnikov (talk | contribs) (rename article)

This article describes using OpenVZ tool vzctl as an alternative to LXC tools.

Recent vzctl releases (starting from version 4.0) can be used with upstream (non-OpenVZ) Linux kernels (essentially, any recent 3.x kernel). At the moment, only basic functionality is provided: it is possible to create, start, and stop a container with the same steps as for a normal OpenVZ container. Other features may be present with limited functionality, while some are not present at all. We appreciate all bug reports; please file them to bugzilla.

Running vzctl on upstream kernels is considered an experimental feature. See #Limitations below.


Installation

Note: This section describes installation for RPM-based distros. See #Building below if you want to compile vzctl from source.

First, set up the OpenVZ yum repository: download the openvz.repo file, put it into your /etc/yum.repos.d/ directory, and import the OpenVZ GPG key used for signing RPM packages. This can be achieved by the following commands, run as root:

wget -P /etc/yum.repos.d/ http://download.openvz.org/openvz.repo
rpm --import http://download.openvz.org/RPM-GPG-Key-OpenVZ

If the /etc/yum.repos.d/ directory does not exist, it means that either yum is not installed on your system, or your yum version is too old.

Then, install vzctl-core package:

yum install vzctl-core


Usage

For supported features, usage is expected to be the same as with the standard vzctl tool. See vzctl(8) for more information.
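As a minimal sketch of the basic lifecycle (assuming an OS template named centos-6-x86_64 is present in your template cache and $CTID is a free container ID; the exec step is illustrative and may be limited on upstream kernels):

# vzctl create $CTID --ostemplate centos-6-x86_64
# vzctl start $CTID
# vzctl exec $CTID cat /etc/redhat-release
# vzctl stop $CTID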


Networking

Note: IP mode networking (--ipadd / --ipdel) is currently not supported.

Networking is available through the switches --netdev_add, --netif_add, and their respective deletion counterparts. Unfortunately, it currently requires some manual configuration.

Bridged networking

The following example assumes

  • you already have a bridge configured on the host system
  • bridge interface name is virbr0
  • the CT is running a Red Hat-like distro (such as CentOS)
vzctl set $CTID --netif_add eth0,,,,virbr0 --save
echo "NETWORKING=yes" > /vz/private/$CTID/etc/sysconfig/network
# minimal DHCP configuration for the CT's eth0 (adjust to your setup)
cat << EOF > /vz/private/$CTID/etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
ONBOOT=yes
BOOTPROTO=dhcp
EOF
vzctl start $CTID

After this, you can find CT IP using this:

# ip netns exec $CTID ip address list
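If you prefer a static address over DHCP, the ifcfg-eth0 file written above can instead look like the following (the addresses are placeholders for your network, not values the tool requires):

DEVICE=eth0
ONBOOT=yes
BOOTPROTO=static
IPADDR=192.168.122.100
NETMASK=255.255.255.0
GATEWAY=192.168.122.1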


Limitations

Note: We recommend using the OpenVZ kernel for features, stability and security.

The following vzctl commands do not work at all with a non-OpenVZ kernel:

  • quotaon/quotaoff/quotainit (vzquota-specific)
  • convert, compact, snapshot* (ploop-specific)
  • console (needs a virtual /dev/console, /dev/ttyN device)
  • chkpnt, restore (currently need OpenVZ-kernel-specific checkpointing, CRIU will be supported later)

The following binaries have not been ported to work on top of the upstream kernel:

  • vzlist
  • vzcalc
  • vzcfgvalidate
  • vzcpucheck
  • vzmemcheck
  • vzmigrate
  • vzeventd
  • vzpid
  • vzsplit
  • vzubc

/proc and /sys

Software that depends on information supplied by the proc filesystem may not work correctly, since there is no complete solution for /proc virtualization yet. For instance, /proc/stat is not yet virtualized, so top will show distorted values.
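To see what such tools are actually reading, you can dump the first line of /proc/stat. Inside a container on an upstream kernel this still shows the host's aggregate CPU counters (a quick illustrative check, not a vzctl feature):

```shell
# /proc/stat is not virtualized on upstream kernels, so a process in a
# container sees the host's aggregate CPU counters; tools like top
# compute their percentages from these host-wide numbers.
head -n 1 /proc/stat
```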

Resource management

With a non-OpenVZ kernel, setting resources like --ram and --cpuunits works, but their effect depends on what the current kernel supports through the cgroup subsystem. When a particular cgroup file is present, it will be used. Currently, vzctl searches for the following files:

  • cpu.cfs_quota_us
  • cpu.shares
  • cpuset.cpus
  • memory.limit_in_bytes
  • memory.memsw.limit_in_bytes
  • memory.kmem.limit_in_bytes
  • memory.kmem.tcp.limit_in_bytes
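As an illustrative check (assuming a cgroup v1 layout mounted under /sys/fs/cgroup, which is what vzctl of this era expects; on a pure cgroup v2 host most knobs will be reported missing), you can probe which of these files your kernel provides:

```shell
# Probe which of the cgroup knobs vzctl looks for are present on this host.
knobs="cpu/cpu.cfs_quota_us cpu/cpu.shares cpuset/cpuset.cpus \
       memory/memory.limit_in_bytes memory/memory.memsw.limit_in_bytes \
       memory/memory.kmem.limit_in_bytes memory/memory.kmem.tcp.limit_in_bytes"
for f in $knobs; do
    if [ -f "/sys/fs/cgroup/$f" ]; then
        echo "present: $f"
    else
        echo "missing: $f"
    fi
done
```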


Building

In case you don't want to use packages provided by OpenVZ (available from Download/vzctl) but would rather compile vzctl from source, read on.


The following software needs to be installed on your system:

  • iproute2 >= 3.0.0 (runtime only)
  • libcgroup >= 0.38


You can get the latest released version from Download/vzctl/4.11.1#sources or directly from download:utils/vzctl/current/src/.

If you are living on the bleeding edge, get the vzctl sources from git, then run autogen.sh to recreate the auto* files:

git clone https://src.openvz.org/scm/ovzl/vzctl.git
cd vzctl
./autogen.sh


The usual ./configure && make should do, but you probably want to specify more options. It makes sense to:

  • enable cgroup support
  • add --without-ploop (unless you want ploop support compiled in), because otherwise you will need the ploop library headers (available from Download/ploop)
  • enable bash completion support
  • set prefix to /usr

See ./configure --help output for more details and options available.

So, the command will look like:

$ ./configure --with-cgroup --without-ploop --enable-bashcomp --prefix=/usr 
$ make -j4


Then install it, as root:

# make install

vzctl is also bundled in some Linux distributions, so you can install it using native distro tools (i.e. your package manager).

Known issues and workarounds

A container doesn't boot and udevd is in the process list

udev doesn't work, because uevents are not virtualized yet. If you don't know how to disable it, you can remove the udev package.

vzctl enter doesn't work

You will see this error when trying to use vzctl enter:

Unable to open pty: No such file or directory

If a CT runs in a user namespace, devpts must be mounted with the newinstance option. You can add this option in the container's /etc/fstab file.
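For example, a line like the following in the container's /etc/fstab mounts a private devpts instance (the gid/mode/ptmxmode values are common defaults, not something vzctl requires; adjust as needed):

devpts  /dev/pts  devpts  newinstance,gid=5,mode=620,ptmxmode=0666  0  0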

See also