Gentoo template creation

From OpenVZ Virtuozzo Containers Wiki

This page describes how to make a template cache for an OpenVZ container from Gentoo Linux. The method is basically the same as the one described in the Slackware template creation article.

Download stage3

We will make the template from a stage3 file. An OpenVZ OS template should be an archive (.tar.gz) of the root of a working system, but without the kernel and some files. You can download a stage3 from the nearest mirror listed at http://www.gentoo.org/main/en/mirrors.xml or directly from http://distfiles.gentoo.org/releases/x86/current-stage3/


Download a 64-bit stage3 (optional)

If you have experience with 32-bit containers, you can also try to create a Gentoo template with 64-bit binary support. For 64-bit Gentoo template creation, search for the nearest mirror at http://www.gentoo.org/main/en/mirrors.xml or download directly from:

http://distfiles.gentoo.org/releases/amd64/autobuilds/current-stage3/ for a stage3 with 32-bit multilib support, or

http://distfiles.gentoo.org/releases/amd64/autobuilds/current-stage3/hardened/ (stage3-amd64-hardened+nomultilib-20xxxxxx.tar.bz2) for the hardened profile without multilib support (only 64-bit binaries will run in the resulting containers!)


Don't forget that the host node must support 64-bit binaries too, with or without 32-bit multilib support! Host nodes that support multilib can run 64-bit containers alongside 32-bit containers, but with a slight performance degradation.

Don't forget to check for:

ACCEPT_KEYWORDS="amd64" in /etc/make.conf

which accepts 64-bit binary packages for your containers, and to run

cat /proc/cpuinfo

to confirm that the CPU supports the 64-bit Intel/AMD instruction set.
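The CPU check above can be made more direct; a small sketch that only inspects the CPU flags (the "lm" flag advertises 64-bit long mode support on Intel/AMD processors):

```shell
# Look for the "lm" (long mode) flag, which indicates a
# 64-bit capable Intel/AMD CPU on the host node.
if grep -qw lm /proc/cpuinfo; then
    echo "CPU supports 64-bit (long mode)"
else
    echo "CPU is 32-bit only"
fi
```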


WARNING: There is no guarantee that the template will work; please report any bugs you run into. I have not encountered problems myself, though.

Create directories for the new container and unarchive stage3

mkdir /vz/root/1001
mkdir /vz/private/1001
tar -xvjpf /root/stage3-i686-20111213.tar.bz2 -C /vz/private/1001

Create CT config

Now you need to create the configuration file for the container, 1001.conf:

vzctl set 1001 --applyconfig basic --save

Gentoo users will see a warning, but there is nothing to worry about; just ignore it:

WARNING: /etc/vz/conf/1001.conf not found: No such file or directory

If you get the following error, you need to set VE_LAYOUT="simfs" in /etc/vz/vz.conf. Unfortunately, I couldn't find a solution for ploop.

# vzctl set 1001 --applyconfig basic --save
Error in ploop_open_dd (di.c:288): Can't resolve /vz/private/1001/root.hdd/DiskDescriptor.xml: No such file or directory
Failed to read /vz/private/1001/root.hdd/DiskDescriptor.xml
Error: failed to apply some parameters, not saving configuration file!
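One way to make that change with sed, assuming /etc/vz/vz.conf already contains a VE_LAYOUT line. The substitution is demonstrated on a temporary copy first, so it can be verified safely; the in-place edit of the host's configuration is left commented out:

```shell
# Demonstrate the substitution on a temporary copy first.
tmp=$(mktemp)
echo 'VE_LAYOUT="ploop"' > "$tmp"
sed -i 's/^VE_LAYOUT=.*/VE_LAYOUT="simfs"/' "$tmp"
cat "$tmp"    # now reads: VE_LAYOUT="simfs"
rm -f "$tmp"
# To apply on the host for real:
#   sed -i 's/^VE_LAYOUT=.*/VE_LAYOUT="simfs"/' /etc/vz/vz.conf
```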

Edit CT Config

First, you need to let vzctl know that this CT is using Gentoo:

echo 'OSTEMPLATE="gentoo"' >> /etc/vz/conf/1001.conf

The container creation at the end of this HowTo obeys quota limits and might exceed the limits set in vps.basic by default (at least this was encountered with the Gentoo 10.1 release). Thus it might be necessary to increase the limits now. The following values provide a 2.4 GB soft limit with a 2.5 GB hard limit:

DISKSPACE="2.4G:2.5G"

If you use an independent Gentoo Portage tree for each container, which is considered the correct approach for Gentoo containers, don't forget to raise the inode limits:

DISKINODES="400000:420000"

You should also increase the RAM to a minimum of 512 MB; otherwise you will get errors during compilation. Since vzctl 3.0.30 you can do:

vzctl set 1001 --ram 512M --swap 1G --save

Prior to vzctl 3.0.30 you have to set the parameters individually; the following gives you 512 MB guaranteed and 1024 MB burstable:

vzctl set 1001 --vmguarpages 512M --save
vzctl set 1001 --oomguarpages 512M --save
vzctl set 1001 --privvmpages 512M:1024M --save
vzctl set 1001 --swappages 0:1024M --save

An independent Gentoo Portage tree for each container is a good idea, because a newer Portage tree can drop older ebuilds that are already installed in a container, together with their dependencies. If you bind-mount a newer version of the Portage tree into Gentoo containers, you can no longer reinstall those already-installed packages.

After that, copy the configuration file, turning it into a sample configuration for later use:

# cp /etc/vz/conf/1001.conf /etc/vz/conf/ve-gentoo.conf-sample

Make /etc/mtab a symlink to /proc/mounts

The container's root filesystem is mounted by the host system, not the guest, and therefore the root fs will not appear in /etc/mtab. This leads to a non-working df command. To fix this, link /etc/mtab to /proc/mounts:

rm -f /vz/private/1001/etc/mtab
ln -s /proc/mounts /vz/private/1001/etc/mtab

After replacing /etc/mtab with a symlink to /proc/mounts, /etc/mtab will always hold up-to-date information about what is mounted. You will, however, see an error on boot (in /var/log/init.log) that can be safely ignored: * /etc/mtab is not updateable [ !! ]

Replace /etc/fstab

echo "proc /proc proc defaults 0 0" > /vz/private/1001/etc/fstab

We need only /proc to be mounted at boot time.

Edit /etc/inittab and /etc/init.d/halt.sh

Edit /vz/private/1001/etc/inittab and put a hash mark (#) at the beginning of the lines containing:

# TERMINALS
c1:12345:respawn:/sbin/agetty 38400 tty1 linux
c2:2345:respawn:/sbin/agetty 38400 tty2 linux
c3:2345:respawn:/sbin/agetty 38400 tty3 linux
c4:2345:respawn:/sbin/agetty 38400 tty4 linux
c5:2345:respawn:/sbin/agetty 38400 tty5 linux
c6:2345:respawn:/sbin/agetty 38400 tty6 linux

Just like that:

# TERMINALS
#c1:12345:respawn:/sbin/agetty 38400 tty1 linux
#c2:2345:respawn:/sbin/agetty 38400 tty2 linux
#c3:2345:respawn:/sbin/agetty 38400 tty3 linux
#c4:2345:respawn:/sbin/agetty 38400 tty4 linux
#c5:2345:respawn:/sbin/agetty 38400 tty5 linux
#c6:2345:respawn:/sbin/agetty 38400 tty6 linux
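If you prefer a one-liner, the same edit can be scripted with sed. The expression is shown on a sample getty line first, so it can be verified; the path in the commented command is the container private area used throughout this guide:

```shell
# Verify the expression on one sample getty line:
printf 'c1:12345:respawn:/sbin/agetty 38400 tty1 linux\n' \
    | sed 's/^c[1-6]:/#&/'
# prints: #c1:12345:respawn:/sbin/agetty 38400 tty1 linux
# To apply to the container:
#   sed -i 's/^c[1-6]:/#&/' /vz/private/1001/etc/inittab
```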

Edit /vz/private/1001/etc/init.d/halt.sh and put a hash mark (#) at the beginning of the lines containing:

sulogin -t 10 /dev/console

The file /vz/private/1001/etc/init.d/halt.sh was removed in Gentoo 11.2 and does not need to be edited there.
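For pre-11.2 releases, the halt.sh edit can likewise be scripted; a sketch, verified on a sample line before touching the container:

```shell
# Comment out any line invoking sulogin:
printf 'sulogin -t 10 /dev/console\n' | sed '/sulogin/s/^/#/'
# prints: #sulogin -t 10 /dev/console
# To apply to the container (pre-11.2 only):
#   sed -i '/sulogin/s/^/#/' /vz/private/1001/etc/init.d/halt.sh
```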

This prevents getty and login from starting on ttys that do not exist in containers.

Edit /etc/shadow

Edit /vz/private/1001/etc/shadow and change root's password in the first line to an exclamation mark (!):

root:!:10071:0:::::

This will disable root login until the password is changed with vzctl set CTID --userpasswd root:password.
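The same change can be made non-interactively; a sketch using sed on a sample shadow line (the hash shown is made up for the demonstration):

```shell
# Replace root's password hash with "!" to lock the account:
printf 'root:$1$abc$fakehash:10071:0:::::\n' \
    | sed 's/^root:[^:]*/root:!/'
# prints: root:!:10071:0:::::
# To apply to the container:
#   sed -i 's/^root:[^:]*/root:!/' /vz/private/1001/etc/shadow
```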

Disable unneeded init scripts

The checkroot and consolefont init scripts should not be started inside containers (skip this step on Gentoo 11.2):

rm /vz/private/1001/etc/runlevels/boot/checkroot
rm /vz/private/1001/etc/runlevels/boot/consolefont

The Gentoo 11.2 release has an option in rc.conf instead: just uncomment rc_sys and set it to "openvz", and it disables these init scripts:

nano /vz/private/1001/etc/rc.conf
rc_sys="openvz"
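This rc.conf change can also be scripted; a sketch on a temporary copy, assuming the stock file ships a commented-out rc_sys line:

```shell
tmp=$(mktemp)
echo '#rc_sys=""' > "$tmp"    # stock commented-out line
sed -i -E 's/^#?rc_sys=.*/rc_sys="openvz"/' "$tmp"
cat "$tmp"                    # now reads: rc_sys="openvz"
rm -f "$tmp"
# To apply to the container:
#   sed -i -E 's/^#?rc_sys=.*/rc_sys="openvz"/' /vz/private/1001/etc/rc.conf
```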

Edit /sbin/rc

Edit /vz/private/1001/sbin/rc and put a hash mark (#) at the beginning of line 244 (your line number may be different):

# try mount -n ${mntcmd:--t sysfs sysfs /sys -o noexec,nosuid,nodev}

This prevents the container from attempting to mount /sys.

To ensure that this change isn't automatically overwritten on update, add the following to /vz/private/1001/etc/make.conf:

CONFIG_PROTECT="/sbin/rc"

In Gentoo 11.2, /vz/private/1001/sbin/rc is a binary, so I just skipped this step.

Set up udev

With udev you will have problems, since some device nodes are not created. For example, sshd will fail to start because /dev/random and /dev/urandom are missing. So it is recommended to disable udev. Edit /vz/private/1001/etc/conf.d/rc (or /vz/private/1001/etc/conf.d/udev if you are using Gentoo 11.2 or later) and change the RC_DEVICES line to:

RC_DEVICES="static"

Baselayout 2 and OpenRC: /vz/private/1001/etc/conf.d/rc is obsolete (http://www.gentoo.org/doc/en/openrc-migration.xml#doc_chap2_sect2) and /vz/private/1001/etc/rc.conf should be used instead. Note, however, that RC_DEVICES is missing from /vz/private/1001/etc/rc.conf.

If you want to enable udev read on.

Create some device nodes needed to enter a container:

cd /vz/private/1001/lib
mknod udev/devices/ttyp0 c 3 0
mknod udev/devices/ptyp0 c 2 0
mknod udev/devices/ptmx c 5 2

Edit /vz/private/1001/etc/conf.d/rc (or /vz/private/1001/etc/conf.d/udev if you are using Gentoo 11.2 or later) and change the RC_DEVICES and RC_DEVICE_TARBALL lines to:

RC_DEVICES="udev"
RC_DEVICE_TARBALL="no"

You have to leave the directory you are in for the next step to succeed; otherwise you will get this error message:

vzquota : (error) Quota on syscall for 1001: Device or resource busy
vzquota on failed [3]

So change back to the root directory first:

cd /

Edit /etc/pam.d/chpasswd

Some changes are required to successfully set a user's password with the vzctl utility. Edit /vz/private/1001/etc/pam.d/chpasswd and change the password lines to:

password required pam_unix.so md5 shadow

Test

vzctl start 1001
vzctl enter 1001

You can check running services:

rc-status -a

All services in the boot and default runlevels must be started.

Enable SSH daemon if required:

rc-update add sshd default
Warning: Do not start sshd in the template container, as it would create the server's host key pair, which would then be shared among all containers instantiated from the template later.

Next, leave the container by pressing Ctrl+D and stop it:

vzctl stop 1001

Making distfiles and portage tree of the host system available in a container

Warning: This step is optional and will result in files being shared between containers! These steps can save disk space, but they trade away isolation and security, so consider your options carefully!

To install software into a container with portage, you should mount /usr/portage into the container with the "bind" option. Do the following on the host after the container is started:

mkdir /vz/root/1001/usr/portage
mount -o bind /usr/portage /vz/root/1001/usr/portage

If your /usr/portage/distfiles directory resides on a different partition than your /usr/portage directory, do the following:

mount -n -o bind /usr/portage/distfiles /vz/root/1001/usr/portage/distfiles

Now, to install a package into a container, you just need to enter the container using vzctl enter and run

emerge package_name

as long as all the needed files are in /usr/portage/distfiles on the host system.

For security reasons, you should have these directories mounted only while installing software into a container.

Note: you have to umount /vz/root/1001/usr/portage/distfiles before trying to stop your container.
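The note above can be turned into a small, idempotent snippet. mountpoint(1) from util-linux checks whether anything is actually mounted, so the snippet is safe to run even when nothing is; the paths follow this guide's container 1001:

```shell
# Unmount in reverse order of mounting; the inner distfiles bind,
# if present, must be released before /usr/portage itself.
for d in /vz/root/1001/usr/portage/distfiles /vz/root/1001/usr/portage; do
    if mountpoint -q "$d"; then
        umount "$d"
    fi
done
```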

Dedicated installation of portage

If you decide not to share Portage with the host as described above, you'll still need Portage installed in your container.

Get the latest snapshot of the Portage tree from your favourite mirror (http://www.gentoo.org/main/en/mirrors.xml) and extract it into /vz/private/1001/usr:

# wget http://distfiles.gentoo.org/releases/snapshots/current/portage-latest.tar.bz2
# tar xjf portage-latest.tar.bz2 -C /vz/private/1001/usr

Host system portage tree and distfiles in read-only mode

You can safely share the Portage tree from the host system among all Gentoo VPSs by mounting it in read-only mode and defining a dedicated distfiles directory. All files in the regular distfiles directory will also be available to the guest containers.

Create /etc/vz/conf/vps.mount to mount the read-only Portage tree into all Gentoo guests, or /etc/vz/conf/<vps id>.mount to mount the Portage tree into one particular container only:

#!/bin/bash
source /etc/vz/vz.conf
source ${VE_CONFFILE}
if [ -d /vz/root/$VEID/usr/portage ]; then
    mount -n --bind -o ro /vz/portage /vz/root/$VEID/usr/portage
fi

Make it executable:

chmod u+x /etc/vz/conf/vps.mount
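vzctl also supports matching umount action scripts (vps.umount, or <vps id>.umount for a single container). A sketch of a companion /etc/vz/conf/vps.umount that releases the bind mount before the container stops; the mountpoint guard is my addition, not part of the original recipe:

```shell
#!/bin/bash
# Companion to vps.mount: release the read-only portage bind mount.
source /etc/vz/vz.conf
source ${VE_CONFFILE}
if mountpoint -q /vz/root/$VEID/usr/portage; then
    umount /vz/root/$VEID/usr/portage
fi
```

Make it executable with chmod u+x, the same as vps.mount.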


Add the following lines to /vz/private/1001/etc/make.conf:

PORTAGE_RO_DISTDIRS="/usr/portage/distfiles"
DISTDIR="/usr/portage_distfiles"

You should update the host node's Portage tree on a regular basis to keep it up to date, because emerge --sync won't work inside a guest container.

Create the template cache file

cd /vz/private/1001/
tar --numeric-owner -czf /vz/template/cache/gentoo.tar.gz *

Test the new template cache file

Create a new container from the template file:

vzctl create 800 --config gentoo --ipadd 192.168.0.10 --hostname testvps

If the container was created successfully, try to start it:

vzctl start 800

If it started, and you can enter it using

vzctl enter 800

congratulations, you've got a working Gentoo template!

Log in over SSH

Leave the container by hitting Ctrl+D. To log in over SSH now, you first need to set root's password in the running container:

vzctl set 800 --userpasswd root:secret

Of course, you should use a different password (replacing secret above), obeying the common rules for strong passwords. After that, the container is ready for login over SSH:

ssh root@192.168.0.10