Gentoo template creation
This page describes how to make a template cache for an OpenVZ container from Gentoo Linux. The method is basically the same as described in the Slackware template creation article.
Download stage3
We will make the template from a stage3 tarball. An OpenVZ OS template is an archive (.tar.gz) of the root of a working system, but without the kernel and some files. You can download a stage3 from the nearest mirror listed at http://www.gentoo.org/main/en/mirrors.xml, or directly from http://distfiles.gentoo.org/releases/x86/current-stage3/
Create directory for the new container and unarchive stage3
mkdir /vz/private/777
tar -xjf /root/stage3-i686-20111213.tar.bz2 -C /vz/private/777
Create CT config
Now you need to create the configuration file for the container, 777.conf:
vzctl set 777 --applyconfig basic --save
You will see a warning here, but it is harmless and can safely be ignored:
WARNING: /etc/vz/conf/777.conf not found: No such file or directory
Edit CT config
Add the following to /etc/vz/conf/777.conf:
echo 'OSTEMPLATE="gentoo"' >> /etc/vz/conf/777.conf
A container created at the end of this HowTo obeys quota limits and might exceed those set in vps.basic by default (at least this was encountered with the Gentoo 10.1 release), so it might be necessary to increase the limits now. The following values provide a 2 GiB soft limit with a 2.5 GiB hard limit:
DISKSPACE="2097152:2621440"
After that, copy the configuration file, turning it into a sample configuration for later use:
# cp /etc/vz/conf/777.conf /etc/vz/conf/ve-gentoo.conf-sample
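The DISKSPACE values above are expressed in 1 KiB blocks, so the numbers come from simple arithmetic; a quick sketch to verify them:

```shell
# DISKSPACE is specified in 1 KiB blocks: soft limit 2 GiB, hard limit 2.5 GiB.
soft=$((2 * 1024 * 1024))       # 2 GiB expressed in KiB blocks
hard=$((5 * 1024 * 1024 / 2))   # 2.5 GiB expressed in KiB blocks
echo "DISKSPACE=\"${soft}:${hard}\""
```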
Make /etc/mtab a symlink to /proc/mounts
The container's root filesystem is mounted by the host system, not the guest, and therefore the root fs will not appear in /etc/mtab. This leads to a non-working df command. To fix it, link /etc/mtab to /proc/mounts:
rm -f /vz/private/777/etc/mtab
ln -s /proc/mounts /vz/private/777/etc/mtab
After replacing /etc/mtab with a symlink to /proc/mounts, you will always have up-to-date information about what is mounted in /etc/mtab. You will, however, see an error on boot (in /var/log/init.log) that can be safely ignored:
 * /etc/mtab is not updateable [ !! ]
Replace /etc/fstab
echo "proc /proc proc defaults 0 0" > /vz/private/777/etc/fstab
We need only /proc to be mounted at boot time.
Edit /etc/inittab and /etc/init.d/halt.sh
Edit /vz/private/777/etc/inittab and put a hash mark (#) at the beginning of the lines containing:
c?:1235:respawn:/sbin/agetty 38400 tty? linux
Edit /vz/private/777/etc/init.d/halt.sh and put a hash mark (#) at the beginning of the line containing:
sulogin -t 10 /dev/console
This prevents getty and login from starting on ttys that do not exist in containers.
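The two edits above can also be scripted with sed. The following is a sketch; the helper name and container path are illustrative, not part of the original HowTo:

```shell
# disable_gettys: comment out the agetty respawn lines in inittab and
# the sulogin line in halt.sh under a container's private area.
disable_gettys() {
    sed -i '/respawn:\/sbin\/agetty/s/^/#/' "$1/etc/inittab"
    sed -i '/sulogin -t 10 \/dev\/console/s/^/#/' "$1/etc/init.d/halt.sh"
}
# Example (hypothetical path): disable_gettys /vz/private/777
```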
Edit /etc/shadow
Edit /vz/private/777/etc/shadow and change root's password in the first line to an exclamation mark (!):
root:!:10071:0:::::
This will disable root login until the password is changed with vzctl set CTID --userpasswd root:password.
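For scripting, the same change can be made with sed; a sketch, where the helper name and path are assumptions:

```shell
# lock_root: replace root's password field with "!" in a shadow file,
# disabling password logins until vzctl sets a new password.
lock_root() {
    sed -i 's/^root:[^:]*:/root:!:/' "$1"
}
# Example (hypothetical path): lock_root /vz/private/777/etc/shadow
```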
Disable unneeded init scripts
The checkroot and consolefont init scripts should not be started inside containers:
rm /vz/private/777/etc/runlevels/boot/checkroot
rm /vz/private/777/etc/runlevels/boot/consolefont
Edit /sbin/rc
Edit /vz/private/777/sbin/rc and put a hash mark (#) at the beginning of line 244 (your line number may be different):
# try mount -n ${mntcmd:--t sysfs sysfs /sys -o noexec,nosuid,nodev}
This prevents the container from attempting to mount /sys.
To ensure that this change isn't automatically overwritten on update, add the following to /vz/private/777/etc/make.conf:
CONFIG_PROTECT="/sbin/rc"
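Since the exact line number varies between baselayout versions, matching on the line's content is more robust than counting lines; a sketch, with an illustrative helper name and path:

```shell
# comment_sysfs_mount: comment out the line in /sbin/rc that tries to
# mount sysfs, so the container does not attempt to mount /sys.
comment_sysfs_mount() {
    sed -i '/try mount -n .*sysfs/s/^/#/' "$1"
}
# Example (hypothetical path): comment_sysfs_mount /vz/private/777/sbin/rc
```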
Set up udev
With udev you will have problems, since some device nodes are not created. For example, sshd will fail to start because /dev/random and /dev/urandom are missing. It is therefore recommended to disable udev.
Edit /vz/private/777/etc/conf.d/rc and change the RC_DEVICES line to:
RC_DEVICES="static"
If you want to enable udev anyway, read on.
Create some device nodes needed to enter a container:
cd /vz/private/777/lib
mknod udev/devices/ttyp0 c 3 0
mknod udev/devices/ptyp0 c 2 0
mknod udev/devices/ptmx c 5 2
Edit /vz/private/777/etc/conf.d/rc and change the RC_DEVICES and RC_DEVICE_TARBALL lines to:
RC_DEVICES="udev"
RC_DEVICE_TARBALL="no"
You have to leave the directory you are in for the next steps to work; otherwise you will get this error message:
vzquota : (error) Quota on syscall for 777: Device or resource busy
vzquota on failed [3]
cd /
Edit /etc/pam.d/chpasswd
A change is required so that setting a user's password with the vzctl utility succeeds.
Edit /vz/private/777/etc/pam.d/chpasswd and change the password lines to:
password required pam_unix.so md5 shadow
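As a scripted alternative, a sketch that rewrites every password line in that PAM file; the helper name and path are assumptions:

```shell
# set_chpasswd_pam: point the password stack of a PAM service file at
# pam_unix.so with md5 and shadow, as needed by vzctl --userpasswd.
set_chpasswd_pam() {
    sed -i 's/^password[[:space:]].*/password required pam_unix.so md5 shadow/' "$1"
}
# Example (hypothetical path): set_chpasswd_pam /vz/private/777/etc/pam.d/chpasswd
```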
Test
vzctl start 777
vzctl enter 777
You can check running services:
rc-status -a
All services in boot and default runlevels must be started.
Enable SSH daemon if required:
rc-update add sshd default
Warning: Do not start sshd in the template container, as it would create the server's key pair, which would then be shared among all containers instantiated later.
Next, leave the container by pressing Ctrl+D, and stop it:
vzctl stop 777
Making distfiles and portage tree of the host system available in a container
Warning: This step is optional and will result in files being shared between containers! It can save disk space, but trades away isolation and security; consider your options carefully!
To install software into a container with portage, you should mount /usr/portage into the container with the "bind" option. Do the following on the host after the container is started:
mkdir /vz/root/777/usr/portage
mount -o bind /usr/portage /vz/root/777/usr/portage
If your /usr/portage/distfiles directory resides on a different partition than your /usr/portage directory, also do the following:
mount -n -o bind /usr/portage/distfiles /vz/root/777/usr/portage/distfiles
Now, to install a package into a container, you just need to enter the container using vzctl enter and run
emerge package_name
while you have all the needed files in the host system's /usr/portage/distfiles.
For security reasons, you should have these directories mounted only while installing software into a container.
Note: you have to umount /vz/root/777/usr/portage/distfiles before trying to stop your container.
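To avoid forgetting this, the unmounting can be automated with an OpenVZ umount action script. The following is a sketch of /etc/vz/conf/777.umount; the grep-based mount check and the paths are assumptions, so adapt them to your setup:

```shell
#!/bin/bash
# Unmount the bind-mounted portage directories before the container
# stops, so that stopping the container does not fail.
source /etc/vz/vz.conf
source ${VE_CONFFILE}
grep -q " /vz/root/$VEID/usr/portage/distfiles " /proc/mounts && \
    umount /vz/root/$VEID/usr/portage/distfiles
grep -q " /vz/root/$VEID/usr/portage " /proc/mounts && \
    umount /vz/root/$VEID/usr/portage
exit 0
```

Make the script executable with chmod u+x /etc/vz/conf/777.umount.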
Dedicated installation of portage
If you decide not to share portage with the host as described above, you will still need portage installed in your container.
Get the latest snapshot of the portage tree from your favourite mirror (http://www.gentoo.org/main/en/mirrors.xml) and extract it into /vz/private/777/usr:
# wget <your-mirror>/snapshots/portage-latest.tar.bz2
# tar xjf portage-latest.tar.bz2 -C /vz/private/777/usr
Host system portage tree and distfiles in read-only mode
You can safely share the portage tree from the host system among all Gentoo VPSs by mounting it read-only and defining a dedicated distfiles directory. All files in the regular distfiles directory will also be available to guest containers.
Create /etc/vz/conf/vps.mount to mount the read-only portage into all Gentoo guests, or /etc/vz/conf/<vps id>.mount to mount the portage tree only into a particular container:
#!/bin/bash
source /etc/vz/vz.conf
source ${VE_CONFFILE}
if [ -d /vz/root/$VEID/usr/portage ]; then
    mount -n --bind -o ro /vz/portage /vz/root/$VEID/usr/portage
fi
Make it executable:
chmod u+x /etc/vz/conf/vps.mount
Add the following lines to /vz/private/777/etc/make.conf:
PORTAGE_RO_DISTDIRS="/usr/portage/distfiles"
DISTDIR="/usr/portage_distfiles"
You should update the host-node portage tree on a regular basis, because emerge --sync won't work inside a guest container.
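One way to keep the shared tree fresh is a cron job on the host node; a sketch of a root crontab entry (the schedule is arbitrary):

```shell
# Re-sync the shared portage tree every night at 03:30 on the host,
# since guests cannot run emerge --sync against the read-only mount.
30 3 * * * emerge --sync --quiet
```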
Create the template cache file
cd /vz/private/777/
tar --numeric-owner -czf /vz/template/cache/gentoo.tar.gz *
Test the new template cache file
Create a new container from the template file:
vzctl create 800 --config gentoo --ipadd 192.168.0.10 --hostname testvps
If the container was created successfully, try to start it:
vzctl start 800
If it started, and you can enter it using
vzctl enter 800
congratulations, you've got a working Gentoo template!
Log in over SSH
Leave the container by hitting Ctrl+D. To log in over SSH, you first need to set root's password in the running container:
vzctl set 800 --userpasswd root:secret
Of course, you should use a different password (replacing secret above), obeying common rules for strong passwords. After that, the container is ready for login over SSH:
ssh root@192.168.0.10