Gentoo template creation
This page is about making a template cache for OpenVZ container from Gentoo Linux. The method is basically the same as described in Slackware template creation article.
Contents
- 1 Download stage3
- 2 Create directory for the new container and unarchive stage3
- 3 Create CT config
- 4 Edit CT Config
- 5 Make /etc/mtab a symlink to /proc/mounts
- 6 Replace /etc/fstab
- 7 Edit /etc/inittab and /etc/init.d/halt.sh
- 8 Edit /etc/shadow
- 9 Disable unneeded init scripts
- 10 Edit /sbin/rc
- 11 Set up udev
- 12 Edit /etc/pam.d/chpasswd
- 13 Test
- 14 Making distfiles and portage tree of the host system available in a container
- 15 Dedicated installation of portage
- 16 Host system portage tree and distfiles in read-only mode
- 17 Create the template cache file
- 18 Test the new template cache file
- 19 Log in over SSH
Download stage3
We will make the template from a stage3 file. An OpenVZ OS template should be an archive (.tar.gz) of the root of a working system, but without the kernel and some files. You can download stage3 from the nearest mirror listed at http://www.gentoo.org/main/en/mirrors.xml, or directly from http://distfiles.gentoo.org/releases/x86/current-stage3/
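For example, to fetch the snapshot used below into /root (a sketch; substitute the file name of the current stage3 from the mirror listing):
wget -P /root http://distfiles.gentoo.org/releases/x86/current-stage3/stage3-i686-20111213.tar.bz2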
Create directory for the new container and unarchive stage3
mkdir /vz/private/1001
tar -xjf /root/stage3-i686-20111213.tar.bz2 -C /vz/private/1001
Create CT config
Now you need to create the configuration file for the container, 1001.conf:
vzctl set 1001 --applyconfig basic --save
Gentoo users will see a warning here; it is harmless and can be ignored:
WARNING: /etc/vz/conf/1001.conf not found: No such file or directory
Edit CT Config
Add the following to /etc/vz/conf/1001.conf:
echo 'OSTEMPLATE="gentoo"' >>/etc/vz/conf/1001.conf
Container creation at the end of this HowTo obeys quota limits and might exceed the default limits set in vps.basic (at least this was encountered with the Gentoo 10.1 release), so it might be necessary to increase the limits now. The following values provide a 2 GiByte soft limit and a 2.5 GiByte hard limit:
DISKSPACE="2097152:2621440"
If you are using the Gentoo 11.2 release or later, you can skip increasing the disk space limits: by default you get a 2 GiByte soft limit and a 2.3 GiByte hard limit.
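Instead of editing the file by hand, the same limit can be applied with vzctl (a sketch using the values above; --save writes it to 1001.conf):
vzctl set 1001 --diskspace 2097152:2621440 --save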
After that, copy the configuration file to turn it into a sample configuration for later use:
# cp /etc/vz/conf/1001.conf /etc/vz/conf/ve-gentoo.conf-sample
Make /etc/mtab a symlink to /proc/mounts
The container's root filesystem is mounted by the host system, not the guest, so the root fs will not appear in /etc/mtab. This leads to a non-working df command. To fix this, make /etc/mtab a symlink to /proc/mounts.
rm -f /vz/private/1001/etc/mtab
ln -s /proc/mounts /vz/private/1001/etc/mtab
After replacing /etc/mtab with a symlink to /proc/mounts, you will always have up-to-date information about what is mounted in /etc/mtab. You will, however, see an error on boot (in /var/log/init.log) that can be safely ignored:
* /etc/mtab is not updateable [ !! ]
Replace /etc/fstab
echo "proc /proc proc defaults 0 0" > /vz/private/1001/etc/fstab
We need only /proc to be mounted at boot time.
Edit /etc/inittab and /etc/init.d/halt.sh
Edit /vz/private/1001/etc/inittab and put a hash mark (#) at the beginning of the lines containing:
# TERMINALS
c1:12345:respawn:/sbin/agetty 38400 tty1 linux
c2:2345:respawn:/sbin/agetty 38400 tty2 linux
c3:2345:respawn:/sbin/agetty 38400 tty3 linux
c4:2345:respawn:/sbin/agetty 38400 tty4 linux
c5:2345:respawn:/sbin/agetty 38400 tty5 linux
c6:2345:respawn:/sbin/agetty 38400 tty6 linux
So that it looks like this:
# TERMINALS
#c1:12345:respawn:/sbin/agetty 38400 tty1 linux
#c2:2345:respawn:/sbin/agetty 38400 tty2 linux
#c3:2345:respawn:/sbin/agetty 38400 tty3 linux
#c4:2345:respawn:/sbin/agetty 38400 tty4 linux
#c5:2345:respawn:/sbin/agetty 38400 tty5 linux
#c6:2345:respawn:/sbin/agetty 38400 tty6 linux
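If you prefer to script this edit, a sed one-liner along these lines (a sketch; adjust the path for your container) comments out the getty entries in place:
sed -i 's/^c[1-6]:/#&/' /vz/private/1001/etc/inittab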
Edit /vz/private/1001/etc/init.d/halt.sh and put a hash mark (#) at the beginning of the lines containing:
sulogin -t 10 /dev/console
In Gentoo 11.2 the file /vz/private/1001/etc/init.d/halt.sh has been removed and does not need to be edited.
This prevents getty and login from starting on ttys that do not exist in containers.
Edit /etc/shadow
Edit /vz/private/1001/etc/shadow and change root's password in the first line to an exclamation mark (!):
root:!:10071:0:::::
This will disable root login until the password is changed with vzctl set CTID --userpasswd root:password.
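The same change can be made non-interactively, for example (a sketch; it replaces root's password field with an exclamation mark):
sed -i 's/^root:[^:]*:/root:!:/' /vz/private/1001/etc/shadow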
Disable unneeded init scripts
The checkroot and consolefont init scripts should not be started inside containers:
rm /vz/private/1001/etc/runlevels/boot/checkroot
rm /vz/private/1001/etc/runlevels/boot/consolefont
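Alternatively, you can let Gentoo's own tool remove them by chrooting into the private area (a sketch; equivalent to deleting the runlevel symlinks by hand):
chroot /vz/private/1001 rc-update del checkroot boot
chroot /vz/private/1001 rc-update del consolefont boot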
Edit /sbin/rc
Edit /vz/private/1001/sbin/rc and put a hash mark (#) at the beginning of line 244 (your line number may be different):
# try mount -n ${mntcmd:--t sysfs sysfs /sys -o noexec,nosuid,nodev}
This prevents the container from attempting to mount /sys.
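Since the exact line number varies between releases, you can locate the line first, for example (a sketch):
grep -n 'sysfs /sys' /vz/private/1001/sbin/rc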
To ensure that this change isn't automatically overwritten on update, add the following to /vz/private/1001/etc/make.conf:
CONFIG_PROTECT="/sbin/rc"
Set up udev
If you use udev, you will have problems, since some device nodes are not created. For example, sshd will fail to start because /dev/random and /dev/urandom are missing. It is therefore recommended to disable udev.
Edit /vz/private/1001/etc/conf.d/rc and change the RC_DEVICES line to:
RC_DEVICES="static"
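This change can also be scripted, for example (a sketch; assumes conf.d/rc already contains an uncommented RC_DEVICES line):
sed -i 's/^RC_DEVICES=.*/RC_DEVICES="static"/' /vz/private/1001/etc/conf.d/rc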
If you want to enable udev anyway, read on.
Create some device nodes needed to enter a container:
cd /vz/private/1001/lib
mknod udev/devices/ttyp0 c 3 0
mknod udev/devices/ptyp0 c 2 0
mknod udev/devices/ptmx c 5 2
Edit /vz/private/1001/etc/conf.d/rc and change the RC_DEVICES and RC_DEVICE_TARBALL lines to:
RC_DEVICES="udev" RC_DEVICE_TARBALL="no"
You have to leave the directory you are in before the next step, otherwise you will get this error message:
vzquota : (error) Quota on syscall for 1001: Device or resource busy
vzquota on failed [3]
cd /
Edit /etc/pam.d/chpasswd
Some changes are required to successfully set a user's password with the vzctl utility.
Edit /vz/private/1001/etc/pam.d/chpasswd and change the password lines to:
password required pam_unix.so md5 shadow
Test
vzctl start 1001
vzctl enter 1001
You can check running services:
rc-status -a
All services in boot and default runlevels must be started.
Enable SSH daemon if required:
rc-update add sshd default
Warning: Do not start sshd in the template container, as it would create the server's key pair, which would then be shared among all containers instantiated later.
Next, leave the container by pressing Ctrl+D and stop it:
vzctl stop 1001
Making distfiles and portage tree of the host system available in a container
Warning: This step is optional and will result in files shared between containers! It can save disk space, but it trades away isolation and security, so consider your options carefully.
To install software into a container with portage, you should mount /usr/portage into the container with the "bind" option. Do the following on the host after the container is started:
mkdir /vz/root/1001/usr/portage
mount -o bind /usr/portage /vz/root/1001/usr/portage
If your /usr/portage/distfiles directory resides on a different partition than your /usr/portage directory, do the following:
mount -n -o bind /usr/portage/distfiles /vz/root/1001/usr/portage/distfiles
Now, to install a package into a container, you just need to enter the container using vzctl enter and run
emerge package_name
with all the needed files available from the host system's /usr/portage/distfiles.
For security reasons, you should have these directories mounted only while installing software into a container.
Note: you have to umount /vz/root/1001/usr/portage/distfiles before trying to stop your container.
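For example, on the host (a sketch, assuming both bind mounts from above are in place):
umount /vz/root/1001/usr/portage/distfiles
umount /vz/root/1001/usr/portage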
Dedicated installation of portage
If you decide not to share portage with the host as described above, you will still need a portage tree installed in your container.
Get the latest snapshot of the portage tree from your favourite mirror (http://www.gentoo.org/main/en/mirrors.xml) and extract it into /vz/private/1001/usr:
# wget <your-mirror>/snapshots/portage-latest.tar.bz2
# tar xjf portage-latest.tar.bz2 -C /vz/private/1001/usr
Host system portage tree and distfiles in read-only mode
You can safely share the portage tree from the host system among all Gentoo VPSs by mounting it in read-only mode and defining a dedicated distfiles directory. All files in the regular distfiles directory will also be available to guest containers.
Create /etc/vz/conf/vps.mount to mount the read-only portage tree into all Gentoo guests, or /etc/vz/conf/<vps id>.mount to mount it only into a particular container:
#!/bin/bash
source /etc/vz/vz.conf
source ${VE_CONFFILE}
if [ -d /vz/root/$VEID/usr/portage ]; then
    mount -n --bind -o ro /vz/portage /vz/root/$VEID/usr/portage
fi
Make it executable:
chmod u+x /etc/vz/conf/vps.mount
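vzctl also supports a matching umount action script (/etc/vz/conf/vps.umount globally, or <vps id>.umount per container). A minimal sketch that cleans up the bind mount when the container stops, under the same assumptions as the mount script above:
#!/bin/bash
source /etc/vz/vz.conf
source ${VE_CONFFILE}
# unmount the read-only portage tree if it is still mounted
if mountpoint -q /vz/root/$VEID/usr/portage; then
    umount /vz/root/$VEID/usr/portage
fi
Remember to make this script executable as well.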
Add the following lines to /vz/private/1001/etc/make.conf:
PORTAGE_RO_DISTDIRS="/usr/portage/distfiles"
DISTDIR="/usr/portage_distfiles"
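If the dedicated DISTDIR does not already exist inside the container, create it, for example (a sketch, run on the host):
mkdir -p /vz/private/1001/usr/portage_distfiles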
You should update the host-node portage tree on a regular basis, because emerge --sync won't work inside a guest container.
Create the template cache file
cd /vz/private/1001/
tar --numeric-owner -czf /vz/template/cache/gentoo.tar.gz *
Test the new template cache file
Create a new container from the template file:
vzctl create 800 --config gentoo --ipadd 192.168.0.10 --hostname testvps
If the container was created successfully, try to start it:
vzctl start 800
If it started, and you can enter it using
vzctl enter 800
congratulations, you've got a working Gentoo template!
Log in over SSH
Leave the container by hitting Ctrl+D. To log in over SSH, you need to set root's password in the running container first:
vzctl set 800 --userpasswd root:secret
Of course, you should use a different password (replacing secret above), obeying common rules for strong passwords. After that, the container is ready for login over SSH:
ssh root@192.168.0.10