Gentoo template creation
This page is about making a template cache for an OpenVZ container from Gentoo Linux. The method is basically the same as described in the [[Slackware template creation]] article.
  
== Download stage3 ==
  
We will make the template from a stage3 file. An OpenVZ OS template should be an archive (.tar.gz) of the root of a working system, but without the kernel and some files. You can download stage3 from the nearest mirror listed at http://www.gentoo.org/main/en/mirrors.xml
or directly from http://distfiles.gentoo.org/releases/x86/current-stage3/
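For example, a typical download looks like this (the exact stage3 filename changes with every release, so substitute the current one listed in that directory):

<pre>
cd /root
wget http://distfiles.gentoo.org/releases/x86/current-stage3/stage3-i686-20111213.tar.bz2
</pre>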
 
  
 
 
== Or try to download a 64-bit stage3 ==
 
 
 
If you have experience with 32-bit containers, you can also try to create a Gentoo template with 64-bit binary support. Download a '''64-bit''' stage3: search for the nearest mirror at http://www.gentoo.org/main/en/mirrors.xml or download directly from:
 
 
 
http://distfiles.gentoo.org/releases/amd64/autobuilds/current-stage3/ (with 32-bit binary '''multilib''' support), or
 
 
 
 
 
http://distfiles.gentoo.org/releases/amd64/autobuilds/current-stage3/hardened/ (stage3-amd64-hardened+nomultilib-20xxxxxx.tar.bz2) for the hardened profile without multilib support (64-bit binary support only for Gentoo template containers).
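For example, to fetch the hardened no-multilib tarball (the <code>20xxxxxx</code> date stamp is a placeholder; substitute the current build date shown in the directory listing):

<pre>
wget http://distfiles.gentoo.org/releases/amd64/autobuilds/current-stage3/hardened/stage3-amd64-hardened+nomultilib-20xxxxxx.tar.bz2
</pre>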
 
 
 
 
 
Don't forget that the host node must support 64-bit binaries too, with or without 32-bit multilib support. Host nodes that support multilib can run 64-bit containers alongside 32-bit containers, but with a '''''slight''''' performance degradation.
 
 
 
Don't forget to check:

<pre>ACCEPT_KEYWORDS="amd64"</pre> in <code>/etc/make.conf</code>, which lets you accept 64-bit binary packages for your containers, and

<pre>cat /proc/cpuinfo</pre> on the host, to verify that the Intel/AMD CPU supports the 64-bit instruction set.
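A quick way to check this is to look for the <code>lm</code> (long mode) CPU flag, which indicates 64-bit capability (this assumes a Linux host exposing <code>/proc/cpuinfo</code>):

<pre>
# prints "lm" for each CPU whose flags include 64-bit long mode
grep -o -w lm /proc/cpuinfo
</pre>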
 
 
 
 
 
'''WARNING''': There is no guarantee that the template will work; you can file bug reports for any errors you hit. I have not run into problems myself, though.
 
 
 
== Create directories for the new container and unarchive stage3 ==
 
  
 
<pre>
mkdir /vz/root/1001
mkdir /vz/private/1001
tar -xvjpf /root/stage3-i686-20111213.tar.bz2 -C /vz/private/1001
</pre>
  
== Create CT config ==
Now you need to create the configuration file for the container, 1001.conf:  
  
 
<pre>
vzctl set 1001 --applyconfig basic --save
</pre>
  
Gentoo users will see a warning, but there is nothing to worry about; just ignore it:
<pre>
WARNING: /etc/vz/conf/1001.conf not found: No such file or directory
</pre>
 
  
If you get the following error, you need to set <code>VE_LAYOUT=simfs</code> in the file <code>/etc/vz/vz.conf</code>. Unfortunately, I couldn't find a solution for ploop.
 
<pre>
# vzctl set 1001 --applyconfig basic --save
Error in ploop_open_dd (di.c:288): Can't resolve /vz/private/1001/root.hdd/DiskDescriptor.xml: No such file or directory
Failed to read /vz/private/1001/root.hdd/DiskDescriptor.xml
Error: failed to apply some parameters, not saving configuration file!
</pre>
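In that case the relevant line in <code>/etc/vz/vz.conf</code> should read:

<pre>
VE_LAYOUT=simfs
</pre>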
== Edit CT Config ==
 
 
First, you need to let vzctl know that this CT is using Gentoo:
 
 
<pre>
echo 'OSTEMPLATE="gentoo"' >> /etc/vz/conf/1001.conf
</pre>
 
 
Creation of the container at the end of this HowTo obeys quota limits and might exceed those limits set in <code>vps.basic</code> by default (at least encountered with the Gentoo 10.1 release). Thus it might be required to increase the limits now. The following value provides a 2.4 GB soft limit with a 2.5 GB hard limit:
 
 
<pre>
DISKSPACE="2.4G:2.5G"
</pre>
 
 
If you use an independent Gentoo portage tree for each container, which is considered the correct approach for Gentoo containers, don't forget to raise the inode limits as well:
 
 
<pre>
DISKINODES="400000:420000"
</pre>
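Instead of editing the config file by hand, the same limits can be applied with <code>vzctl</code> (a sketch; both <code>--diskspace</code> and <code>--diskinodes</code> take soft:hard pairs):

<pre>
vzctl set 1001 --diskspace 2.4G:2.5G --diskinodes 400000:420000 --save
</pre>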
 
 
You should also increase the RAM to a minimum of 512 MB; otherwise, you will get errors during compilation. Since vzctl 3.0.30 you can do:
 
 
<pre>
vzctl set 1001 --ram 512M --swap 1G --save
</pre>
 
 
Prior to vzctl 3.0.30 you have to do the following, which gives you 512 MB guaranteed and 1024 MB burstable:
 
 
<pre>
vzctl set 1001 --vmguarpages 512M --save
vzctl set 1001 --oomguarpages 512M --save
vzctl set 1001 --privvmpages 512M:1024M --save
vzctl set 1001 --swappages 0:1024M --save
</pre>
 
 
An independent Gentoo portage tree for each container is a good idea, because a newer portage tree can drop older ebuilds that are already installed in a container, together with their dependencies. Otherwise you can't reinstall already-installed packages if you bind-mount a newer version of the portage tree into your Gentoo containers.
 
 
After that, copy the configuration file, turning it into a sample configuration for later use:
 
 
<pre>
# cp /etc/vz/conf/1001.conf /etc/vz/conf/ve-gentoo.conf-sample
</pre>
 
 
== Make /etc/mtab a symlink to /proc/mounts ==
 
The container's root filesystem is mounted by the host system, not the guest, and therefore the root fs will not appear in <code>/etc/mtab</code>. This leads to a non-working <code>df</code> command. To fix this, link <code>/etc/mtab</code> to <code>/proc/mounts</code>:
 
<pre>
rm -f /vz/private/1001/etc/mtab
ln -s /proc/mounts /vz/private/1001/etc/mtab
</pre>
 
After replacing <code>/etc/mtab</code> with a symlink to <code>/proc/mounts</code>, you will always have up-to-date information of what is mounted in <code>/etc/mtab</code>. You will, however, have an error on boot (in <code>/var/log/init.log</code>) that can be safely ignored: <code>* /etc/mtab is not updateable [ !! ]</code>
 
 
== Replace /etc/fstab ==
 
  
 
<pre>
echo "proc /proc proc defaults 0 0" > /vz/private/1001/etc/fstab
</pre>
  
We need only <code>/proc</code> to be mounted at boot time.
 
 
== Edit /etc/inittab and /etc/init.d/halt.sh ==
 
 
 
Edit <code>/vz/private/1001/etc/inittab</code> and put a hash mark (#) at the beginning of the lines containing:
 
  
 
<pre>
# TERMINALS
c1:12345:respawn:/sbin/agetty 38400 tty1 linux
c2:2345:respawn:/sbin/agetty 38400 tty2 linux
c3:2345:respawn:/sbin/agetty 38400 tty3 linux
c4:2345:respawn:/sbin/agetty 38400 tty4 linux
c5:2345:respawn:/sbin/agetty 38400 tty5 linux
c6:2345:respawn:/sbin/agetty 38400 tty6 linux
</pre>
  
So that it looks like this:
<pre>
# TERMINALS
#c1:12345:respawn:/sbin/agetty 38400 tty1 linux
#c2:2345:respawn:/sbin/agetty 38400 tty2 linux
#c3:2345:respawn:/sbin/agetty 38400 tty3 linux
#c4:2345:respawn:/sbin/agetty 38400 tty4 linux
#c5:2345:respawn:/sbin/agetty 38400 tty5 linux
#c6:2345:respawn:/sbin/agetty 38400 tty6 linux
</pre>
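As a shortcut, a sed one-liner can comment out all of the agetty lines at once (a sketch; double-check the file afterwards):

<pre>
sed -i -e '/respawn:\/sbin\/agetty/s/^/#/' /vz/private/1001/etc/inittab
</pre>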
 
  
Edit <code>/vz/private/1001/etc/init.d/halt.sh</code> and put a hash mark (#) at the beginning of the lines containing:
  
<pre>sulogin -t 10 /dev/console</pre>
  
The file <code>/vz/private/1001/etc/init.d/halt.sh</code> was removed in Gentoo 11.2 and does not need to be edited there.
  
This prevents <code>getty</code> and login from starting on ttys that do not exist in containers.
  
== Edit /etc/shadow ==
  
Edit <code>/vz/private/1001/etc/shadow</code> and change root's password in the first line to an exclamation mark (!):  
  
 
<pre>root:!:10071:0:::::</pre>
 
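If you prefer a one-liner, something like the following should do the same thing (a sketch; it replaces whatever hash is stored in the second field of the root entry with <code>!</code>):

<pre>
sed -i -e 's/^root:[^:]*:/root:!:/' /vz/private/1001/etc/shadow
</pre>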
  
This will disable root login until the password is changed with <code>vzctl set CTID --userpasswd root:password</code>.
  
== Disable unneeded init scripts ==
  
The checkroot and consolefont init scripts should not be started inside containers:
 
<pre>
rm /vz/private/1001/etc/runlevels/boot/checkroot
rm /vz/private/1001/etc/runlevels/boot/consolefont
</pre>
 
  
The Gentoo 11.2 release has an option in '''rc.conf''': just uncomment <code>rc_sys</code>, set it to "openvz", and it disables these init scripts:
 
 
<pre>
nano /vz/private/1001/etc/rc.conf
rc_sys="openvz"
</pre>
  
== Edit /sbin/rc ==
 
 
Edit <code>/vz/private/1001/sbin/rc</code> and put a hash mark (#) at the beginning of line 244 (your line number may be different):
 
 
 
<pre># try mount -n ${mntcmd:--t sysfs sysfs /sys -o noexec,nosuid,nodev}</pre>
 
 
 
This prevents the container from attempting to mount <code>/sys</code>.
 
 
 
To ensure that this change isn't automatically overwritten on update, add the following to <code>/vz/private/1001/etc/make.conf</code>:
 
 
 
<pre>CONFIG_PROTECT="/sbin/rc"</pre>
 
  
In '''Gentoo 11.2''', <code>/vz/private/1001/sbin/rc</code> is a '''binary''', so I simply skipped this step.
  
== Set up udev ==
 
 
If you use udev you will have problems, since some device nodes are not created. For example, sshd will fail to start because <code>/dev/random</code> and <code>/dev/urandom</code> are missing.
 
So it's recommended to disable udev.
 
Edit <code>/vz/private/1001/etc/conf.d/rc</code> (or <code>/vz/private/1001/etc/conf.d/udev</code> if you are using Gentoo 11.2 or later) and change the <code>RC_DEVICES</code> line to:
 
<pre>
RC_DEVICES="static"
</pre>
 
  
'''Baselayout 2 and OpenRC:''' <code>/vz/private/1001/etc/conf.d/rc</code> is obsolete (http://www.gentoo.org/doc/en/openrc-migration.xml#doc_chap2_sect2) and <code>/vz/private/1001/etc/rc.conf</code> should be used instead. Note, however, that <code>RC_DEVICES</code> is missing from <code>/vz/private/1001/etc/rc.conf</code>.
  
If you want to enable udev anyway, read on.
  
Create some device nodes needed to enter a container:
  
 
<pre>
cd /vz/private/1001/lib
mknod udev/devices/ttyp0 c 3 0
mknod udev/devices/ptyp0 c 2 0
mknod udev/devices/ptmx c 5 2
</pre>
  
Edit <code>/vz/private/1001/etc/conf.d/rc</code> (or <code>/vz/private/1001/etc/conf.d/udev</code> if you are using Gentoo 11.2 or later) and change the <code>RC_DEVICES</code> and <code>RC_DEVICE_TARBALL</code> lines to:
 
<pre>
RC_DEVICES="udev"
RC_DEVICE_TARBALL="no"
</pre>
 
 
 
You have to leave the directory you are in for the next step to be OK, otherwise you will get this error message:
 
<pre>
vzquota : (error) Quota on syscall for 1001: Device or resource busy
vzquota on failed [3]
</pre>
 
  
 
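For example, simply change to the root of the filesystem before continuing (any directory outside the container's private area will do):

<pre>
cd /
</pre>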
  
== Edit /etc/pam.d/chpasswd ==
 
 
Some changes are required so that setting a user's password with the <code>vzctl</code> utility works.
 
Edit <code>/vz/private/1001/etc/pam.d/chpasswd</code> and change the <code>password</code> lines to:
 
  
 
<pre>
password required pam_unix.so md5 shadow
</pre>
  
== Test ==
 
 
<pre>
vzctl start 1001
vzctl enter 1001
</pre>
 
 
 
You can check the running services.
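One way to list them, assuming the standard Gentoo <code>rc-status</code> tool is present in the container, is:

<pre>
rc-status --all
</pre>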
  
All services in boot and default runlevels must be started.
 
 
Enable the SSH daemon if required:
 
 
 
<pre>
rc-update add sshd default
</pre>
 
 
 
{{Warning|'''Do not start sshd''' in the template container, as it would create the server's key pair, which would then be shared among all containers instantiated later.}}
 
 
 
Next, leave the container by pressing Ctrl+D and stop it:
 
  
 
<pre>
vzctl stop 1001
</pre>
  
== Making distfiles and portage tree of the host system available in a container ==
  
{{Warning|This step is optional and will result in shared files between containers! These steps can save space on disk but trade isolation and security... consider your options carefully!}}
  
To install software into a container with portage, you should mount <code>/usr/portage</code> into the container with the "bind" option. Do the following on the host after the container is started:
  
 
<pre>
mkdir /vz/root/1001/usr/portage
mount -o bind /usr/portage /vz/root/1001/usr/portage
</pre>
  
If your <code>/usr/portage/distfiles</code> directory resides on a different partition than your <code>/usr/portage</code> directory, do the following:
  
 
<pre>
mount -n -o bind /usr/portage/distfiles /vz/root/1001/usr/portage/distfiles
</pre>
  
Now, to install a package into a container, you just need to enter the container using <code>vzctl enter</code> and run
  
 
<pre>
emerge packagename
</pre>
while you have all the needed files in the <code>/usr/portage/distfiles</code> of the host system.
  
For security reasons, you should have these directories mounted only while installing software into a container.
+
For security reasons hold this directories mounted only while you are installing software into a VE.
 
 
{{Note|you have to <code>umount /vz/root/1001/usr/portage/distfiles</code> before trying to stop your container.}}
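For example, unmount in the reverse order of mounting (paths assume container 1001 as above):

<pre>
umount /vz/root/1001/usr/portage/distfiles
umount /vz/root/1001/usr/portage
</pre>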
 
 
 
== Dedicated installation of portage ==
 
 
 
If you decide not to share portage with the host as described above, you'll still need a portage tree installed in your container.
 
 
 
Get the latest snapshot of the portage tree from your favourite mirror (http://www.gentoo.org/main/en/mirrors.xml) and extract it into <code>/vz/private/1001/usr</code>:
 
 
 
<pre>
# wget http://distfiles.gentoo.org/releases/snapshots/current/portage-latest.tar.bz2
# tar xjf portage-latest.tar.bz2 -C /vz/private/1001/usr
</pre>
 
 
 
== Host system portage tree and distfiles in read-only mode ==
 
 
 
You can safely share the portage tree from the host system among all Gentoo VPSs by mounting it in read-only mode and defining a dedicated <code>distfiles</code> directory. All files in the regular <code>distfiles</code> directory will also be available to guest containers.
 
 
 
Create <code>/etc/vz/conf/vps.mount</code> to mount the read-only portage tree into all Gentoo guests, or <code>/etc/vz/conf/<vps id>.mount</code> to mount the portage tree into one particular container only:
 
 
 
<pre>
#!/bin/bash
source /etc/vz/vz.conf
source ${VE_CONFFILE}
if [ -d /vz/root/$VEID/usr/portage ]; then
    mount -n --bind -o ro /vz/portage /vz/root/$VEID/usr/portage
fi
</pre>
 
 
 
Make it executable:
 
 
 
<pre>
chmod u+x /etc/vz/conf/vps.mount
</pre>
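If you only want this for one container, the same script can be copied to a per-container mount file instead (the container ID 800 here is just an example):

<pre>
cp /etc/vz/conf/vps.mount /etc/vz/conf/800.mount
chmod u+x /etc/vz/conf/800.mount
</pre>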
 
 
 
 
 
Add the following lines to <code>/vz/private/1001/etc/make.conf</code>:
 
 
 
<pre>
PORTAGE_RO_DISTDIRS="/usr/portage/distfiles"
DISTDIR="/usr/portage_distfiles"
</pre>
 
  
You should update the host-node portage tree on a regular basis to keep it up to date, because <code>emerge --sync</code> won't work inside a guest container.
  
== Create the template cache file ==
  
 
<pre>
cd /vz/private/1001/
tar --numeric-owner -czf /vz/template/cache/gentoo.tar.gz *
</pre>
  
== Test the new template cache file ==
 
 
Create a new container from the template file:
 
  
 
<pre>
vzctl create 800 --config gentoo --ipadd 192.168.0.10 --hostname testvps
</pre>
  
If the container was created successfully, try to start it:  
  
 
<pre>
 
<pre>
Line 344: Line 163:
 
</pre>
 
</pre>
  
If it started, and you can enter it using
 
 
<pre>
vzctl enter 800
</pre>
 
 
 
congratulations, you've got a working Gentoo template!
 
 
 
== Log in over SSH ==
 
 
 
Leave the container by hitting Ctrl+D. To log in over SSH now, you need to set root's password in the running container first:
 
 
 
<pre>
vzctl set 800 --userpasswd root:secret
</pre>
 
 
 
Of course, you should use a different password (replacing <code>secret</code> above), obeying the usual rules for strong passwords. After that the container is ready for login over SSH:
 
 
 
<pre>
ssh root@192.168.0.10
</pre>
 
 
 
 
 
  
 
[[Category: HOWTO]]
[[Category: Templates]]
[[Category: Gentoo]]
 
