This article describes a way to deploy a Kubernetes cluster on a few CentOS 7 machines, using Docker with the ploop graphdriver.
{{Stub}}
== Prerequisites ==
* CentOS 7 minimal installed
* firewalld stopped
* ntpd installed and running

For Docker to work with ploop (on the nodes), you also need:
* ext4 filesystem on /var/lib
* vzkernel installed and booted into
* ploop installed
* docker with ploop graphdriver installed

== CentOS 7 installation ==
1. Select "disk", "manual setup", "standard partitioning", "automatically create partitions", then change xfs to ext4 for / and /boot.

2. After reboot, log in and edit the <code>/etc/sysconfig/network-scripts/ifcfg-eth0</code> file, making sure it has the following line:
ONBOOT=yes

3. Enable networking:
ifup eth0

4. Update the system:
yum update -y

5. Stop and disable firewalld:
systemctl stop firewalld
systemctl disable firewalld

6. Install, start, and enable ntpd (the package is called ntp, the daemon is ntpd):
yum -y install ntp
systemctl start ntpd
systemctl enable ntpd
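Since ploop requires an ext4 filesystem under /var/lib (see the prerequisites above), it is worth double-checking the filesystem type before going further; findmnt is part of util-linux on CentOS 7, and it should print ext4 here:
findmnt -n -o FSTYPE --target /var/lib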
== Master installation ==
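Assuming the stock CentOS packaging (the nodes below use the matching kubernetes-node package), first install etcd and the Kubernetes master components:
yum -y install etcd kubernetes-master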
Make sure <code>/etc/kubernetes/apiserver</code> has the API server listening on all interfaces, so the nodes can reach it.
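With the stock CentOS kubernetes package, the relevant lines look like the sketch below; the etcd address shown is the packaged default, so treat both values as assumptions to adapt:
KUBE_API_ADDRESS="--address=0.0.0.0"
KUBE_ETCD_SERVERS="--etcd_servers=http://127.0.0.1:2379"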
Then restart and enable the master services:
for S in etcd kube-apiserver kube-controller-manager kube-scheduler; do
    systemctl restart $S
    systemctl enable $S
done
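To quickly confirm the API server is answering (in this setup it serves an unauthenticated /version endpoint on its default insecure port 8080):
curl http://localhost:8080/version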
== Set up the nodes (minions) ==
=== Install vzkernel ===
Install the OpenVZ kernel (needed for ploop):
rpm -ihv https://download.openvz.org/virtuozzo/factory/x86_64/os/Packages/v/vzkernel-3.10.0-229.7.2.vz7.6.9.x86_64.rpm
Or, use the latest vzkernel from https://download.openvz.org/virtuozzo/factory/x86_64/os/Packages/v/

Then reboot into vzkernel:
reboot

Finally, make sure vzkernel is running:
# uname -r
3.10.0-229.7.2.vz7.6.9

=== Install docker with ploop graphdriver ===
This is needed for ploop to work. If you don't need ploop, you can skip this step.
First, enable the docker-ploop repo:
yum install -y wget
cd /etc/yum.repos.d/
wget https://copr.fedoraproject.org/coprs/kir/docker-ploop/repo/epel-7/kir-docker-ploop-epel-7.repo
Then install ploop and docker:
yum install -y ploop docker
Next, tell docker to use the ploop graphdriver. In /etc/sysconfig/docker-storage, set:
DOCKER_STORAGE_OPTIONS="-s ploop"
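Once docker has been restarted (see "Start needed services" below), you can verify that the ploop graphdriver actually took effect:
docker info | grep 'Storage Driver'
It should report <code>ploop</code>.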
=== Install flannel and kubernetes-node ===
yum -y install flannel kubernetes-node
=== Configure flannel for master etcd ===
Update the following line in /etc/sysconfig/flanneld to make sure it contains the master IP:
FLANNEL_ETCD="http://192.168.122.211:2379"
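Note that flanneld will not finish starting until a network configuration exists in etcd under its key (<code>/coreos.com/network</code> is the default for flannel of this vintage; check FLANNEL_ETCD_KEY in /etc/sysconfig/flanneld). On the master, you can publish one; the subnet below is an arbitrary example:
etcdctl set /coreos.com/network/config '{ "Network": "172.16.0.0/16" }'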
=== Configure Kubernetes for master node ===
Update the following line in /etc/kubernetes/config to make sure it contains the master IP:
KUBE_MASTER="--master=http://192.168.122.211:8080"
=== Configure kubelet ===
In /etc/kubernetes/kubelet:
1. Enable it to listen on all interfaces:
KUBELET_ADDRESS="--address=0.0.0.0"
2. Comment out this line to use the default hostname:
# KUBELET_HOSTNAME
3. Make sure this points to master node IP:
KUBELET_API_SERVER="--api_servers=http://192.168.122.211:8080"
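Taken together, the relevant lines of <code>/etc/kubernetes/kubelet</code> end up as follows (192.168.122.211 being the example master IP used throughout this article; the commented-out hostname line keeps its stock packaged value):
KUBELET_ADDRESS="--address=0.0.0.0"
# KUBELET_HOSTNAME="--hostname_override=127.0.0.1"
KUBELET_API_SERVER="--api_servers=http://192.168.122.211:8080"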
=== Start needed services ===
systemctl restart docker
systemctl restart flanneld
systemctl restart kubelet
systemctl restart kube-proxy
NOTE: if <code>systemctl restart docker</code> fails, you might need to remove the stale docker0 bridge and try again:
systemctl stop docker
ip l del docker0
systemctl restart docker
Enable needed services:
systemctl enable docker
systemctl enable flanneld
systemctl enable kubelet
systemctl enable kube-proxy
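The restart/enable pairs above can also be written as one loop, mirroring the master-side loop:
for S in docker flanneld kubelet kube-proxy; do
    systemctl restart $S
    systemctl enable $S
done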
== Checking that the system is set up ==
On the master node, check that the needed services are running:
for S in etcd kube-apiserver kube-controller-manager kube-scheduler; do
systemctl status $S
done
On the nodes, check that the needed services are running:
systemctl status docker
systemctl status flanneld
systemctl status kubelet
systemctl status kube-proxy
Finally, check that the nodes are visible and active:
# kubectl get nodes
NAME LABELS STATUS
kube-node1 kubernetes.io/hostname=kube-node1 Ready
kube-node2 kubernetes.io/hostname=kube-node2 Ready
NOTE: if there are some stale nodes listed, you can remove those:
kubectl delete node localhost.localdomain
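As a final smoke test, you can ask Kubernetes to schedule a container; nginx here is just an arbitrary public image, and this assumes your kubectl is recent enough to have the <code>run</code> command:
kubectl run nginx --image=nginx
kubectl get pods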
== See also ==
* [https://github.com/coreos/etcd/blob/master/Documentation/configuration.md etcd configuration]