Setting up a Kubernetes cluster
This article describes a way to deploy a Kubernetes cluster on a few CentOS 7 machines, using Docker with the ploop graphdriver.
Prerequisites
Every system should have:
- CentOS 7 minimal installed
- firewalld stopped
- ntpd installed and running
For Docker to work with ploop (on the nodes), you also need:
- ext4 filesystem on /var/lib
- vzkernel installed and booted into
- ploop installed
- docker with ploop graphdriver installed
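A quick way to sanity-check these prerequisites on a node once the steps below are done (a rough sketch; it assumes /var/lib lives on the root filesystem):
df -T /var/lib                       # the Type column should show ext4
uname -r                             # should show a vzkernel release
systemctl is-active ntpd firewalld   # expect "active" for ntpd and "inactive" for firewalld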
CentOS 7 installation
1. In the installer, select "disk", "manual setup", "standard partitioning", "automatically create partitions", then change the filesystem type from xfs to ext4 for / and /boot.
2. After the reboot, log in and edit the /etc/sysconfig/network-scripts/ifcfg-eth0 file, making sure it has the following line:
ONBOOT=yes
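For reference, a minimal DHCP-based ifcfg-eth0 could look like the following (a sketch only; the device name and BOOTPROTO are assumptions that depend on your environment):
DEVICE=eth0
BOOTPROTO=dhcp
ONBOOT=yes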
3. Enable networking:
ifup eth0
4. Update the system:
yum update -y
5. Disable firewalld:
systemctl stop firewalld; systemctl disable firewalld
6. Install and enable ntpd:
yum -y install ntp && systemctl start ntpd && systemctl enable ntpd
Master installation
To install a master node, you need to do the following:
Install etcd and kubernetes-master
yum -y install etcd kubernetes-master
Configure etcd
Make sure /etc/etcd/etcd.conf contains this line:
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379"
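Once the master services are started (see below), a quick way to verify that etcd really listens on all interfaces is to query its version endpoint from a node; 192.168.122.211 is the master IP used throughout this article:
curl http://192.168.122.211:2379/version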
Configure Kubernetes API server
Edit /etc/kubernetes/apiserver so that the API server listens on all interfaces (the nodes connect to it on port 8080) and uses the local etcd.
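The relevant lines might look like this (a sketch based on the stock CentOS package defaults; the service IP range is an assumption, adjust it to your setup):
KUBE_API_ADDRESS="--address=0.0.0.0"
KUBE_API_PORT="--port=8080"
KUBE_ETCD_SERVERS="--etcd_servers=http://127.0.0.1:2379"
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"
KUBE_API_ARGS=""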
Start master node services
for S in etcd kube-apiserver kube-controller-manager kube-scheduler; do
    systemctl restart $S
    systemctl enable $S
done
Set up the nodes (minions)
Set the node hostname
For example:
echo kube-node2 > /etc/hostname
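Alternatively, on CentOS 7 you can use hostnamectl, which updates /etc/hostname and the running hostname in one go:
hostnamectl set-hostname kube-node2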
Install vzkernel
This is needed for ploop to work. If you don't need ploop, you can skip this step.
First, install vzkernel:
rpm -ihv https://download.openvz.org/virtuozzo/factory/x86_64/os/Packages/v/vzkernel-3.10.0-229.7.2.vz7.6.9.x86_64.rpm
Or, use the latest vzkernel from https://download.openvz.org/virtuozzo/factory/x86_64/os/Packages/v/
Then reboot into vzkernel:
reboot
Finally, make sure vzkernel is running:
# uname -r
3.10.0-229.7.2.vz7.6.9
Install docker with ploop graphdriver
This is only needed if you want Docker to use the ploop graphdriver; otherwise you can skip this step.
First, install docker and ploop:
yum install -y wget
cd /etc/yum.repos.d/
wget https://copr.fedoraproject.org/coprs/kir/docker-ploop/repo/epel-7/kir-docker-ploop-epel-7.repo
echo "priority=60" >> kir-docker-ploop-epel-7.repo
yum install ploop docker
Then, set the ploop driver as the default for Docker. Make sure /etc/sysconfig/docker-storage contains:
DOCKER_STORAGE_OPTIONS="-s ploop"
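After Docker is restarted later in this guide, you can confirm that the driver is actually in use (the grep is only a convenience filter):
docker info | grep -i 'storage driver'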
Install flannel and kubernetes-node
yum -y install flannel kubernetes-node
Configure flannel to use the master's etcd
Update the following line in /etc/sysconfig/flanneld to make sure it contains the master IP:
FLANNEL_ETCD="http://192.168.122.211:2379"
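flanneld also expects its network configuration to be stored in etcd under the key prefix set by FLANNEL_ETCD_KEY in /etc/sysconfig/flanneld. A sketch of setting it once on the master, assuming the default /atomic.io/network prefix and a 172.16.0.0/16 overlay network (adjust both to your setup):
etcdctl set /atomic.io/network/config '{ "Network": "172.16.0.0/16" }'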
Configure Kubernetes to use the master node
Update the following line in /etc/kubernetes/config to make sure it contains the master IP:
KUBE_MASTER="--master=http://192.168.122.211:8080"
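To check that a node can actually reach the API server on this insecure port, you can query its version endpoint (a simple check using the master IP from this article):
curl http://192.168.122.211:8080/version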
Configure kubelet
In /etc/kubernetes/kubelet:
1. Enable it to listen on all interfaces:
KUBELET_ADDRESS="--address=0.0.0.0"
2. Comment out this line to use default hostname:
# KUBELET_HOSTNAME
3. Make sure this points to master node IP:
KUBELET_API_SERVER="--api_servers=http://192.168.122.211:8080"
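After these edits, the relevant part of /etc/kubernetes/kubelet might look like this (a sketch; other default lines are omitted and the commented-out default hostname value is an assumption):
KUBELET_ADDRESS="--address=0.0.0.0"
# KUBELET_HOSTNAME="--hostname_override=127.0.0.1"
KUBELET_API_SERVER="--api_servers=http://192.168.122.211:8080"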
Start and enable needed services
Start needed services:
systemctl restart docker
systemctl restart flanneld
systemctl restart kubelet
systemctl restart kube-proxy
NOTE: if 'systemctl restart docker' fails, you might need to run:
systemctl stop docker
ip l del docker0
Enable needed services:
systemctl enable docker
systemctl enable flanneld
systemctl enable kubelet
systemctl enable kube-proxy
Checking that the system is set up
On the master node, check that the needed services are running:
for S in etcd kube-apiserver kube-controller-manager kube-scheduler; do
    systemctl status $S
done
On the nodes, check that the needed services are running:
systemctl status docker
systemctl status flanneld
systemctl status kubelet
systemctl status kube-proxy
Finally, check that the nodes are visible and active:
# kubectl get nodes
NAME         LABELS                              STATUS
kube-node1   kubernetes.io/hostname=kube-node1   Ready
kube-node2   kubernetes.io/hostname=kube-node2   Ready
NOTE: if there are stale nodes listed, you can remove them:
kubectl delete node localhost.localdomain
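If you run kubectl from a machine other than the master, point it at the API server with the -s option (using the master IP from this article):
kubectl -s http://192.168.122.211:8080 get nodes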