Setting up Kubernetes cluster

From OpenVZ Virtuozzo Containers Wiki
Latest revision as of 21:58, 14 September 2015

This article describes a way to deploy a Kubernetes cluster on a few CentOS 7 machines, using Docker with ploop graphdriver.

Prerequisites

Every system should have:

  • CentOS 7 minimal installed
  • firewalld stopped
  • ntpd installed and running

For Docker to work with ploop (on the nodes), you also need:

  • ext4 filesystem on /var/lib
  • vzkernel installed and booted into
  • ploop installed
  • docker with ploop graphdriver installed

CentOS 7 installation

1. Select "disk", "manual setup", "standard partitioning", "automatically create partitions", then change xfs to ext4 for / and /boot.

2. After reboot, login and edit /etc/sysconfig/network-scripts/ifcfg-eth0 file, making sure it has the following line:

ONBOOT=yes

3. Enable networking:

ifup eth0

4. Update the system:

yum update -y

5. Disable firewalld:

systemctl stop firewalld; systemctl disable firewalld

6. Install and enable ntpd:

yum -y install ntp && systemctl start ntpd && systemctl enable ntpd


Master installation

To install a master node, you need to do the following:

Install etcd and kubernetes-master

yum -y install etcd kubernetes-master

Configure etcd

Make sure /etc/etcd/etcd.conf contains this line:

ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379"
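This edit can be scripted. A minimal sketch using sed, demonstrated on a temporary copy of the file (on a real master you would point CONF at /etc/etcd/etcd.conf instead):

```shell
# Rewrite ETCD_LISTEN_CLIENT_URLS in an etcd.conf-style file.
# Demonstrated on a temporary copy; set CONF=/etc/etcd/etcd.conf on a real master.
CONF=$(mktemp)
printf '%s\n' 'ETCD_NAME=default' 'ETCD_LISTEN_CLIENT_URLS="http://localhost:2379"' > "$CONF"
sed -i 's|^ETCD_LISTEN_CLIENT_URLS=.*|ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379"|' "$CONF"
grep '^ETCD_LISTEN_CLIENT_URLS' "$CONF"
# prints: ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379"
```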

Configure Kubernetes API server

Make sure /etc/kubernetes/apiserver contains this:
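The required contents are not listed in the article. For reference, a stock /etc/kubernetes/apiserver from the CentOS packages of that era contained settings along these lines (illustrative only; the exact variable names, flags, and values depend on the package version, so verify against your own file):

```shell
# Illustrative /etc/kubernetes/apiserver fragment (values are assumptions):
KUBE_API_ADDRESS="--address=0.0.0.0"
KUBE_ETCD_SERVERS="--etcd_servers=http://127.0.0.1:2379"
```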

Start master node services

for S in etcd kube-apiserver kube-controller-manager kube-scheduler; do 
  systemctl restart $S
  systemctl enable $S
done

Set nodes (minions)

Set the node hostname

For example:

echo kube-node2 > /etc/hostname
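On CentOS 7 the same change can also be made with hostnamectl, which applies it to the running system without a reboot. The sketch below adds `valid_hostname`, an illustrative helper (not part of the original article) that rejects obviously bad names before setting one:

```shell
# valid_hostname: illustrative check that a name only uses [a-z0-9.-] and is non-empty.
valid_hostname() { case "$1" in *[!a-z0-9.-]*|'') return 1 ;; *) return 0 ;; esac; }

NODE=kube-node2
valid_hostname "$NODE" && echo "ok: $NODE"
# prints: ok: kube-node2
# On a real node (needs systemd and root): hostnamectl set-hostname "$NODE"
```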

Install vzkernel

This is needed for ploop to work. If you don't need ploop, you can skip this step.

First, install vzkernel:

rpm -ihv https://download.openvz.org/virtuozzo/factory/x86_64/os/Packages/v/vzkernel-3.10.0-229.7.2.vz7.6.9.x86_64.rpm

Or, use the latest vzkernel from https://download.openvz.org/virtuozzo/factory/x86_64/os/Packages/v/

Then reboot into vzkernel:

reboot

Finally, make sure vzkernel is running:

uname -r
3.10.0-229.7.2.vz7.6.9
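This check can be scripted so provisioning fails early if the box is still on the stock kernel. `is_vzkernel` is an illustrative helper (an assumption, not from the article) that matches on the `vz` tag in the kernel release string:

```shell
# is_vzkernel: succeed if a kernel release string looks like a vzkernel build.
is_vzkernel() { case "$1" in *vz*) return 0 ;; *) return 1 ;; esac; }

if is_vzkernel "$(uname -r)"; then
    echo "vzkernel running"
else
    echo "not a vzkernel; reboot into it first" >&2
fi
```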

Install docker with ploop graphdriver

This is needed for docker to use the ploop graphdriver. If you don't need ploop, install the standard docker package instead and skip the repo setup below.

First, install docker and ploop:

yum install -y wget
cd /etc/yum.repos.d/
wget https://copr.fedoraproject.org/coprs/kir/docker-ploop/repo/epel-7/kir-docker-ploop-epel-7.repo
echo "priority=60" >> kir-docker-ploop-epel-7.repo
yum install ploop docker

Then, set ploop driver to be default for docker. Make sure /etc/sysconfig/docker-storage contains:

DOCKER_STORAGE_OPTIONS="-s ploop"

Install flannel and kubernetes-node

yum -y install flannel kubernetes-node

Configure flannel for master etcd

Update the following line in /etc/sysconfig/flanneld to make sure it contains the master IP:

FLANNEL_ETCD="http://192.168.122.211:2379"

Configure Kubernetes for master node

Update the following line in /etc/kubernetes/config to make sure it contains the master IP:

KUBE_MASTER="--master=http://192.168.122.211:8080"

Configure kubelet

In /etc/kubernetes/kubelet:

1. Enable it to listen on all interfaces:

KUBELET_ADDRESS="--address=0.0.0.0"

2. Comment out this line to use default hostname:

# KUBELET_HOSTNAME

3. Make sure this points to master node IP:

KUBELET_API_SERVER="--api_servers=http://192.168.122.211:8080"
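Taken together, the resulting kubelet config fragment (master IP as in the examples above) looks like:

```shell
KUBELET_ADDRESS="--address=0.0.0.0"
# KUBELET_HOSTNAME left commented out so the node's own hostname is used
KUBELET_API_SERVER="--api_servers=http://192.168.122.211:8080"
```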

Start needed services

Start needed services:

systemctl restart docker
systemctl restart flanneld
systemctl restart kubelet
systemctl restart kube-proxy

NOTE: if 'systemctl restart docker' fails, you might need to remove the stale docker0 bridge and try again:

systemctl stop docker
ip l del docker0
systemctl restart docker

Enable needed services:

systemctl enable docker
systemctl enable flanneld
systemctl enable kubelet
systemctl enable kube-proxy
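The restart and enable commands above can be folded into one loop, mirroring the master setup (not run here, since it needs systemd and the services installed):

```shell
for S in docker flanneld kubelet kube-proxy; do
    systemctl restart $S
    systemctl enable $S
done
```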

Checking that the system is set up

On the master node, check that the needed services are running:

for S in etcd kube-apiserver kube-controller-manager kube-scheduler; do
 systemctl status $S
done

On the nodes, check that the needed services are running:

systemctl status docker
systemctl status flanneld
systemctl status kubelet
systemctl status kube-proxy

Finally, check that the nodes are visible and active:

# kubectl get nodes
NAME         LABELS                              STATUS
kube-node1   kubernetes.io/hostname=kube-node1   Ready
kube-node2   kubernetes.io/hostname=kube-node2   Ready
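This check can be automated. `not_ready` below is an illustrative helper (not from the article) that prints every node whose STATUS column is not Ready; it is fed canned output here, but on a real master you would pipe `kubectl get nodes` into it:

```shell
# not_ready: print the name of every node whose last (STATUS) column is not "Ready".
not_ready() { awk 'NR > 1 && $NF != "Ready" { print $1 }'; }

# Fed sample output here; real use: kubectl get nodes | not_ready
printf '%s\n' \
  'NAME        LABELS                              STATUS' \
  'kube-node1  kubernetes.io/hostname=kube-node1   Ready' \
  'kube-node2  kubernetes.io/hostname=kube-node2   NotReady' \
  | not_ready
# prints: kube-node2
```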

NOTE: if there are some stale nodes listed, you can remove those:

kubectl delete node localhost.localdomain

See also

  • etcd configuration: https://github.com/coreos/etcd/blob/master/Documentation/configuration.md