Creating High Available Baremetal Kubernetes cluster with Kubeadm and Keepalived (More Simple Guide)

Andrei Kvapil
2 min read · Dec 19, 2018

This guide is an updated version of my previous article, Creating High Available Baremetal Kubernetes cluster with Kubeadm and Keepalived (Simple Guide).
Since v1.13, deployment has become much easier and more logical. Note that this article is my personal interpretation of the official guide Creating Highly Available Clusters with kubeadm (stacked control plane nodes), plus a few extra steps for Keepalived.

If you have any questions, or something is not clear, please refer to the official documentation or ask Google. All steps are described here in a short and simple form.

Input data

We have a 3-node cluster:

  • node1 (10.9.8.11)
  • node2 (10.9.8.12)
  • node3 (10.9.8.13)

We will create one fault-tolerant cluster IP for them:

  • 10.9.8.10

Then we will install etcd and a Kubernetes cluster on them.

LoadBalancer setup

First, we need to install keepalived on all three nodes:

apt-get -y install keepalived

Then write the config /etc/keepalived/keepalived.conf:

vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    virtual_router_id 1
    priority 100
    advert_int 1
    nopreempt
    authentication {
        auth_type AH
        auth_pass iech6peeBu6Thoo8xaih
    }
    virtual_ipaddress {
        10.9.8.10
    }
}

Enable and start keepalived on all three nodes:

systemctl start keepalived
systemctl enable keepalived

Now we can check that one of the nodes has the 10.9.8.10 address on its eth0 interface.
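A quick way to check (a sketch; it assumes the eth0 interface name from the config above):

```shell
# Run on each node: only the node currently holding the VIP will match.
ip -4 addr show dev eth0 | grep -q "10.9.8.10" \
  && echo "this node holds the VIP" \
  || echo "VIP is elsewhere"
```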

Deploying Kubernetes cluster

Make sure that you have the latest Kubernetes packages installed on all nodes:

apt-get -y install kubeadm kubelet kubectl

Also stop the keepalived daemon on all nodes except the first one:

systemctl stop keepalived

First node

(make sure that the load balancer IP is currently assigned to this node)

Now we will write kubeadm-config.yaml:

apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
kubernetesVersion: stable
apiServer:
  certSANs:
  - "10.9.8.10"
networking:
  podSubnet: 192.168.0.0/16
controlPlaneEndpoint: "10.9.8.10:6443"

Init the cluster on the first node:

kubeadm init --config=kubeadm-config.yaml
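When init finishes, you can make kubectl talk to the new cluster. This is the standard step that kubeadm init prints at the end (paths are the kubeadm defaults):

```shell
# Copy the admin kubeconfig so kubectl works for your user.
mkdir -p "$HOME/.kube"
cp /etc/kubernetes/admin.conf "$HOME/.kube/config"
chown "$(id -u):$(id -g)" "$HOME/.kube/config"
```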

Copy the generated certs and kubeadm configs to the other control plane nodes:

NODES="node2 node3"
CERTS=$(find /etc/kubernetes/pki/ -maxdepth 1 -name '*ca.*' -o -name '*sa.*')
ETCD_CERTS=$(find /etc/kubernetes/pki/etcd/ -maxdepth 1 -name '*ca.*')
for NODE in $NODES; do
    ssh $NODE mkdir -p /etc/kubernetes/pki/etcd
    scp $CERTS $NODE:/etc/kubernetes/pki/
    scp $ETCD_CERTS $NODE:/etc/kubernetes/pki/etcd/
    scp /etc/kubernetes/admin.conf $NODE:/etc/kubernetes
done
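To sanity-check that everything landed, you can verify the expected files on each control plane node (a sketch; the list matches the files that the find patterns above select from the default /etc/kubernetes/pki path):

```shell
# Certs and keys that must be shared across control plane nodes.
for f in ca.crt ca.key sa.key sa.pub \
         front-proxy-ca.crt front-proxy-ca.key etcd/ca.crt etcd/ca.key; do
  if [ -f "/etc/kubernetes/pki/$f" ]; then
    echo "ok: $f"
  else
    echo "missing: $f"
  fi
done
```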

Second and Third nodes

Run kubeadm join on each of these nodes, using the join command that kubeadm init printed on the first node. It should look something like this:

kubeadm join 10.9.8.10:6443 --token j04n3m.octy8zely83cy2ts --discovery-token-ca-cert-hash sha256:84938d2a22203a8e56a787ec0c6ddad7bc7dbd52ebabc62fd5f4dbea72b14d1f --experimental-control-plane

Notice the addition of the --experimental-control-plane flag. This flag tells kubeadm to join the node as a control plane node rather than a regular worker.

And start the keepalived daemon as well:

systemctl start keepalived
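Once all three nodes have joined, you can verify the control plane from any of them, using the admin.conf copied earlier. Note that the nodes only become Ready after you install a pod network add-on whose CIDR matches podSubnet (for example Calico, which uses 192.168.0.0/16 by default):

```shell
export KUBECONFIG=/etc/kubernetes/admin.conf
# All three nodes should be listed; kube-system should show one etcd
# and one kube-apiserver pod per control plane node.
kubectl get nodes
kubectl get pods -n kube-system
```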

UPD: You can make this setup more redundant by simply adding haproxy to it; read more about this in my next article.
