yavarin.tech

You don’t have to be in the IT industry for long to know that redundancy plays a key role in almost every environment, and a Kubernetes cluster setup is no exception. Sometimes you are required to create and maintain a K8s cluster on-premises or on a cloud provider (we are not talking about managed services here). As a responsible engineer, you think about HA and redundancy in your design, so you start by adding multiple control plane nodes, also known as master nodes. But how should you set up the cluster? What is the control plane endpoint? No one likes to waste resources, so how can you balance the load between your master nodes? While Kubernetes makes it possible to have multiple master nodes, it does not provide a way to manage the HA itself. The solution is to use a load balancer.


Here we are going to see how to use the Nginx reverse proxy feature to load balance calls to the K8s API server. Let’s start with how you should configure the Nginx server. The first step is to install the Nginx web server on the machine that is going to act as the load balancer in our environment.

#on debian based machines
sudo apt install nginx

#on redhat based machines
sudo yum install epel-release 
sudo yum install nginx

Then we need to create the configuration that tells Nginx to forward all requests pointed at our cluster to the master nodes. Create a directory under the /etc/nginx/ directory and name it tcpconfig.d .

cd /etc/nginx/
sudo mkdir tcpconfig.d

In here we are going to create an Nginx config file like the one below. I named it tcp.conf .

stream {
    upstream mycluster {
        server 192.168.1.20:6443;
        server 192.168.1.21:6443;
        server 192.168.1.22:6443;
    }
    server {
        listen 6443;
        listen 443;
        proxy_pass mycluster;
    }
}

This configuration has two parts. The upstream block holds the list of your master nodes’ IP addresses and ports. The server block tells Nginx which ports to listen on and what to do with the received packets; in this case it proxies all packets received on ports 6443 and 443 to the mycluster upstream. After saving the changes in the file, we need to add the file to the Nginx configs. Use the command below:

echo 'include /etc/nginx/tcpconfig.d/*;' | sudo tee -a /etc/nginx/nginx.conf

Then you need to restart the Nginx service for the changes to take effect.
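On a machine using systemd, the restart could look like this; running nginx -t first validates the configuration so a typo doesn’t take the load balancer down:

```shell
# Validate the configuration syntax before applying it
sudo nginx -t

# Restart Nginx so the new stream configuration is loaded
sudo systemctl restart nginx
```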

Now for the Kubernetes cluster configs


By the end of this section we are going to achieve a setup like the one shown in the picture above. All traffic to the API server endpoint from different clients, including admins and worker nodes, goes through the Nginx load balancer.

While initializing a K8s cluster using the kubeadm init command, you have two options for determining the IP address or the name of the API server endpoint. The first and simplest one is --apiserver-advertise-address. This switch takes only IP addresses, and if not provided it assumes the IP address of the interface with the default gateway configured on it. We are not going to use this option though, since we don’t want only one of the nodes to be reachable by clients or worker nodes. The other option is the --control-plane-endpoint switch, which accepts both IP addresses and FQDNs. It defines an address, not necessarily assigned to one of the nodes, that serves as the endpoint for the API server. So we pass --control-plane-endpoint=<loadbalancer IP/FQDN> when initializing the cluster and put in the name or IP address of the load balancer that we configured. When installing multiple master nodes aiming for load balancing and HA, we also need to make sure that all nodes get valid certificates, so that clients can authenticate them as valid API endpoints of the cluster. To do so we need to pass another option as well.
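As a sketch, assuming the load balancer from the previous section is reachable at lb.k8s.local (a hypothetical name, substitute your own) and Nginx is listening on port 6443, the init command could look like this:

```shell
# lb.k8s.local is a placeholder for the FQDN or IP of your Nginx load balancer
sudo kubeadm init --control-plane-endpoint="lb.k8s.local:6443"
```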

--apiserver-cert-extra-sans=

This option adds extra names to the certificate of the cluster’s API server. You need to provide the name or IP address of your load balancer, plus the name and IP address of each master node. You can also add a wildcard FQDN if that suits your needs.
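Putting the two switches together, a full init command might look like the sketch below. The master node IPs are the ones from the Nginx upstream earlier; lb.k8s.local and the master1–master3 hostnames are example placeholders to replace with your own:

```shell
# Placeholders: lb.k8s.local is the load balancer; master1..master3 and the
# 192.168.1.2x addresses are the control plane nodes from the Nginx upstream
sudo kubeadm init \
  --control-plane-endpoint="lb.k8s.local:6443" \
  --apiserver-cert-extra-sans="lb.k8s.local,192.168.1.20,192.168.1.21,192.168.1.22,master1,master2,master3"
```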

Conclusion and disclaimer!

This is only one of many ways to increase the redundancy and resiliency of your cluster and achieve more uptime. Although there are other aspects worth a look, balancing the workload and making the master nodes highly available is one of the most important ones.

In this post I glossed over many things, including the security configuration of Nginx. Please be advised that publishing the endpoint address of your cluster to the Internet is not always a good idea. You also might want to add some URL checking on the path through the load balancer. There are many sources on the Internet talking about how to secure an Nginx reverse proxy.