Setting up self-hosted Kubernetes
In this post, I'll go through the process of setting up a Kubernetes cluster with one or more machines.
I'm using Alpine Linux to keep the underlying operating system minimal, so that as much of the cluster as possible can run inside Kubernetes itself. For the Kubernetes cluster I'm using K3S, as it's a good out-of-the-box solution that supports Alpine Linux and allows for clusters with multiple machines. And to connect the machines in the cluster together, I'm using ZeroTier, as it allows the machines to discover and communicate with each other even when they're behind firewalls on different networks.
This article is the first in a series:
- Setting up self-hosted Kubernetes
- Kubernetes HTTPS ingress with ingress-nginx and cert-manager
Installing Alpine Linux
The first step is to install Alpine Linux. I chose to use the "extended" distribution of Alpine. I will not be including specific instructions for installation in this guide.
After installing Alpine, we need to enable the non-default repositories to download some of the programs we'll need. To do this, we just uncomment and tag the repositories in /etc/apk/repositories, like this:
#/media/usb/apks
http://mirror.operationtulip.com/alpine/v3.11/main
@community http://mirror.operationtulip.com/alpine/v3.11/community
@edge http://mirror.operationtulip.com/alpine/edge/main
@edgecommunity http://mirror.operationtulip.com/alpine/edge/community
@testing http://mirror.operationtulip.com/alpine/edge/testing
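After saving the file, we refresh the package index so the newly tagged repositories become available:
# refresh the package index to pick up the tagged repositories
apk update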
Installing ZeroTier
We will be using ZeroTier to connect together all of the machines in the Kubernetes cluster on a single virtual private network. If you're only intending to run on a single machine, or on machines within the same local network, you can skip this step entirely.
First, we need to enable the tun kernel module, which will be used to create a virtual network interface:
# set up tun kernel module
modprobe tun
echo "tun" >> /etc/modules-load.d/tun.conf
After that, we can install zerotier-one and configure it to run at system boot:
# install zerotier-one service
apk add zerotier-one@edgecommunity
rc-update add zerotier-one
rc-service zerotier-one start
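Before joining a network, you can check that the service is up with zerotier-cli info, which should report the node ID and a status of ONLINE:
# check that the ZeroTier node is up and online
zerotier-cli info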
To join a network, we create an account at https://my.zerotier.com/ and create a network.
We can run zerotier-cli join to join the network:
# join network
zerotier-cli join '<NETWORK-ID>'
Then, in the ZeroTier web UI, we need to approve the new machine to have it join the network.
We can run zerotier-cli listnetworks to show the network status, which we expect to show OK PRIVATE if the machine has successfully joined the network:
200 listnetworks <nwid> <name> <mac> <status> <type> <dev> <ZT assigned ips>
200 listnetworks 1234567890abcdef my_network 12:34:56:78:90:ab OK PRIVATE ztbto2cgb6 192.168.123.45/24
We can see that the IP address of this machine on the network is 192.168.123.45.
Preparing Alpine for K3S
Before we install K3S, we need to install curl, which we'll use to download the installer:
apk add curl
We also need to ensure that the cgroups service is started, and starts on boot automatically, otherwise we won't be able to run containers:
# add cgroups
rc-update add cgroups
rc-service cgroups start
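To confirm that the service came up correctly, you can check its status and verify that the cgroup filesystems are mounted:
# check the cgroups service and mounts
rc-service cgroups status
mount | grep cgroup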
Installing K3S
After having run all of the above steps on each machine we intend to add to our Kubernetes cluster, we're now ready to actually install K3S and set up our cluster.
Each instance of Kubernetes that is connected to the cluster is called a "node". There will typically be one node per machine that is connected. One of the nodes is called the "master" node, while the rest are called "worker" nodes.
On the master node
On the machine that we intend to be the master node, we download and run the K3S installer like this:
curl -sLf 'https://get.k3s.io/' | \
INSTALL_K3S_EXEC='--disable traefik --disable servicelb' \
K3S_NODE_NAME='master' \
sh -
We give the installer script these options:
- INSTALL_K3S_EXEC: Options that will be given to K3S when starting up. We tell K3S to disable the built-in ingress and load balancer, as we'll be deploying our own in the next guide in this series.
- K3S_NODE_NAME: The name of this node.
Once the installation script has succeeded, it should only take a short while before the Kubernetes cluster is up and running. You can run the kubectl get nodes command to see if the cluster is ready:
NAME STATUS ROLES AGE VERSION
master Ready master 5m v1.18.3+k3s1
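If the node doesn't become Ready, a good first step is to check the K3S service that the installer registers with OpenRC on Alpine, and its log (the log path below is the default for the OpenRC install and may differ in other setups):
# check that the k3s service is running
rc-service k3s status
# inspect the K3S log for errors
tail -n 50 /var/log/k3s.log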
Before we continue on to the worker nodes, we need to note down the following:
- The IP address of this machine on the cluster network. Since we're using ZeroTier, this is the IP address that appears when we run zerotier-cli listnetworks on this machine.
- The node token, found in the file /var/lib/rancher/k3s/server/node-token. This token is needed by the worker nodes to be able to join the cluster (see the example below).
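Both values can be read directly on the master node:
# the ZeroTier IP address of this machine
zerotier-cli listnetworks
# the node token that the worker nodes will use to join the cluster
cat /var/lib/rancher/k3s/server/node-token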
Using kubectl
By default, you can only run kubectl on the master node to control your Kubernetes cluster. However, it's often more convenient to use kubectl on your local machine instead to remotely control your cluster.
To do this, do the following:
- Install kubectl on your local machine.
- Copy the file /etc/rancher/k3s/k3s.yaml from your master node onto your local machine, and store it at the location ~/.kube/config (if you're running Linux or macOS).
- Modify the IP address in the server field of the file to point to your master node (see the sketch below).
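As a rough sketch of the last two steps, assuming you have SSH access to the master node as root and that the kubeconfig still points at the default server address of 127.0.0.1:
# copy the kubeconfig from the master node to your local machine
scp root@<MASTER-IP>:/etc/rancher/k3s/k3s.yaml ~/.kube/config
# point the kubeconfig at the master node instead of 127.0.0.1
# (on macOS, use: sed -i '' 's/127.0.0.1/<MASTER-IP>/' ~/.kube/config)
sed -i 's/127.0.0.1/<MASTER-IP>/' ~/.kube/config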
Now you should be able to run kubectl commands remotely to control your cluster. Test it out by again running kubectl get nodes, which should give you the same output as running it on the master node.
On the worker nodes
On each of the worker nodes, we run the same K3S installer, but with a few different options:
curl -sLf 'https://get.k3s.io/' | \
K3S_URL='https://<MASTER-IP>:6443' \
K3S_TOKEN='<NODE-TOKEN>' \
K3S_NODE_NAME='worker-1' \
sh -
We give the installer script these options:
- K3S_URL: The URL of the Kubernetes master node. <MASTER-IP> is the IP address of the master node in the cluster network.
- K3S_TOKEN: The node token found on the master node.
- K3S_NODE_NAME: The name of this node.
Run kubectl get nodes on your master node (or your local machine, if you set up remote access as described above), and you should see your new worker node added to the cluster:
NAME STATUS ROLES AGE VERSION
master Ready master 10m v1.18.3+k3s1
worker-1 Ready <none> 5m v1.18.3+k3s1
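If the worker doesn't appear, you can check the agent service on the worker node itself; with the OpenRC install the service is typically named k3s-agent when K3S_URL is set (adjust the name if your setup differs):
# check the K3S agent service on the worker node (service name assumed to be k3s-agent)
rc-service k3s-agent status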
What's next?
In the next guide in this series, we'll be setting up a load balancer and ingress that will allow us to host applications that will be exposed outside of our cluster. You can find it here: Kubernetes HTTPS ingress with ingress-nginx and cert-manager.
Updated 2021-03-13: Added link to follow-up post. Fixed typo in installation command for K3S worker nodes. Removed apk add cgroups command as it seems to no longer be needed with the latest version of Alpine Linux.