Now that you have some understanding of what Kubernetes is, let’s create a simple cluster. Kubernetes itself is unopinionated, which is why there are so many flavors, with different networking options, different storage plugins, and more. Each managed Kubernetes provider will also have something unique to its backing cloud platform, and these clusters can be provisioned in just a few clicks.
While you’re welcome to use any specific flavor or cloud platform you like (we’ll cover a few options in separate sections), this tutorial is primarily based on a plain vanilla Kubernetes install. This will give you a much better understanding of the Kubernetes ecosystem, and how the different components and interfaces work.
We’ll be creating a three-node cluster with one control plane node and two nodes for our workloads. We recommend at least 4 GB of memory for each node, with access to 4 CPU cores. It’s up to you where you want to provision these, but we wouldn’t recommend doing it locally on a laptop. Any cloud provider will work just fine, and bare metal will work too. As long as the servers can speak to each other and can reach the Internet, you should be okay.
Preparing the Servers
For this cluster we’ll be using Ubuntu 24.04 Linux, though you’re welcome to use your favorite distribution, provided you can find all the necessary packages. We’ll mostly be following the official Kubernetes installation guide, so if you get stuck at any of these steps, feel free to review the original documentation, which is much more thorough and explains some edge cases.
The default Ubuntu configuration adds some swap space for us, so our first step is to remove that, given that Kubernetes doesn’t work (well) with swap. Run all commands that follow as root or with sudo:
$ swapoff -a
$ rm /swap.img
For this change to persist across reboots, we’ll also need to remove (or comment out) the swap line in /etc/fstab.
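If you prefer a one-liner, something like this should work, assuming the default Ubuntu swap file at /swap.img:

$ sed -i '/\/swap.img/ s/^/#/' /etc/fstab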
Next, for networking between pods, containers, and everything else in our cluster, we’ll need to enable IP forwarding. Uncomment or add the following line in your /etc/sysctl.conf file:
net.ipv4.ip_forward = 1
Apply the changes to the running server with:
$ sysctl -p
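You can verify the setting took effect by querying the key directly; it should print the value you set:

$ sysctl net.ipv4.ip_forward
net.ipv4.ip_forward = 1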
We’ll also need a container runtime installed on each Kubernetes node. We’ll be using containerd in our cluster, though other options are also supported. Install containerd on each node and generate a default config:
$ apt update && apt install -y containerd
$ mkdir /etc/containerd
$ containerd config default > /etc/containerd/config.toml
Some features in Kubernetes require cgroup v2, and it’s recommended to use the systemd cgroup driver. To enable that, open the /etc/containerd/config.toml file, find SystemdCgroup and set its value to true. You can do this as a one-liner with sed:
$ sed -i -e "s/SystemdCgroup = false/SystemdCgroup = true/g" /etc/containerd/config.toml
Restart containerd after making changes:
$ systemctl restart containerd
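To confirm the change stuck, you can grep the config file again and check that the service restarted cleanly:

$ grep SystemdCgroup /etc/containerd/config.toml
$ systemctl is-active containerd

The first command should print the SystemdCgroup line with its value set to true, and the second should print active.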
Finally, make sure all your nodes have a unique hostname, MAC address, and product_uuid. This shouldn’t be a problem if you’ve configured each individual node by hand, but if you’ve used a clone/snapshot feature in your hypervisor, you might need to take additional steps to fix these.
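You can compare these values across your nodes with:

$ hostname
$ ip link
$ cat /sys/class/dmi/id/product_uuid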
For convenience, I’ve also added the three hostnames to the /etc/hosts file on each node, as well as on the laptop I’ll be using to work with the cluster. Mine are in a private network, so I’ve used private IPs, but if you’re provisioning your nodes with a cloud provider, you’ll probably be using public ones instead:
10.0.10.100 k0
10.0.10.101 k1
10.0.10.102 k2
Now that the prep work is done, let’s move on to installing some Kubernetes tools.
Installing kubeadm and kubelet
A Kubernetes kubelet is a node agent that runs on each server and does a lot of the heavy lifting. The kubeadm utility allows us to create or join a Kubernetes cluster. We’ll need both of these utilities on every node in our cluster.
For Ubuntu (and most Debian-based distributions) you can use apt to install these:
$ apt install -y apt-transport-https ca-certificates curl gpg
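$ # The keyring directory may not exist on all systems; create it just in case
$ mkdir -p /etc/apt/keyrings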
$ curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.30/deb/Release.key | gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
$ echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.30/deb/ /' > /etc/apt/sources.list.d/kubernetes.list
$ apt update && apt install -y kubelet kubeadm
$ apt-mark hold kubelet kubeadm
Again, these should be installed on every node in the cluster. It is recommended to hold these packages with apt-mark hold (or equivalent), so you can be explicit about your cluster updates.
Next, let’s bootstrap our control plane.
Bootstrapping the Cluster
Pick one of the nodes to be your control plane node. This is where the control plane containers will reside (the Kubernetes API server, etcd, and others), while most other workloads will be scheduled across the two remaining nodes. On the control plane node, bootstrap your cluster using:
$ kubeadm init
You’ll see a lot of things happening here, and it might take a few minutes to complete. kubeadm runs some preflight checks and may abort the installation. The error messages are usually descriptive enough to understand what needs to be fixed (for example, if swap is detected).
Note that if you have multiple network interfaces and/or your node is connected to different networks, you might need to customize the IP addresses that Kubernetes uses for its API server, pods, and services, as well as the address the kubelet binds to. This should also be done if you’re using any part of the 10.0.0.0/8 network, since Kubernetes (and many of its networking addons) will use it by default, potentially conflicting with your existing networks.
The network CIDRs and API server address can be assigned with a few options to kubeadm. Here’s an example we’re using for our cluster:
$ kubeadm init \
--pod-network-cidr=10.10.0.0/16 \
--service-cidr=10.100.0.0/16 \
--apiserver-advertise-address=10.0.10.100
Where 10.0.10.100 is the desired (and reachable) IP address of the node, 10.10.0.0/16 is the space Kubernetes will assign pod IPs from, and 10.100.0.0/16 is the space for services, including core Kubernetes ones, like DNS.
If you’ve made a mistake with kubeadm init, don’t worry: you can easily kubeadm reset and try again!
To configure a node IP address for the kubelet, after kubeadm does its thing, open /var/lib/kubelet/kubeadm-flags.env, add --node-ip=10.0.10.100 to the list of args, and then restart the kubelet with:
$ systemctl restart kubelet
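For reference, the edited file will look something like this (the exact set of flags varies from system to system, so just append the --node-ip argument to whatever is already there):

KUBELET_KUBEADM_ARGS="--container-runtime-endpoint=unix:///run/containerd/containerd.sock --node-ip=10.0.10.100"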
After a successful init, you’ll be presented with a kubeadm join command for your other nodes, as well as a configuration file for kubectl to access the cluster. Run the kubeadm join command on the two other nodes in your cluster.
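The join command will look something like the following, with a token and CA certificate hash unique to your cluster:

$ kubeadm join 10.0.10.100:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash>

If you ever misplace it, you can print a fresh join command on the control plane node with kubeadm token create --print-join-command.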
Accessing the Cluster
Next, install kubectl on the computer which you’ll use to access the cluster externally. We’ll call this the management host going forward. Kubectl is available for Linux, Windows, and macOS. It’s the command-line utility you’ll use most to interact with your Kubernetes cluster.
After installing kubectl, copy the contents of the configuration file generated on your control plane node (/etc/kubernetes/admin.conf) into your local configuration file, typically under ~/.kube/config (there are other options too).
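One way to do this, assuming you have root SSH access to the control plane node (k0 in our case) and no existing kubectl configuration you’d be overwriting:

$ mkdir -p ~/.kube
$ scp root@k0:/etc/kubernetes/admin.conf ~/.kube/config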
Check your cluster nodes with:
$ kubectl get nodes
NAME   STATUS     ROLES           AGE     VERSION
k0     NotReady   control-plane   69m     v1.30.2
k1     NotReady   <none>          2m39s   v1.30.2
k2     NotReady   <none>          29s     v1.30.2
You should see a list of the three nodes you’ve added to the cluster. If you’re using more than one network interface, use kubectl get nodes -o wide to get more details and confirm the correct IP address is displayed for each node.
You’ll also note that the status of each node is NotReady. That’s because we have not configured networking in our cluster. Let’s do that next.
Adding a Networking Plugin
Kubernetes is unopinionated about networking too, which is why a vanilla Kubernetes cluster ships with no networking installed by default, and why there are a dozen or so networking plugins/addons to choose from.
We’ll be using one called Cilium, following its official installation instructions. Once you’ve installed the Cilium CLI, you can use the utility to install Cilium into your Kubernetes cluster.
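For reference, here’s roughly what installing the CLI on a Linux amd64 host looked like at the time of writing; check the Cilium documentation for the current commands, as these may change:

$ CILIUM_CLI_VERSION=$(curl -s https://raw.githubusercontent.com/cilium/cilium-cli/main/stable.txt)
$ curl -L --fail --remote-name-all https://github.com/cilium/cilium-cli/releases/download/${CILIUM_CLI_VERSION}/cilium-linux-amd64.tar.gz
$ tar xzvfC cilium-linux-amd64.tar.gz /usr/local/bin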
Note that, by default, Cilium will use the 10.0.0.0/8 network space, and if you already have anything running in that network, you may run into some trouble. Our cluster configuration uses both 10.0.10.0/24 and 10.0.2.0/24 for node IPs, so we’ll need to tell Cilium to use a different CIDR to avoid the overlap.
The Cilium CLI will also need to know the configuration of your Kubernetes cluster, via the KUBECONFIG env var, to speak to your cluster. You can set this to the /etc/kubernetes/admin.conf generated when bootstrapping the cluster on the control plane, before running cilium install:
$ export KUBECONFIG=/etc/kubernetes/admin.conf
$ cilium install \
--set ipam.operator.clusterPoolIPv4PodCIDRList=10.10.0.0/16
Note that if you don’t run anything on the 10.0.0.0/8 network, and are using default Kubernetes network configurations, you can omit the extra --set ... argument.
It’ll take a few minutes to set up the networking plugin. Once successful, you should see some new pods running, and an overall OK status for Cilium:
$ cilium status
    /¯¯\
 /¯¯\__/¯¯\    Cilium:             OK
 \__/¯¯\__/    Operator:           OK
 /¯¯\__/¯¯\    Envoy DaemonSet:    disabled (using embedded mode)
 \__/¯¯\__/    Hubble Relay:       disabled
    \__/       ClusterMesh:        disabled

Deployment        cilium-operator    Desired: 1, Ready: 1/1, Available: 1/1
DaemonSet         cilium             Desired: 3, Ready: 3/3, Available: 3/3
Containers:       cilium             Running: 3
                  cilium-operator    Running: 1
Let’s take a look at our nodes from outside the cluster via kubectl:
$ kubectl get nodes -o wide
NAME   STATUS   ROLES           AGE     VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE           KERNEL-VERSION     CONTAINER-RUNTIME
k0     Ready    control-plane   9m29s   v1.30.2   10.0.10.100   <none>        Ubuntu 24.04 LTS   6.8.0-35-generic   containerd://1.7.12
k1     Ready    <none>          8m50s   v1.30.2   10.0.10.101   <none>        Ubuntu 24.04 LTS   6.8.0-35-generic   containerd://1.7.12
k2     Ready    <none>          5m40s   v1.30.2   10.0.10.102   <none>        Ubuntu 24.04 LTS   6.8.0-35-generic   containerd://1.7.12
If everything is going according to plan, the node statuses should now display Ready. The IP addresses should be correct, and you should see a bunch of pods running in the kube-system namespace as well, with correct IP addresses too:
$ kubectl -n kube-system get pods -o wide
NAME                               READY   STATUS    RESTARTS      AGE     IP            NODE   NOMINATED NODE   READINESS GATES
cilium-9qdqk                       1/1     Running   0             17m     10.0.10.101   k1     <none>           <none>
cilium-f6mvg                       1/1     Running   0             16m     10.0.10.102   k2     <none>           <none>
cilium-operator-6df6cdb59b-8pq8f   1/1     Running   1 (58s ago)   17m     10.0.10.101   k1     <none>           <none>
cilium-zvncw                       1/1     Running   1 (61s ago)   17m     10.0.10.100   k0     <none>           <none>
coredns-7db6d8ff4d-t24p7           1/1     Running   1 (61s ago)   19m     10.10.0.116   k0     <none>           <none>
coredns-7db6d8ff4d-zm665           1/1     Running   1 (61s ago)   19m     10.10.0.82    k0     <none>           <none>
etcd-k0                            1/1     Running   5 (61s ago)   20m     10.0.10.100   k0     <none>           <none>
kube-apiserver-k0                  1/1     Running   1 (61s ago)   19m     10.0.10.100   k0     <none>           <none>
kube-controller-manager-k0         1/1     Running   1 (61s ago)   19m     10.0.10.100   k0     <none>           <none>
kube-proxy-ktgdj                   1/1     Running   0             16m     10.0.10.102   k2     <none>           <none>
kube-proxy-pksq2                   1/1     Running   0             19m     10.0.10.101   k1     <none>           <none>
kube-proxy-rj2gz                   1/1     Running   1 (61s ago)   19m     10.0.10.100   k0     <none>           <none>
kube-scheduler-k0                  1/1     Running   7 (61s ago)   8m35s   10.0.10.100   k0     <none>           <none>
Finally, core services should also be running on correct IP addresses:
$ kubectl get svc -A
NAMESPACE     NAME          TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)                  AGE
default       kubernetes    ClusterIP   10.100.0.1     <none>        443/TCP                  21m
kube-system   hubble-peer   ClusterIP   10.100.38.57   <none>        443/TCP                  18m
kube-system   kube-dns      ClusterIP   10.100.0.10    <none>        53/UDP,53/TCP,9153/TCP   21m
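As an optional smoke test, you can check cluster DNS from a throwaway pod; the pod name and busybox image here are arbitrary choices:

$ kubectl run dnstest --rm -it --image=busybox:1.36 --restart=Never -- nslookup kubernetes.default.svc.cluster.local

If DNS is healthy, the lookup should be answered by the kube-dns service address (10.100.0.10 in our cluster).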
Recap
If you’ve made it this far, congratulations! Don’t worry if it took you a few retries and a dozen reboots along the way.
In this section we covered getting a Kubernetes cluster up and running using the official kubeadm utility. We prepared our systems and installed all the necessary software on all nodes. We bootstrapped the cluster on our first node, and got hold of the main configuration file for kubectl, as well as a join token for other nodes. We joined two additional nodes into the cluster using the join token.
We configured kubectl to access the Kubernetes API server from outside the cluster. We then added a networking plugin called Cilium using its CLI utility. Finally, we looked at our nodes, pods, and services, to make sure everything is running and assigned correct IP addresses.
Now that we have a cluster up and running, let’s use it to run some WordPress applications, shall we? Head over to the next section to run your first WordPress pod.