So far we’ve been working with Kubernetes Pods directly, which has some limitations. For example, it’s not possible to change an environment variable for a container in an existing Pod; you’ll have to delete the Pod and create a new one. And if your Pod crashes, there’s nothing to tell Kubernetes to start a replacement. To help with these, and many other challenges, Kubernetes provides several abstractions over Pods.
Note: all code samples from this section are available on GitHub.
A Deployment in Kubernetes allows you to provide a template specification for a Pod you’d like to be running in your cluster. The deployment specification can optionally include the number of replicas and other options, which we’ll talk a lot more about when we look at scaling.
Adding Nginx
To demonstrate the use of Deployments and to decouple our HTTP and PHP services for WordPress, we’ll be adding the Nginx web server to our cluster. We’ll also use the php-fpm version of our WordPress container image. We’ll adjust our service accordingly, so that our HTTP traffic will flow to Nginx, and when PHP is needed, it will be proxied to a WordPress container over FastCGI.
Our persistent volumes and claims will remain intact, so the storage-class.yml, volumes.yml and volume-claims.yml manifests can be re-used from the previous section. Let’s provision the necessary volumes on our k1 node:
$ ssh k1
$ sudo mkdir -p /data/volumes/wordpress/www-data
$ sudo mkdir -p /data/volumes/wordpress/mariadb
And create our StorageClass, PersistentVolume and PersistentVolumeClaim components:
$ kubectl apply -f storage-class.yml -f volumes.yml -f volume-claims.yml
storageclass.storage.k8s.io/local-storage created
persistentvolume/www-data created
persistentvolume/mariadb created
persistentvolumeclaim/www-data created
persistentvolumeclaim/mariadb created
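As a quick sanity check (an extra step, not strictly required), you can confirm that the claims found their volumes:
$ kubectl get pv,pvc
Depending on the storage class’s volume binding mode, the claims will show up as Bound right away, or remain Pending until their first consumer Pod is scheduled.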
Now let’s create a new nginx.deployment.yml file for our new Deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.27
        ports:
        - containerPort: 80
        volumeMounts:
        - name: www-data
          mountPath: /var/www/html
          readOnly: true
      volumes:
      - name: www-data
        persistentVolumeClaim:
          claimName: www-data
Quite a few things to cover here. Note that the outer spec and metadata attributes are the specification and metadata of our Deployment object, while the inner spec and metadata are for the Pods that our Deployment is going to manage.
The selector attribute tells Kubernetes that this Deployment will manage all Pods which have a label with the key app and the value nginx. The template then defines how this Deployment is going to create our Pods: label them with app=nginx, and assign them the specified containers and volumes attributes. These are very much like our previous examples, except we’re using the nginx:1.27 container image here. We’re also mounting the volume as readOnly, since Nginx will have no reason to write to our www-data volume.
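Since the selector is nothing more than a label query, you can run the same query yourself with kubectl’s -l flag; this is exactly how the Deployment finds the Pods it owns:
$ kubectl get pods -l app=nginx
Any Pod carrying the app=nginx label will match, which is also why it’s important to keep selectors unique across workloads.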
We’ll come back to this deployment in a bit (we’ll need to tell it how to reach our PHP container), but you can deploy it for now and make sure it works:
$ kubectl apply -f nginx.deployment.yml
deployment.apps/nginx created
It may take some time to download and run the Nginx container image. You can then verify that the deployment is up and running, along with the pod specification we’ve given it:
$ kubectl get deploy -o wide
NAME READY UP-TO-DATE AVAILABLE AGE CONTAINERS IMAGES SELECTOR
nginx 1/1 1 1 22s nginx nginx:1.27 app=nginx
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx-7497c66c4c-wh9kr 1/1 Running 0 84s
So how exactly is this different from running a Pod directly? Well, let’s try and delete that running pod:
$ kubectl delete pod nginx-7497c66c4c-wh9kr
pod "nginx-7497c66c4c-wh9kr" deleted
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx-7497c66c4c-br89l 0/1 ContainerCreating 0 0s
Once the Deployment realizes its pod is gone, it will automatically schedule a new one, so we no longer have to worry about that. We can also make changes to the Deployment manifest, and once applied, the Deployment controller will take care of replacing all our existing pods with the updated specification. For example, let’s change the nginx container version to 1.26 and re-apply our deployment:
$ kubectl apply -f nginx.deployment.yml
deployment.apps/nginx configured
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx-5d9cdc46dd-dzql7 1/1 Running 0 3s
nginx-7497c66c4c-br89l 1/1 Terminating 0 29s
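What you’re watching is a rolling update. The kubectl rollout subcommand is handy for following or reverting these; a quick sketch:
$ kubectl rollout status deployment/nginx
$ kubectl rollout undo deployment/nginx
The first command blocks until the rollout completes; the second rolls the Deployment back to its previous revision.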
You can see in the output above that the Deployment created a new Pod for us and is terminating the old one. Let’s add some replicas by adding a replicas attribute to our manifest:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 3
  selector:
    # ...
Apply the updated manifest and inspect the pods:
$ kubectl apply -f nginx.deployment.yml
deployment.apps/nginx configured
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx-5d9cdc46dd-4trnw 0/1 Pending 0 1s
nginx-5d9cdc46dd-8b2tb 0/1 Pending 0 1s
nginx-5d9cdc46dd-dzql7 1/1 Running 0 2m38s
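As a side note, you can also change the replica count imperatively with kubectl scale, which is handy for quick experiments, though the next kubectl apply will reset it to whatever the manifest says:
$ kubectl scale deployment nginx --replicas=3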
As the kubectl get pods output shows, the Deployment instantly brought up two additional Pods. If we look at the -o wide version, we’ll see that all the Nginx pods have been scheduled on the same Kubernetes node. Can you guess why?
$ kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-5d9cdc46dd-4trnw 1/1 Running 0 13s 10.1.134.102 k1 <none> <none>
nginx-5d9cdc46dd-8b2tb 1/1 Running 0 13s 10.1.134.110 k1 <none> <none>
nginx-5d9cdc46dd-dzql7 1/1 Running 0 2m50s 10.1.134.105 k1 <none> <none>
Typically Kubernetes will try to schedule Pods on different nodes for high availability, but since our Pods are tied to a specific PersistentVolume available only on k1, the scheduler doesn’t really have many options. This is not ideal of course, and we’ll look at some solutions to this problem in future sections.
Nginx Service
Now that we’ve added some Nginx Pods to our cluster, we’ll need a way to reach them from outside of the cluster. We’ve already done this in the past for our Apache-based container, and this is not going to be much different. Let’s create an nginx.service.yml file:
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: NodePort
  ports:
  - port: 80
    nodePort: 30007
  selector:
    app: nginx
We’re creating a service with a NodePort of 30007, linked to port 80 of our Nginx pods. Since we now have multiple such pods running, this service will also automatically provide some load balancing for us. Let’s deploy this service to our Kubernetes cluster:
$ kubectl apply -f nginx.service.yml
service/nginx created
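To see exactly which Pods are behind the service, you can list its endpoints; each address should be one of our three Nginx Pod IPs on port 80:
$ kubectl get endpoints nginx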
Open up your browser to any one of your Kubernetes nodes on port 30007 and you should see the “Welcome to nginx” message. We’ll return to this deployment shortly to instruct it how to speak to our PHP service.
A WordPress StatefulSet
A StatefulSet in Kubernetes is another abstraction over Pods, very similar to a Deployment. However, it is built for managing stateful applications, providing sticky identities, predictable naming and more. Since our WordPress application in its current state is very much a stateful application (state being stored in our local volume), the StatefulSet component is a great option to deploy and manage our WordPress container.
Let’s create a wordpress.statefulset.yml manifest file:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: wordpress
spec:
  selector:
    matchLabels:
      app: wordpress
  template:
    metadata:
      labels:
        app: wordpress
    spec:
      containers:
      - name: wordpress
        image: wordpress:6.5-fpm
        ports:
        - containerPort: 9000
        volumeMounts:
        - name: www-data
          mountPath: /var/www/html
      volumes:
      - name: www-data
        persistentVolumeClaim:
          claimName: www-data
Very similar to the Nginx deployment above. The three key differences here (in addition to the naming) are the container image, the containerPort (9000 is the php-fpm FastCGI port number), and the absence of the readOnly attribute on our volume mount, since WordPress will need the ability to write to our www-data volume.
Let’s apply this manifest and make sure our WordPress pod is created successfully:
$ kubectl apply -f wordpress.statefulset.yml
statefulset.apps/wordpress created
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx-5d9cdc46dd-4trnw 1/1 Running 0 18m
nginx-5d9cdc46dd-8b2tb 1/1 Running 0 18m
nginx-5d9cdc46dd-dzql7 1/1 Running 0 20m
wordpress-0 0/1 ContainerCreating 0 4s
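If you’d rather wait until the Pod is fully up before moving on, kubectl rollout status works for StatefulSets too:
$ kubectl rollout status statefulset/wordpress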
Note that this -fpm version of the WordPress container does not contain a web server, and hence no longer speaks HTTP. The only language it speaks is FastCGI over port 9000. This means that we’ll have to instruct our Nginx pods to serve static files from their mounted volume, but to proxy any PHP requests to our WordPress service over the FastCGI port.
To do that we’ll need a Kubernetes Service for our FastCGI service, so let’s create a wordpress.service.yml file:
apiVersion: v1
kind: Service
metadata:
  name: wordpress
spec:
  ports:
  - port: 9000
  selector:
    app: wordpress
The biggest difference between this and the services we’ve created before in our cluster is the lack of a type: NodePort. This is because we only need this FastCGI service to be accessible within the Kubernetes cluster, and never externally. By not defining a service type, we’re using the default service type, which is ClusterIP. This service provides a static IP address, reachable from within the cluster, which forwards to one or more endpoints behind the service, thus also providing load balancing.
Let’s create this service in our Kubernetes cluster:
$ kubectl apply -f wordpress.service.yml
service/wordpress created
$ kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.152.183.1 <none> 443/TCP 7d12h
nginx NodePort 10.152.183.52 <none> 80:30007/TCP 25m
wordpress ClusterIP 10.152.183.155 <none> 9000/TCP 66s
If you inspect the services you’ll see our new wordpress service along with its cluster IP. Note that NodePort services also have a usable static cluster IP, in addition to the port exposed on all nodes.
We don’t really have to memorize this IP address in order to use it, since Kubernetes will automatically provide convenient names that resolve to these IP addresses within the cluster. We can verify that from a running Nginx container, by installing the dnsutils package and trying to resolve the wordpress name:
$ kubectl exec -it nginx-5d9cdc46dd-4trnw -- bash
$ apt update && apt install dnsutils
$ host wordpress
wordpress.default.svc.cluster.local has address 10.152.183.155
Let’s now configure our Nginx containers to speak to our shiny new PHP service.
Configuring Nginx
You’ll find that a lot of configuration in Kubernetes is done through ConfigMaps, and this is a great opportunity to use one for our Nginx pods. A ConfigMap is essentially a key-value store, which allows us to store some arbitrary data and make it available to pods and containers in our cluster.
Let’s create an nginx.configmap.yml file for our Nginx configuration:
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx.conf.d
data:
  wordpress.conf: |
    server {
        listen 80 default_server;
        server_name _;
        root /var/www/html;
        index index.php index.html;

        location / {
            try_files $uri $uri/ /index.php?$args;
        }

        location ~ \.php$ {
            include fastcgi_params;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            fastcgi_pass wordpress:9000;
        }
    }
Our ConfigMap is called nginx.conf.d, which is how we’ll address it in our Deployment. The data section contains our key-value pairs, with wordpress.conf being the only key. As you’ll see later, we’ll mount this ConfigMap into the /etc/nginx/conf.d directory, where the wordpress.conf key will become a file inside that directory that Nginx will read when starting.
We won’t go into too much detail of the contents of the configuration file. It’s a server block that listens on port 80 and is set as the default_server with a bogus server_name, which causes Nginx to use this block as a fallback for pretty much any request that doesn’t have a better server name match. This allows us to reach this block by IP or any mapped hostname, without additional configuration.
We set the root to /var/www/html, which is where we mount the PersistentVolume containing all the WordPress files. We try to serve a file if it exists physically on disk with try_files, and fall back to index.php (this allows for pretty permalinks in WordPress). Finally, we add a location block for all PHP files, and pass them to our FastCGI service using fastcgi_pass. Note that the address of the target FastCGI server is set to wordpress:9000, which Nginx will resolve to our ClusterIP.
After creating this configuration map, we’ll need to make some changes to our Nginx deployment to make use of it, so let’s revisit our nginx.deployment.yml file. First, make sure the volumes section contains our ConfigMap:
volumes:
- name: www-data
  persistentVolumeClaim:
    claimName: www-data
- name: nginx-configs
  configMap:
    name: nginx.conf.d
Then make sure it’s added to our volumeMounts:
volumeMounts:
- name: www-data
  mountPath: /var/www/html
  readOnly: true
- name: nginx-configs
  mountPath: /etc/nginx/conf.d
  readOnly: true
This mount can also be readOnly, as Nginx has no reason to update or create configuration files; in fact, ConfigMaps and Secrets are always mounted read-only in recent versions of Kubernetes. After we’re done with these updates, let’s apply our Nginx ConfigMap and Deployment manifests:
$ kubectl apply -f nginx.configmap.yml -f nginx.deployment.yml
configmap/nginx.conf.d created
deployment.apps/nginx configured
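To confirm the configuration actually landed inside the containers, we can read the file back and ask Nginx to validate it (kubectl exec accepts a resource reference like deploy/nginx and picks one of its Pods for us):
$ kubectl exec deploy/nginx -- cat /etc/nginx/conf.d/wordpress.conf
$ kubectl exec deploy/nginx -- nginx -t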
If we now browse to one of our cluster nodes on port 30007, we’ll no longer see the Nginx welcome message. Instead we’ll see our WordPress install, which will likely prompt for our database credentials. Let’s address that next.
Running a Database
A relational database is almost always a stateful application, making StatefulSet the natural choice. The specification will be very similar to our previously deployed MariaDB container; however, now that we have some experience with ConfigMaps, let’s move those environment variables into a new ConfigMap, defined in a mariadb.configmap.yml file:
apiVersion: v1
kind: ConfigMap
metadata:
  name: mariadb
data:
  database: wordpress
  username: wordpress
  password: secret
This should not require any explanation, and don’t worry, we’ll look at storing the password in a proper Kubernetes Secret in later sections. Now, let’s create our mariadb.statefulset.yml manifest for our MariaDB container:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mariadb
spec:
  selector:
    matchLabels:
      app: mariadb
  template:
    metadata:
      labels:
        app: mariadb
    spec:
      containers:
      - name: mariadb
        image: mariadb:10.11
        ports:
        - containerPort: 3306
        volumeMounts:
        - name: mariadb
          mountPath: /var/lib/mysql
        env:
        - name: MARIADB_DATABASE
          valueFrom:
            configMapKeyRef:
              name: mariadb
              key: database
        - name: MARIADB_USER
          valueFrom:
            configMapKeyRef:
              name: mariadb
              key: username
        - name: MARIADB_PASSWORD
          valueFrom:
            configMapKeyRef:
              name: mariadb
              key: password
        - name: MARIADB_RANDOM_ROOT_PASSWORD
          value: "true"
      volumes:
      - name: mariadb
        persistentVolumeClaim:
          claimName: mariadb
Most of this should be very familiar by now. We’re creating a new StatefulSet which will manage our mariadb Pods, specifying 3306 as the containerPort, and mounting our mariadb persistent volume claim at the /var/lib/mysql mount point. Notice how we’ve replaced the value attributes with valueFrom blocks in the env section. This instructs Kubernetes to read the values from the ConfigMap that we defined earlier.
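As an aside, had we named the ConfigMap keys after the environment variables themselves (MARIADB_DATABASE and so on), we could have injected them all at once with an envFrom block instead of listing each key. A hypothetical sketch, assuming such a ConfigMap:
# assumes a ConfigMap whose keys are already valid environment variable names
envFrom:
- configMapRef:
    name: mariadb
We’re sticking with explicit valueFrom references here, since they let the ConfigMap keys keep friendlier names.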
Let’s now add our MariaDB ConfigMap and StatefulSet to the Kubernetes cluster:
$ kubectl apply -f mariadb.configmap.yml -f mariadb.statefulset.yml
configmap/mariadb created
statefulset.apps/mariadb created
You should now be able to see the MariaDB pod running in the cluster:
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
mariadb-0 1/1 Running 0 10s
nginx-bb84f6f76-jsw24 1/1 Running 0 31m
nginx-bb84f6f76-nltrp 1/1 Running 0 31m
nginx-bb84f6f76-phzth 1/1 Running 0 31m
wordpress-0 1/1 Running 0 68m
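If you’d like to confirm that the database accepts connections with our credentials, recent official MariaDB images ship with the mariadb client binary, so a one-liner will do:
$ kubectl exec -it mariadb-0 -- mariadb -uwordpress -psecret wordpress -e 'SELECT 1'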
Similar to how we made our PHP service available to Nginx, we’ll need to make our MariaDB service available to PHP, with a mariadb.service.yml manifest file:
apiVersion: v1
kind: Service
metadata:
  name: mariadb
spec:
  ports:
  - port: 3306
  selector:
    app: mariadb
Similar to our PHP service, this is another ClusterIP service that’s going to be available internally, under the name mariadb and port 3306. Let’s deploy this manifest to our Kubernetes cluster:
$ kubectl apply -f mariadb.service.yml
service/mariadb created
$ kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.152.183.1 <none> 443/TCP 7d13h
mariadb ClusterIP 10.152.183.65 <none> 3306/TCP 11s
nginx NodePort 10.152.183.52 <none> 80:30007/TCP 84m
wordpress ClusterIP 10.152.183.155 <none> 9000/TCP 59m
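As a quick sanity check before wiring WordPress up, you can verify that the mariadb name resolves from inside the WordPress Pod (getent is available in the Debian-based WordPress images):
$ kubectl exec -it wordpress-0 -- getent hosts mariadb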
We’re getting close now. Let’s tell WordPress about our new MariaDB database endpoint and credentials.
Connect WordPress to MariaDB
You probably already have a pretty good idea of what needs to happen next. We’ll update our wordpress.statefulset.yml file to include our database address and credentials from our ConfigMap:
containers:
- name: wordpress
  image: wordpress:6.5-fpm
  # ...
  env:
  - name: WORDPRESS_DB_HOST
    value: mariadb
  - name: WORDPRESS_DB_USER
    valueFrom:
      configMapKeyRef:
        name: mariadb
        key: username
  - name: WORDPRESS_DB_NAME
    valueFrom:
      configMapKeyRef:
        name: mariadb
        key: database
  - name: WORDPRESS_DB_PASSWORD
    valueFrom:
      configMapKeyRef:
        name: mariadb
        key: password
Note that the WORDPRESS_DB_HOST is no longer set to 127.0.0.1, as we are no longer running MariaDB in the same Pod as the WordPress application. We now need to use the service name mariadb, which will resolve to the ClusterIP address of the MariaDB service.
Apply the updated StatefulSet to the Kubernetes cluster:
$ kubectl apply -f wordpress.statefulset.yml
statefulset.apps/wordpress configured
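Since the Pod gets recreated with the new specification, you can verify that the variables made it into the container (the grep here runs locally against the streamed output):
$ kubectl exec wordpress-0 -- printenv | grep WORDPRESS_DB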
Now navigate your browser to a node endpoint on port 30007 (http://k0:30007 in our example), and run through the WordPress installation process. It will no longer ask for database credentials, since we’ve defined them via environment variables.
Recap & Cleanup
Phew! We’ve covered a lot here! We’ve successfully decoupled our single-pod WordPress application into three individual components: a web service Deployment running Nginx, a database StatefulSet running MariaDB, and a WordPress StatefulSet running php-fpm.
We’ve discovered a way to store values, and even entire configuration files, and share them with our Pods through the use of ConfigMaps. We took a closer look at NodePort services vs. ClusterIP services. We even scaled our Nginx deployment to three pods!
Once you’re done playing around with your WordPress application, you can remove it from the cluster using kubectl delete -f manifest.yml for each manifest file. Alternatively, you can do it for all manifest files in your current directory in one go:
$ kubectl delete -f .
configmap "mariadb" deleted
service "mariadb" deleted
statefulset.apps "mariadb" deleted
configmap "nginx.conf.d" deleted
deployment.apps "nginx" deleted
service "nginx" deleted
storageclass.storage.k8s.io "local-storage" deleted
persistentvolumeclaim "www-data" deleted
persistentvolumeclaim "mariadb" deleted
persistentvolume "www-data" deleted
persistentvolume "mariadb" deleted
service "wordpress" deleted
statefulset.apps "wordpress" deleted
If you’d like to nuke the persistent volume data too, you’ll need to do so via SSH, directly on the node where the volumes were provisioned:
$ ssh k1
$ cd /data/volumes
$ sudo rm -rf www-data mariadb
See you in the next section!