Demos for various topics of the LFS458 (Kubernetes Administration) and LFD459 (Kubernetes for App Developers) courses by the Linux Foundation.
This demo showcases the usage of an emptyDir volume in one or more containers.
The files in volumes/ are intended to be edited incrementally
to showcase the required configurations:
- Start with the volume-less pod as your base
- Add an emptyDir volume to this base and mount it: volumes/1-emptydir.yaml
- Apply the emptyDir pod to the cluster
- Exec into the pod using kubectl exec -it emptydir-demo -- sh and showcase the volume under /scratch
- Add a second container that mounts the emptyDir volume: volumes/2-shared.yaml
- Apply the shared-volume pod to the cluster
- Exec into the first container using kubectl exec -it shared-volumes-demo -c busybox -- sh and create a file under /scratch
- Exec into the second container using kubectl exec -it shared-volumes-demo -c otherbox -- sh and show that the file is available under /other
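For reference, the finished volumes/2-shared.yaml could look roughly like the following sketch; only the pod/container names and mount paths are taken from the commands above, the busybox image and the sleep command are assumptions.

```yaml
# Sketch of volumes/2-shared.yaml: two containers sharing one emptyDir volume.
# The busybox image and the sleep command are assumptions.
apiVersion: v1
kind: Pod
metadata:
  name: shared-volumes-demo
spec:
  volumes:
  - name: scratch-volume
    emptyDir: {}           # node-local scratch space, removed together with the Pod
  containers:
  - name: busybox
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: scratch-volume
      mountPath: /scratch  # first container sees the shared data here
  - name: otherbox
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: scratch-volume
      mountPath: /other    # second container sees the same data here
```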
This demo showcases how to use kubectl create to create a ConfigMap and how to make ConfigMap data available as environment variables and volumes.
To run the demo, showcase the creation of a config file and the kubectl create command as done in configmaps/setup.sh.
Then incrementally edit the base-pod from the volumes demo, adding a volume, volumeMount, and env section as demonstrated in configmaps/configmap-pod.yaml.
Then exec into the container using kubectl exec -it configmap-demo -- sh and show the mounted config files in /config and the MY_ENV_VAR environment variable.
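A possible shape for configmaps/configmap-pod.yaml is sketched below; the ConfigMap name demo-config and the key my-key are assumptions and need to match whatever configmaps/setup.sh actually creates.

```yaml
# Sketch of configmaps/configmap-pod.yaml; "demo-config" and "my-key" are assumed
# names that must match the ConfigMap created by configmaps/setup.sh.
apiVersion: v1
kind: Pod
metadata:
  name: configmap-demo
spec:
  volumes:
  - name: config-volume
    configMap:
      name: demo-config         # every key of the ConfigMap becomes a file
  containers:
  - name: busybox
    image: busybox
    command: ["sleep", "3600"]
    env:
    - name: MY_ENV_VAR
      valueFrom:
        configMapKeyRef:
          name: demo-config
          key: my-key           # a single key exposed as an environment variable
    volumeMounts:
    - name: config-volume
      mountPath: /config        # the files show up here inside the container
```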
This demo shows the rolling upgrade behavior of a deployment.
Before the demo, apply the demo deployment using kubectl apply -f rolling-upgrade/deployment.yaml.
To showcase the upgrade behavior, update the deployment, e.g. by running:
kubectl set image deployment deployment-demo busybox=busybox:latest
Then showcase the state change of the objects in the cluster using:
watch -n 0.1 kubectl get po,rs -l app=deployment-demo
Alternatively, you can show the events using e.g. kubectl get events -w.
There are also two alternative deployment manifests in the rolling-upgrade folder for showcasing the behavior when only setting maxSurge or maxUnavailable.
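The interesting part of these manifests is the update strategy; a sketch with assumed replica count, image, and surge/unavailable values:

```yaml
# Sketch of rolling-upgrade/deployment.yaml; the replica count, image tag and
# the concrete surge/unavailable values are assumptions.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment-demo
spec:
  replicas: 5
  selector:
    matchLabels:
      app: deployment-demo
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1           # at most one Pod above the desired replica count
      maxUnavailable: 1     # at most one Pod may be unavailable during the rollout
  template:
    metadata:
      labels:
        app: deployment-demo
    spec:
      containers:
      - name: busybox
        image: busybox:1.36
        command: ["sleep", "3600"]
```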
This demo showcases how a headless Service differs from a normal ClusterIP Service.
Apply the manifests in headless-service/ using:
kubectl apply -f headless-service
Showcase DNS resolution of the headless service using:
kubectl exec -it client -- dig +search nginx-headless
Compare this with the output of kubectl get endpoints nginx-headless and with the resolution of the ClusterIP Service kubernetes using:
kubectl exec -it client -- dig +search kubernetes
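The only structural difference of the headless Service is clusterIP: None; a sketch (the nginx selector label and port are assumptions):

```yaml
# Sketch of the headless Service in headless-service/; the selector label
# and the port are assumptions.
apiVersion: v1
kind: Service
metadata:
  name: nginx-headless
spec:
  clusterIP: None       # headless: DNS returns the individual Pod IPs
  selector:
    app: nginx
  ports:
  - port: 80
```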
This demo showcases various broken pods/deployments and how to investigate their different errors.
Create a namespace and set your current kube-context to it using ./troubleshooting/setup.sh.
You can now apply the YAML files to create the various broken pods/deployments using kubectl apply -f troubleshooting/.
The following table explains what is wrong with them and how to showcase the issue:
Resource | Issue | Command for Observing the Error |
---|---|---|
deployment/memory-constraints | No pods appear for the deployment, because the Pod's memory limit of 800Mi is over the LimitRange's maximum of 500Mi | kubectl describe rs -l app=memory-constraints -> events |
pod/busybox | The executed script in the pod's container "crashes" repeatedly. The logs show a "useful error message" | kubectl get pod busybox -> State, kubectl logs busybox -> error message |
pod/hungry | The pod is stuck in Pending state, since its requests of 10 cpu and 5Ti memory don't fit on the lab nodes. | kubectl describe pod hungry -> events, resource.requests |
pod/nginx | The pod's image nginx:1.18.7 doesn't exist, causing it to go into ImagePullBackOff | kubectl describe pod nginx -> events, image |
pod/sleeper | The pod's nodeSelector test=missing doesn't fit any of the lab nodes, causing it to be stuck in Pending | kubectl get pod sleeper -o yaml -> nodeSelector, conditions |
pod/root | The container has runAsNonRoot set to true but the image runs as root. | kubectl describe pod root -> events, securityContext |
pod/writer | The container has readOnlyRootFilesystem set to true, but the script attempts to create a file. | kubectl logs writer -> error message |
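For reference, the LimitRange behind the memory-constraints failure presumably looks something like the sketch below; only the 500Mi maximum is taken from the table, the limit type and default values are assumptions.

```yaml
# Sketch of a LimitRange that would cause the memory-constraints failure:
# the Pod's 800Mi limit exceeds the 500Mi maximum. Type and defaults are assumptions.
apiVersion: v1
kind: LimitRange
metadata:
  name: memory-limit-range
spec:
  limits:
  - type: Container
    max:
      memory: 500Mi
    default:
      memory: 256Mi
    defaultRequest:
      memory: 128Mi
```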
After finishing the demo, reset the state using ./troubleshooting/teardown.sh.
This demo showcases the behavior of pods with pod anti-affinities.
First, ensure your nodes are labelled with the standard Kubernetes topology labels such that all nodes have the same region but different zones, e.g. by running:
kubectl label nodes topology.kubernetes.io/region=europe --all
for node in $(kubectl get nodes -o name); do kubectl label $node topology.kubernetes.io/zone=zone-$RANDOM; done
Then deploy the deployments in the pod-anti-affinities folder using kubectl apply -f pod-anti-affinities/.
Finally, showcase the created pods using kubectl get pods and, for pending pods, also show the events.
This will show:
- Two pods of the deployment preferred-anti-affinity-demo that are successfully scheduled (since the anti-affinity only expresses a preference for the pods being placed into different regions & zones)
- Two pods of the deployment required-anti-affinity-demo, of which one is stuck in Pending, since the required anti-affinity would require a node in a different region to exist
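The required variant's Deployment might look roughly like the sketch below; the replica count, image, and command are assumptions, while the region topologyKey follows from the description above.

```yaml
# Sketch of the required-anti-affinity-demo Deployment: with all lab nodes in
# the same region, only one replica can be scheduled. Image/replicas are assumed.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: required-anti-affinity-demo
spec:
  replicas: 2
  selector:
    matchLabels:
      app: required-anti-affinity-demo
  template:
    metadata:
      labels:
        app: required-anti-affinity-demo
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchLabels:
                app: required-anti-affinity-demo
            topologyKey: topology.kubernetes.io/region
      containers:
      - name: busybox
        image: busybox
        command: ["sleep", "3600"]
```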
This demo showcases scheduling behavior when using pod topology spread constraints. It assumes a two-node cluster.
It is split into two parts, both using a maxSkew of 1 with different settings for whenUnsatisfiable.
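Both manifests presumably share a constraint of roughly this shape in their Pod template, differing only in the whenUnsatisfiable value; the hostname topologyKey is an assumption based on the two-node setup:

```yaml
# Sketch of the topologySpreadConstraints stanza in the Pod template; the
# hostname topologyKey is an assumption, only whenUnsatisfiable differs per demo.
topologySpreadConstraints:
- maxSkew: 1                         # Pod counts per node may differ by at most 1
  topologyKey: kubernetes.io/hostname
  whenUnsatisfiable: DoNotSchedule   # the second manifest uses ScheduleAnyway
  labelSelector:
    matchLabels:
      app: topology-spread-constraints-donotschedule-demo
```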
The demo for DoNotSchedule should be done first:
- Deploy the donotschedule manifest using kubectl apply -f pod-spread/topology-spread-constraints-donotschedule.yaml
- Show that only one pod of the demo Deployment has been scheduled, with the rest being stuck in Pending state, using kubectl get pods -l app=topology-spread-constraints-donotschedule-demo -o wide. The -helper pod has the purpose of blocking the whole node for the Deployment using a podAntiAffinity (see the sketch after this list).
- Delete the helper pod using kubectl delete pod topology-spread-constraints-donotschedule-demo-helper. This "frees" the node again for scheduling Pods of the demo Deployment
- Show that all Pods are now scheduled, and that they are evenly distributed amongst both nodes, using kubectl get po -o wide
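One possible shape of the -helper Pod mentioned above; the exact mechanism in the manifest may differ, but a required podAntiAffinity like this would keep the demo Pods off the helper's node, since required anti-affinity is enforced symmetrically by the scheduler:

```yaml
# Sketch of the helper Pod: its required anti-affinity against the demo label is
# honored symmetrically, so demo Pods cannot land on the helper's node.
apiVersion: v1
kind: Pod
metadata:
  name: topology-spread-constraints-donotschedule-demo-helper
spec:
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchLabels:
            app: topology-spread-constraints-donotschedule-demo
        topologyKey: kubernetes.io/hostname
  containers:
  - name: busybox
    image: busybox
    command: ["sleep", "3600"]
```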
Afterwards, you can contrast this with the behavior of the ScheduleAnyway setting:
- Show the difference between the first and second demo, e.g. by using colordiff -y pod-spread/*. The only differences between the demos are the names, the app labels (and selectors), and the whenUnsatisfiable setting
- Deploy the demo using kubectl apply -f pod-spread/topology-spread-constraints-scheduleanyway.yaml
- Show that all pods are scheduled successfully, but on the same node (since the other one is blocked by the helper), using kubectl get pods -l app=topology-spread-constraints-scheduleanyway-demo -o wide
- Delete all demo pods and show that in the "good" case (where no node is blocked) pods are distributed equally between nodes, using kubectl delete po topology-spread-constraints-scheduleanyway-demo-helper; kubectl delete pods -l app=topology-spread-constraints-scheduleanyway-demo and then kubectl get pods -l app=topology-spread-constraints-scheduleanyway-demo -o wide
Clean up the demo by running kubectl delete -f pod-spread.
This demo showcases how you can protect against having too many pods evicted at once during planned node maintenance by defining a PodDisruptionBudget (PDB) for your Pods. It assumes a two-node cluster, since one node will be drained during the demo, and having too many nodes would spread the pods too far to show the effect of the PDB. Before the demo, set up the deployment and PDB:
kubectl apply -f pod-disruption-budgets/
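The PDB in pod-disruption-budgets/ presumably looks roughly like the sketch below; the concrete minAvailable value is an assumption, it only has to be tight enough that the drain can evict just one demo Pod at a time.

```yaml
# Sketch of the PodDisruptionBudget; the minAvailable value is an assumption
# and must be tight enough that evictions are only allowed one at a time.
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: pdb-demo
spec:
  minAvailable: 4
  selector:
    matchLabels:
      app: pdb-demo
```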
After the pods of the deployment have been scheduled, pick a node with a few pods on it that will be used for the "maintenance", e.g. by running kubectl get pods -l app=pdb-demo -o wide and looking at the NODE column, and use it as $NODE in the commands below.
To then showcase the behavior of the PDB, set up a watch in one terminal using:
watch -n 0.1 kubectl get po -l app=pdb-demo
In a second terminal, cordon and drain the node using
kubectl cordon $NODE
kubectl drain $NODE --ignore-daemonsets
Then jump back to the first terminal with the watch. You should observe that only one pod is evicted at a time.
In the second terminal, you can see in the output of kubectl drain that some errors appear, indicating that the demo pods cannot be evicted due to the PDB.
After finishing the demo, make sure to clean up by running kubectl uncordon $NODE and kubectl delete -f pod-disruption-budgets/.
(Only works when authenticating with a client-certificate)
Extract the certificate from your kubeconfig (assumes only one user to exist):
kubectl config view -o jsonpath='{.users[0].user.client-certificate-data}' --raw | base64 -d > cert.pem
Then use openssl x509 -in cert.pem -noout -text | less
to show the certificate data.
Showcase the Subject, where the username and group are extracted from the Common Name (CN) and Organization (O) fields respectively.