CKA Prep - Workloads and Scheduling
This post is part of a series which contains my study notes for the Certified Kubernetes Administrator (CKA) exam.
Note: Unless specifically indicated, text and examples in this post all come directly from the official Kubernetes documentation. I attempted to locate and extract the relevant portions of the kubernetes.io documentation that applied to the exam objective. However, I encourage you to do your own reading. I cannot guarantee that I got all of the important sections.
Workloads and Scheduling
The Exam Curriculum breaks down the second exam topic into the following objectives:
- Understand how resource limits can affect Pod scheduling
- Understand the primitives used to create robust, self-healing, application deployments
- Understand deployments and how to perform rolling update and rollbacks
- Know how to scale applications
- Use ConfigMaps and Secrets to configure applications
- Awareness of manifest management and common templating tools
Understand How Resource Limits can Affect Pod Scheduling
Relevant search terms for Kubernetes Documentation: Resource requirements, requests and limits, limitrange, resourcequota
Kubernetes Documentation Links
- Resource Management for Pods and Containers
- Assign Memory Resources to Containers and Pods
- Assign CPU Resources to Containers and Pods
- Configure Default Memory Requests and Limits for a Namespace
- Configure Default CPU Requests and Limits for a Namespace
- Configure Minimum and Maximum Memory Constraints for a Namespace
- Configure Minimum and Maximum CPU Constraints for a Namespace
- Configure Memory and CPU Quotas for a Namespace
- Configure a Pod Quota for a Namespace
- Limit Ranges
- Resource Quotas
API Objects
- ResourceQuota - a policy which constrains the total resource allocation for all of the resources in a given namespace.
- LimitRange - a policy which constrains resource allocations for individual resources in a given namespace.
Concepts
- When defining the container spec for a pod, you can specify resource requests and limits:
- “When you specify the resource request for containers in a Pod, the kube-scheduler uses this information to decide which node to place the Pod on.”
- “When you specify a resource limit for a container, the kubelet enforces those limits so that the running container is not allowed to use more of that resource than the limit you set.”
- “The kubelet also reserves at least the request amount of that system resource specifically for that container to use.”
- “If you specify a limit for a resource, but do not specify any request, and no admission-time mechanism has applied a default request for that resource, then Kubernetes copies the limit you specified and uses it as the requested value for the resource.”
- “When you create a Pod, the Kubernetes scheduler selects a node for the Pod to run on. Each node has a maximum capacity for each of the resource types: the amount of CPU and memory it can provide for Pods. The scheduler ensures that, for each resource type, the sum of the resource requests of the scheduled containers is less than the capacity of the node.”
- If the resource requests of a Pod exceed the available capacity on every node, the Pod will not be scheduled (see the kubectl describe node sketch after this list).
- “Note that although actual memory or CPU resource usage on nodes is very low, the scheduler still refuses to place a Pod on a node if the capacity check fails. This protects against a resource shortage on a node when resource usage later increases, for example, during a daily peak in request rate.”
- There are two types of policies that can be defined for a given namespace to manage resource usage.
- A Limit Range policy controls resource utilization for the pods and persistent volume claims created in a given namespace.
- Pod - set minimum and maximum CPU and memory limits. Requests to create pods that fall outside the defined limits will be rejected.
- Container - Define default CPU and memory requests and limits. If a container spec does not have CPU or memory requests and limits defined then the policy will inject the default requests and limits defined in the Limit Range into the Pod spec.
- Container - set minimum and maximum CPU and memory limits
- Persistent Volume Claim - set minimum and maximum storage. Requests to create persistent volume claims that fall outside the defined limits will be rejected.
- “A Resource Quota, defined by a ResourceQuota object, provides constraints that limit aggregate resource consumption per namespace. It can limit the quantity of objects that can be created in a namespace by type, as well as the total amount of compute resources that may be consumed by resources in that namespace.”
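To see the scheduler’s capacity check in practice, compare a node’s allocatable resources with the requests already placed on it (the node name node01 is just a placeholder):
kubectl describe node node01
The output includes Capacity, Allocatable, and an Allocated resources section that sums the CPU and memory requests and limits of the pods already scheduled to the node.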
Sample YAML Files
- Sample Pod definition YAML file with requests and limits specified:
apiVersion: v1
kind: Pod
metadata:
  name: frontend
spec:
  containers:
  - name: app
    image: nginx
    resources:
      requests:
        memory: "64Mi"
        cpu: "250m"
      limits:
        memory: "128Mi"
        cpu: "500m"
- Sample Limit Range Policy YAML file:
apiVersion: "v1"
kind: "LimitRange"
metadata:
  name: "accounting-limits"
spec:
  limits:
  - type: "Pod"
    min: # Hard Limit
      cpu: "100m"
      memory: "30Mi"
    max: # Hard Limit
      cpu: "2"
      memory: "2Gi"
  - type: "Container"
    defaultRequest: # Default Request
      cpu: "100m"
      memory: "150Mi"
    default: # Default Limit
      cpu: "350m"
      memory: "300Mi"
    min: # Hard Limit
      cpu: "75m"
      memory: "4Mi"
    max: # Hard Limit
      cpu: "1"
      memory: "1Gi"
    maxLimitRequestRatio:
      cpu: "5"
      memory: "7"
  - type: "PersistentVolumeClaim"
    min:
      storage: "1Gi"
    max:
      storage: "15Gi"
- Sample Resource Quota Definition YAML File:
apiVersion: v1
kind: ResourceQuota
metadata:
  name: accounting-quota
spec:
  hard:
    pods: "20"
    requests.cpu: "1"
    requests.memory: 1Gi
    requests.ephemeral-storage: 5Gi
    limits.cpu: "2"
    limits.memory: 4Gi
    limits.ephemeral-storage: 10Gi
    configmaps: "50"
    count/deployments.apps: "5"
    persistentvolumeclaims: "10"
    secrets: "20"
    services: "5"
Understand the Primitives Used to Create Robust, Self-healing, Application Deployments
Relevant search terms for Kubernetes Documentation: replicaset, deployment, statefulset, daemonset
Kubernetes Documentation Links
API Objects
- ReplicaSet - “A ReplicaSet’s purpose is to maintain a stable set of replica Pods running at any given time. As such, it is often used to guarantee the availability of a specified number of identical Pods.” In practice, ReplicaSets are usually created and managed for you by Deployments.
- Deployment - Adds an additional layer of control for ReplicaSets via rollouts and rollbacks.
- StatefulSet - “Manages the deployment and scaling of a set of pods and provides guarantees about the ordering and uniqueness of these pods.” Pods created by a StatefulSet are all based on the same container spec; however, in contrast with a Deployment, the StatefulSet maintains a distinct identity for each pod (see the sketch after this list).
StatefulSets are valuable for applications that require one or more of the following.
- Stable, unique network identifiers.
- Stable, persistent storage.
- Ordered, graceful deployment and scaling.
- Ordered, automated rolling updates.
In the above, stable is synonymous with persistence across Pod (re)scheduling. If an application doesn’t require any stable identifiers or ordered deployment, deletion, or scaling, you should deploy your application using a workload object that provides a set of stateless replicas. Deployment or ReplicaSet may be better suited to your stateless needs.
- DaemonSet - Manages scenarios such as logging or monitoring agents where it is necessary to have one pod running on all nodes or on a subset of nodes. As nodes are added to or removed from the cluster, the DaemonSet automatically creates a pod on each new node and removes the pods that ran on deleted nodes.
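To make the StatefulSet identity and storage guarantees above concrete, here is a minimal sketch (a hypothetical example: the names web and nginx-headless are placeholders, and the volumeClaimTemplates block assumes a default StorageClass exists):
apiVersion: v1
kind: Service
metadata:
  name: nginx-headless
spec:
  clusterIP: None               # headless Service gives each pod a stable DNS name
  selector:
    app: web
  ports:
  - port: 80
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: nginx-headless   # ties pod DNS names (web-0, web-1, ...) to the headless Service
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: nginx
        image: nginx
        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:         # one PersistentVolumeClaim per pod, kept across rescheduling
  - metadata:
      name: www
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 1Gi
The pods are created in order as web-0, web-1, and web-2, and each keeps its name and its PersistentVolumeClaim if it is rescheduled.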
Understand Deployments and How to Perform Rolling Update and Rollbacks
Relevant search terms for Kubernetes Documentation: deployment
Kubernetes Documentation Links
Concepts
- When a deployment is created, the deployment controller creates a ReplicaSet which creates the corresponding pods.
- A change made to a deployment resource triggers a rollout, which is managed by the deployment controller. The controller applies the change by creating a new ReplicaSet and then adding new pods to it while removing pods from the old ReplicaSet at a controlled rate (see the strategy sketch after this list). Each such change creates a new revision of the deployment. A rollout can be paused, which lets the operator make multiple changes to the deployment configuration without triggering additional rollouts.
- If the new deployment is not stable then it is possible to rollback to a previous revision of the deployment.
- Deployments can be scaled up and down.
- The kubectl rollout command can be used with deployments, daemonsets, and statefulsets. However, only deployments can be paused and resumed.
- To document changes made to objects as annotations, use the --record=true flag (or simply add the --record flag).
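The “controlled rate” described above comes from the Deployment’s update strategy. A minimal sketch of the relevant fields (the name nginx and the percentages are only illustrative defaults):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 4
  revisionHistoryLimit: 10      # old ReplicaSets kept around for rollbacks
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 25%             # extra pods allowed above the desired count during a rollout
      maxUnavailable: 25%       # pods that may be unavailable during a rollout
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.16.1
Setting maxUnavailable to 0 makes the controller bring up a replacement pod before it removes an old one.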
Relevant Commands
Create a deployment called “my-dep” using the nginx image with 3 replicas running on port 8080 and pass the “date” command
kubectl create deployment my-dep --image=nginx \
--replicas=3 --port=8080 -- date
Generate a file called “deploy.yml” for a deployment called “my-dep” using the nginx image with 3 replicas
kubectl create deployment my-dep --image=nginx \
--replicas=3 --dry-run=client -o yaml > deploy.yml
NOTE: useful in cases where additional changes to the definition are required prior to creating the deployment
Update the image used in a deployment called “nginx-deployment”
kubectl set image deployment/nginx-deployment \
nginx=nginx:1.16.1
Check the status for the rollout of a deployment called nginx
kubectl rollout status deployment/nginx
Show the rollout history for a deployment called “abc”
kubectl rollout history deployment/abc
Show the details of revision 3 of a deployment called “abc”
kubectl rollout history deployment/abc --revision=3
Rollback to the previous revision of a deployment called “abc”
kubectl rollout undo deployment/abc
Rollback to revision 3 of a deployment called “abc”
kubectl rollout undo deployment/abc --to-revision=3
Pause the rollout of the deployment called "nginx" so that several changes can be made without triggering the creation of new ReplicaSets.
kubectl rollout pause deployment/nginx
Resume the rollout of the deployment called "nginx", which triggers the creation of a new revision and ReplicaSet for the accumulated changes.
kubectl rollout resume deployment/nginx
Perform a rolling restart of the deployment called "nginx". The pods are replaced gradually according to the deployment's rolling update strategy.
kubectl rollout restart deployment/nginx
Know How to Scale Applications
Relevant search terms for Kubernetes Documentation: scale, horizontalpodautoscaler, averagevalue
Kubernetes Documentation Links
- ReplicaSet
- Deployments
- StatefulSets
- Scale Your App
- Scale a StatefulSet
- Horizontal Pod Autoscaling
- HorizontalPodAutoscaler Walkthrough
API Objects
- HorizontalPodAutoscaler - (shortname hpa) “automatically updates a workload resource (such as a Deployment or StatefulSet), with the aim of automatically scaling the workload to match demand.”
Concepts
- You can change the number of replicas for a replica set, deployment, or stateful set manually by using the kubectl scale command or by modifying the manifest using kubectl edit.
- You can also employ the Kubernetes Horizontal Pod Autoscaler to automatically scale the number of pods up or down to match demand based on specified metrics such as CPU percentage thresholds or memory utilization.
- In order to leverage the Horizontal Pod Autoscaler, it is necessary to do the following:
- Deploy the Metrics Server to the cluster. “The Kubernetes Metrics Server collects resource metrics from the kubelets in your cluster, and exposes those metrics through the Kubernetes API, using an APIService to add new kinds of resource that represent metric readings.”
- Define resource requests for the containers in the Pods that the Horizontal Pod Autoscaler will work with. “Please note that if some of the Pod’s containers do not have the relevant resource request set, CPU utilization for the Pod will not be defined and the autoscaler will not take any action for that metric.”
- If you create a Horizontal Pod Autoscaler using the kubectl autoscale command, the only metric you can apply during creation is the cpu-percent metric.
- If you want to introduce an additional metric then you will need to either:
  - Create a manifest file for the Horizontal Pod Autoscaler containing multiple metrics and apply it
  - Create the HPA using the kubectl autoscale command and then edit the resource after creation using the kubectl edit command
  - Create the resource, extract the manifest using kubectl get hpa my-hpa -o yaml > hpa.yml, update the manifest to add the additional metrics, and then apply the updated manifest.
Relevant Commands
Scaling Deployments, Stateful Sets, or Replica Sets Manually
Scale a Replica Set (short name rs) called "web" to 5 replicas
kubectl scale --replicas=5 rs/web
Scale a Deployment called “redis” to five replicas only if it currently has three replicas
kubectl scale --current-replicas=3 --replicas=5 deployment/redis
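The same command also covers the Stateful Sets mentioned in the heading above; for example, to scale a StatefulSet called "web" (a placeholder name) to 3 replicas:
kubectl scale statefulset/web --replicas=3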
Scaling Deployments, Stateful Sets, or Replica Sets Automatically Using the Horizontal Pod Autoscaler
Step 1: Create a YAML manifest for a deployment
kubectl create deployment my-dep --image=nginx \
--replicas=3 --dry-run=client -o yaml > deploy.yml
Step 2: Update the spec section of the deploy.yml file to add resource requests for CPU and memory, which are a necessary prerequisite for the Horizontal Pod Autoscaler
apiVersion: apps/v1
kind: Deployment
metadata:
name: my-dep
spec:
replicas: 3
selector:
matchLabels:
app: my-dep
template:
metadata:
labels:
app: my-dep
spec:
containers:
- image: nginx
name: nginx
resources:
requests:
cpu: "0.5"
memory: "200Mi"
Step 3: After updating the deploy.yml file with the changes to the resources section, create the deployment
kubectl apply -f deploy.yml
Step 4: Create a Horizontal Pod Autoscaler called “my-dep”
kubectl autoscale deployment my-dep --min=2 --max=5 --cpu-percent=80
Step 5: Generate a manifest file for the “my-dep” Horizontal Pod Autoscaler
kubectl get hpa my-dep -o yaml > hpa.yml
Step 6: Modify the Manifest to include an additional metric (memory)
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
name: my-dep
namespace: default
spec:
scaleTargetRef:
apiVersion: apps/v1
kind: Deployment
name: my-dep
minReplicas: 2
maxReplicas: 5
metrics:
- type: Resource
resource:
name: cpu
target:
averageUtilization: 80
type: Utilization
- type: Resource
resource:
name: memory
target:
type: AverageValue
averageValue: 1Gi
Step 7: Deploy the updated manifest for the Horizontal Pod Autoscaler
kubectl apply -f hpa.yml
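To confirm that the autoscaler picked up both metrics and is tracking the deployment, the usual read-only commands apply:
kubectl get hpa my-dep
kubectl describe hpa my-dep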
Use ConfigMaps and Secrets to Configure Applications
Relevant search terms for Kubernetes Documentation: configmap, secret
Kubernetes Documentation Links
- ConfigMaps
- Configure a Pod to Use a ConfigMap
- Volumes: ConfigMaps
- Secrets
- Managing Secrets using Kubectl
- Distribute Credentials Securely Using Secrets
- Volumes: Secrets
API Objects
- ConfigMap - “an API object used to store non-confidential data in key-value pairs.”
- Secret - “an object that contains a small amount of sensitive data such as a password, a token, or a key.”
Concepts
- Config maps and Secrets store key / value pairs in Kubernetes so that configuration data is not baked into container images.
- Config map data is stored in plain text, while Secret values are base64 encoded. Secrets are meant for confidential data like passwords, but Kubernetes does not encrypt them by default; base64 is an encoding, not encryption.
- Config maps and Secrets can be created by:
  - Passing literal values as arguments to the kubectl create command
  - Passing a path to a file or directory to the kubectl create command
  - Creating a YAML manifest and then using the kubectl apply -f filename.yml command
- There are several different ways to expose the data stored in a config map or a secret to a container running in a Pod:
  - All or specific key / value pairs from a config map or a secret can be exposed to a container as environment variables
  - After exposing config map or secret key / value pairs as environment variables, they can be referenced in Pod commands
  - All or specific key / value pairs from a config map or a secret can be exposed to a container as files via volumes
  - A process running in the container can pull the data using the Kubernetes API
  - Container registry credentials can also be stored in a secret and referenced via imagePullSecrets so that the kubelet can authenticate when pulling pod images (see the sketch below)
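A minimal sketch of that last point, using a hypothetical private registry and deliberately fake credentials; the docker-registry secret type and the imagePullSecrets field are the standard mechanism:
kubectl create secret docker-registry regcred \
  --docker-server=registry.example.com \
  --docker-username=builduser \
  --docker-password=s3cret
The secret is then referenced from the Pod spec so the kubelet can authenticate when pulling the image:
apiVersion: v1
kind: Pod
metadata:
  name: private-app
spec:
  imagePullSecrets:
  - name: regcred              # kubelet uses this secret when pulling the image
  containers:
  - name: app
    image: registry.example.com/team/app:1.0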
Relevant Commands
Creating Config Maps
Create a config map by specifying values.
kubectl create configmap smtp-config \
--from-literal=smtp-server=smtp1.domain.com \
--from-literal=smtp-port=587 \
--from-literal=smtp-protocol=tls
Create a config map from the files in a folder.
kubectl create configmap env-test \
--from-file=env/test-env-values
Creating Secrets
Create a Secret by specifying values.
kubectl create secret generic smtp-cred \
--from-literal=login=smtpuser \
--from-literal=pass=p@ss
Create a Secret from the files in a folder.
kubectl create secret generic auth-test-params \
--from-file=auth/test
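Because Secret values are only base64 encoded, you can read one back with jsonpath and base64 (smtp-cred and its login key come from the example above):
kubectl get secret smtp-cred -o jsonpath='{.data.login}' | base64 --decode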
Injecting ConfigMap Data into Containers as Environment Variables
To expose all of the key / value pairs from a config map in the Pod YAML resource definition as environment variables, use envFrom in the container spec with a configMapRef and name to reference the config map.
apiVersion: v1
kind: Pod
metadata:
name: my-web-app
spec:
containers:
- name: web-app
image: nginx
envFrom:
- configMapRef:
name: my-configmap
To expose specific key / value pairs from a config map in the Pod YAML resource definition as environment variables, use env in the container spec with a configMapKeyRef that specifies the name of the config map and the key to reference. “You can use ConfigMap-defined environment variables in the command and args of a container using the $(VAR_NAME) Kubernetes substitution syntax.”
apiVersion: v1
kind: Pod
metadata:
name: animal-web-app
spec:
containers:
- name: animal-web-app
image: my-animal-web-app
command: [ "/bin/echo", "$(DOG_BREED) $(CAT_BREED)" ]
env:
- name: DOG_BREED
valueFrom:
configMapKeyRef:
name: dog-beagle-configmap
key: breed
- name: CAT_BREED
valueFrom:
configMapKeyRef:
name: cat-persian-configmap
key: breed
Injecting ConfigMap Data into Containers as Files
To expose all of the key / value pairs from a config map in the Pod YAML resource definition as files, add a volumes section to the Pod spec which references the configMap, and then add a volumeMounts section to the container spec with the name of the volume and the mountPath.
apiVersion: v1
kind: Pod
metadata:
name: my-web-app
spec:
containers:
- name: web-app
image: nginx
volumeMounts:
- name: config-volume
mountPath: /etc/config
volumes:
- name: config-volume
configMap:
name: special-config
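A quick way to verify the mount (my-web-app and /etc/config come from the example above); each key in the config map appears as a file under the mount path:
kubectl exec my-web-app -- ls /etc/config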
To expose specific key / value pairs from a config map in the Pod YAML resource definition as files, add a volumes section to the Pod spec which references the configMap, list the desired keys under items, and then add a volumeMounts section to the container spec with the name of the volume and the mountPath.
apiVersion: v1
kind: Pod
metadata:
name: animal-web-app
spec:
containers:
- name: animal-web-app
image: my-animal-web-app
volumeMounts:
- name: dog-volume
mountPath: /etc/config
volumes:
- name: dog-volume
configMap:
name: dog-beagle-configmap
items:
- key: breed
path: breed
      - key: temperament
        path: temperament
Injecting Secret Data into Containers as Environment Variables
To expose all of the key / value pairs from a secret in the Pod YAML resource definition as environment variables, use envFrom in the container spec with a secretRef and name to reference the secret.
apiVersion: v1
kind: Pod
metadata:
name: my-web-app
spec:
containers:
- name: web-app
image: nginx
envFrom:
- secretRef:
name: my-secret
To expose specific key / value pairs from a secret in the Pod YAML resource definition as environment variables, use env in the container spec with a secretKeyRef that specifies the name of the secret and the key to reference. “You can use secret-defined environment variables in the command and args of a container using the $(VAR_NAME) Kubernetes substitution syntax.”
apiVersion: v1
kind: Pod
metadata:
name: animal-web-app
spec:
containers:
- name: animal-web-app
image: my-animal-web-app
command: [ "/bin/echo", "$(SECRET_DOG) $(SECRET_CAT)" ]
env:
- name: SECRET_DOG
valueFrom:
secretKeyRef:
          name: dog-beagle-secret
key: breed
- name: SECRET_CAT
valueFrom:
secretKeyRef:
name: cat-persian-secret
key: breed
Injecting Secret Data into Containers as Files
To expose all of the key / value pairs from a secret in the Pod YAML resource definition as files, add a volumes section to the Pod spec which references the secret, and then add a volumeMounts section to the container spec with the name of the volume and the mountPath.
apiVersion: v1
kind: Pod
metadata:
name: my-web-app
spec:
containers:
- name: web-app
image: nginx
volumeMounts:
- name: animal-volume
mountPath: /etc/config
      readOnly: true
volumes:
- name: animal-volume
secret:
secretName: animal-secret
To expose specific key / value pairs from a secret in the Pod YAML resource definition as files, add a volumes section to the Pod spec which references the secret, list the desired keys under items, and then add a volumeMounts section to the container spec with the name of the volume and the mountPath.
apiVersion: v1
kind: Pod
metadata:
name: animal-web-app
spec:
containers:
- name: animal-web-app
image: my-animal-web-app
volumeMounts:
- name: animal-secret-volume
mountPath: /etc/config
readOnly: true
volumes:
- name: animal-secret-volume
secret:
secretName: animal-secret
items:
- key: animal-login
path: animal-login
- key: animal-password
path: animal-password
Awareness of Manifest Management and Common Templating Tools
Relevant search terms for Kubernetes Documentation: kustomize, manage kubernetes objects, kubectl commands
Kubernetes Documentation Links
- Manage Kubernetes Objects
- Declarative Management of Kubernetes Objects Using Kustomize
- kubectl reference docs
Concepts
- There are several ways to create Kubernetes objects using kubectl:
  - Create objects imperatively using kubectl create followed by the type of object (e.g., deployment, service, etc.) and the supported parameters required to create the object.
  - Create objects declaratively using kubectl apply -f filename.yaml, where the file contains the YAML resource definitions for one or more resources.
  - Create objects declaratively via Kustomize using kubectl apply -k <kustomization-directory>. The <kustomization-directory> must contain a manifest file called kustomization.yml which can include:
    - A configMapGenerator which can reference a file containing the keys and values for a config map, a file with environment parameters, or the literal key value pairs.
    - A secretGenerator which can reference a file containing the keys and values for a secret, or the literal key value pairs.
    - A resources list which references the YAML manifests for additional resources which will be created by the process. All of the resources listed in the kustomization.yml must exist in the same folder.
    - A generatorOptions section which can define labels or annotations to be applied to all generated config maps and secrets, and can also disable the suffix that is automatically appended to generated config map and secret names with the option disableNameSuffixHash: true.
    - Additional parameters which, when included, are applied to all created resources, including namespace, namePrefix, nameSuffix, commonLabels, and commonAnnotations.
- The names of the config maps or secrets defined in the kustomization.yml can be used in the resource files referenced by the kustomization.yml. In this case, the generated YAML will contain the config map or secret resources as well as the other resources which reference them, and the references in those other resource definitions are updated to match the generated names.
- Run kubectl kustomize <kustomization-directory> to generate a YAML manifest containing all of the resources that will be created, without applying them (see the sample kustomization.yml below).
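A minimal sketch of a kustomization.yml that exercises the generators and common fields described above; the literal values are placeholders, and deployment.yml is assumed to be a Deployment manifest sitting in the same folder:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

namespace: accounting
namePrefix: acct-
commonLabels:
  team: accounting

resources:
- deployment.yml

configMapGenerator:
- name: smtp-config
  literals:
  - smtp-server=smtp1.domain.com
  - smtp-port=587

secretGenerator:
- name: smtp-cred
  literals:
  - login=smtpuser
  - pass=p@ss

generatorOptions:
  disableNameSuffixHash: true   # keep predictable names instead of hashed suffixes
Running kubectl kustomize . in that folder prints the generated config map, secret, and Deployment, with any references to smtp-config or smtp-cred inside deployment.yml rewritten to the final generated names.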
And that’s a wrap for this topic.