It’s spring cleaning time and your K8s deployment might have accumulated some dirt since the last time you scrubbed it. If that's the case, here's how to manage this with labels.
Labels communicate precious bits of information about the ownership of a node, a pod, or other objects - all for faster tracking and operational decision-making.
But many engineers don't know how to use them efficiently.
Without further ado, here are 3 ways you can fine-tune your labeling strategy for 2023 and improve your team’s efficiency.
If labels are a totally new topic for you, check this guide first.
1. Group resources for object queries
If you add the same label key-value pair to multiple resources, other people can easily query for all of them.
Suppose you discover that a development environment is unavailable. At this point, you can quickly check the status of all pods carrying the label environment:dev.
Here’s an example command:
> kubectl get pods -l 'environment=dev'
NAME    READY   STATUS             RESTARTS   AGE
nginx   0/1     CrashLoopBackOff   1          5m
This lets you instantly see the affected pods and resolve the issue much faster.
The alternative? Going through all the resources and picking just the ones in the dev environment.
In a complex scenario with many deployments, finding the right dev pods would take you ages if the team didn’t add the environment:dev label to the resources. You’d have to use a generic kubectl get pods command and then comb through the output using a tool like grep.
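For example, the fallback could look something like this (assuming the pod or namespace names happen to hint at the environment, which you can’t always count on):
> kubectl get pods --all-namespaces | grep dev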
2. Perform bulk operations
Imagine that poor engineer who has to remove all staging environments every night to reduce costs.
You can automate this task using labels.
Here’s a command that deletes all objects labeled environment:local, environment:dev or environment:staging:
> kubectl delete deployment,services,statefulsets -l 'environment in (local,dev,staging)'
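If you want that cleanup to happen automatically every night, one option is to schedule the command with cron on a machine that has kubectl access. A minimal sketch, assuming a 10 PM cutoff (the schedule is just an example):
0 22 * * * kubectl delete deployment,services,statefulsets -l 'environment in (local,dev,staging)'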
3. Schedule pods based on node labels
K8s uses labels a lot for scheduling pods to the right nodes. Smart labeling gives you more control over where your workloads run by making K8s schedule specific deployments on specific nodes.
At this point, there are no nodes with the label critical:true.
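You can confirm that with a label selector - the query comes back empty:
> kubectl get nodes -l 'critical=true'
No resources found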
Let’s create a pod that has to be scheduled on a node with the label critical:true using a node selector. Here’s a minimal pod.yaml for that (nginx is just an example image):
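apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
    - name: nginx
      image: nginx
  nodeSelector:
    critical: "true"  # quoted, since label values must be strings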
Apply the configuration and check the pod’s status:
> kubectl apply -f pod.yaml
> kubectl get pods
NAME    READY   STATUS    RESTARTS   AGE
nginx   0/1     Pending   0          1m
The pod is stuck in Pending. Let’s check the events to see why:
> kubectl get events --field-selector involvedObject.name=nginx
LAST SEEN   TYPE      REASON             OBJECT      MESSAGE
46s         Warning   FailedScheduling   pod/nginx   0/1 nodes are available: 1 node(s) didn't match Pod's node affinity/selector. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.
The pod can’t get scheduled on any of the nodes since none of them carries the required label. So, let’s add it to a node:
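Assuming the node is called node-1 (substitute the name that kubectl get nodes shows you), the command would be:
> kubectl label nodes node-1 critical=true
node/node-1 labeled
With the label in place, the scheduler picks up the pending nginx pod and moves it to Running.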