Taint, Toleration, Label, And Node Selector In Kubernetes
Taints, tolerations, labels, and node selectors in Kubernetes can be confusing in daily work, so they are worth a post here.
A taint works like a default attribute assigned to a node, like a tattoo, and this tattoo has its magic effects.
```shell
kubectl taint nodes node1 key1=value1:NoSchedule   # add taint
kubectl taint nodes node1 key1=value1:NoSchedule-  # remove taint
```
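Under the hood, the taint lands in the node's spec. A sketch of what the resulting Node object would contain after the command above (field values taken from that command):

```yaml
# Sketch of the relevant part of `kubectl get node node1 -o yaml` after tainting
apiVersion: v1
kind: Node
metadata:
  name: node1
spec:
  taints:
  - key: key1
    value: value1
    effect: NoSchedule
```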
The first command translates to: set an attribute on node1 so that no pod will be scheduled there (NoSchedule) unless the pod has a matching toleration for key1=value1.
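NoSchedule is one of three effects a taint can carry; the other two are PreferNoSchedule (a soft version) and NoExecute (which also evicts already-running pods without a matching toleration). For example:

```shell
# The effect after the ':' can be one of three values
kubectl taint nodes node1 key1=value1:NoSchedule        # hard: no new pods without a toleration
kubectl taint nodes node1 key1=value1:PreferNoSchedule  # soft: scheduler tries to avoid the node
kubectl taint nodes node1 key1=value1:NoExecute         # also evicts running pods lacking a toleration
```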
A toleration is defined on a pod/deployment so that it can ignore a node's taint tattoo.
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    env: test
spec:
  containers:
  - name: nginx
    image: nginx
    imagePullPolicy: IfNotPresent
  tolerations:
  - key: "example-key"
    operator: "Exists"  # can also be operator: "Equal", followed by a value
    effect: "NoSchedule"
```
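For the taint key1=value1:NoSchedule from earlier, an Equal-operator toleration has to match both key and value. A sketch of that fragment:

```yaml
# Toleration matching the key1=value1:NoSchedule taint added above
tolerations:
- key: "key1"
  operator: "Equal"
  value: "value1"
  effect: "NoSchedule"
```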
A label is used to mark/tag resources so that we can run bulk actions against them, such as displaying, removing, or applying changes to them as a group.
```shell
kubectl label nodes <your-node-name> disktype=ssd
```
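Once a label is in place, the label selector flag (`-l`) enables the bulk actions mentioned above, for example:

```shell
kubectl get nodes -l disktype=ssd                # list only nodes carrying the label
kubectl label nodes <your-node-name> disktype-   # a trailing '-' removes the label
```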
A node selector is easy to confuse with taints and node affinity. It is the simplest way to manually tell a pod which nodes to use; without it, Kubernetes will pick any node that satisfies the taint/toleration rules.
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    env: test
spec:
  containers:
  - name: nginx
    image: nginx
    imagePullPolicy: IfNotPresent
  nodeSelector:
    disktype: ssd
```
nodeName can also be used in a similar way to a node selector, but the trade-off is like FQDN-based vs. IP-based access: in an environment where node names keep changing, hard-coding static node names will cause trouble.
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  nodeName: foo-node  # schedule pod to a specific node
  containers:
  - name: nginx
    image: nginx
    imagePullPolicy: IfNotPresent
```
Node affinity gives you a more flexible and expressive way to select nodes. You can weight candidate nodes, much like path preference works in BGP routing. It comes with two keywords, requiredDuringSchedulingIgnoredDuringExecution (hard) and preferredDuringSchedulingIgnoredDuringExecution (soft); the rest works similarly to tolerations.
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: with-affinity-anti-affinity
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: kubernetes.io/os
            operator: In
            values:
            - linux
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 1
        preference:
          matchExpressions:
          - key: label-1
            operator: In
            values:
            - key-1
      - weight: 50
        preference:
          matchExpressions:
          - key: label-2
            operator: In
            values:
            - key-2
  containers:
  - name: with-node-affinity
    image: k8s.gcr.io/pause:2.0
```
The node with label label-2:key-2 will be preferred because it carries the highest weight (50). If nodes match multiple preferences, the node with the highest sum of weights wins.
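That weight arithmetic can be sketched in a few lines of Python (a toy model, not the real scheduler): for each node that already passed the required terms, every matching preference adds its weight, and the highest total wins. Node names and labels below are made up for illustration.

```python
# Toy model of preferredDuringSchedulingIgnoredDuringExecution scoring.
# Each preference is (label_key, label_value, weight); a node scores the
# sum of weights for every preference its labels satisfy.

def score_node(node_labels, preferences):
    return sum(w for key, value, w in preferences if node_labels.get(key) == value)

# Preferences from the example manifest above
preferences = [("label-1", "key-1", 1), ("label-2", "key-2", 50)]

# Hypothetical nodes that already satisfied the required terms
nodes = {
    "node-a": {"label-1": "key-1"},                      # matches only weight 1
    "node-b": {"label-2": "key-2"},                      # matches only weight 50
    "node-c": {"label-1": "key-1", "label-2": "key-2"},  # matches both: 51
}

scores = {name: score_node(labels, preferences) for name, labels in nodes.items()}
print(scores)                      # {'node-a': 1, 'node-b': 50, 'node-c': 51}
print(max(scores, key=scores.get)) # node-c
```

With a single label each, node-b beats node-a (50 vs. 1); node-c wins overall because it matches both preferences and collects the sum.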