Pod Scheduling in Kubernetes: Affinity-Based Scheduling
Affinity Scheduling
Affinity scheduling is a more flexible strategy than pinning a Pod to a node outright (with nodeName or nodeSelector). It lets you define a set of rules the scheduler uses to pick the most suitable node for a Pod; with the preferred (soft) variants, if no node fully matches, the Pod can still be scheduled onto another node.
The three main types of affinity scheduling and their use cases:
Affinity type | Description | Example scheduling rule |
---|---|---|
Node Affinity | Rules for which nodes a Pod may be scheduled onto. | Select nodes by label, e.g. nodes with specific hardware or in a specific region. |
- requiredDuringSchedulingIgnoredDuringExecution | All rules must be satisfied for scheduling (hard limit). | |
- nodeSelectorTerms | List of node selector terms. | |
- matchFields | Node selector requirements listed by node field. | |
- matchExpressions | Node selector requirements listed by node label (recommended). | |
- preferredDuringSchedulingIgnoredDuringExecution | Prefer nodes that satisfy the rules; if none do, schedule elsewhere (soft limit). | |
- preference | A node selector term, tied to a weight. | |
- weight | Preference weight, in the range 1-100. | |
Pod Affinity | Rules placing a Pod in the same topology domain as specified existing Pods. | Applications that interact frequently, to cut communication latency. |
- requiredDuringSchedulingIgnoredDuringExecution | Must share a topology domain with the specified Pods. | |
- preferredDuringSchedulingIgnoredDuringExecution | Prefer sharing a topology domain with the specified Pods; otherwise schedule into another domain (soft limit). | |
Pod Anti-Affinity | Rules keeping a Pod out of the topology domains of specified existing Pods. | Spread an application's replicas across topology domains for availability and fault tolerance. |
- requiredDuringSchedulingIgnoredDuringExecution | Must not share a topology domain with the specified Pods. | |
- preferredDuringSchedulingIgnoredDuringExecution | Prefer not to share a topology domain with the specified Pods; otherwise may schedule into the same domain (soft limit). |
Every affinity type supports two modes:
- requiredDuringSchedulingIgnoredDuringExecution: rules that must be satisfied at scheduling time; if no node matches, the Pod is not scheduled. If a node's labels later change so the rule no longer matches, the Pod nevertheless stays on that node.
- preferredDuringSchedulingIgnoredDuringExecution: rules the scheduler prefers at scheduling time; if no node matches, the Pod can still be scheduled onto another node.
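The two modes can be combined in a single Pod spec. Below is a minimal sketch, reusing the nodeenv label that appears throughout this article (the Pod name is illustrative): hard-require that the label exist, and softly prefer the value test.
apiVersion: v1
kind: Pod
metadata:
  name: affinity-modes-demo        # illustrative name, not from this article's cluster
  namespace: test
spec:
  containers:
  - name: nginx
    image: nginx:1.17.1
  affinity:
    nodeAffinity:
      # Hard requirement: only nodes that carry a nodeenv label at all qualify.
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: nodeenv
            operator: Exists
      # Soft preference: among qualifying nodes, favor nodeenv=test.
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 50
        preference:
          matchExpressions:
          - key: nodeenv
            operator: In
            values: ["test"]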
NodeAffinity (Node Affinity)
- NodeAffinity lets you specify, via node labels, which nodes a Pod must or should be scheduled onto.
- NodeAffinity configuration options:
  - requiredDuringSchedulingIgnoredDuringExecution (hard limit)
    - nodeSelectorTerms: list of node selector terms; a node qualifies if it satisfies any one term, and within a term every requirement must hold.
    - matchFields: node selector requirements listed by node field.
    - matchExpressions: node selector requirements listed by node label (recommended), each consisting of:
      - key: the label key
      - values: the label values
      - operator: the relation; supports Exists, DoesNotExist, In, NotIn, Gt, Lt (a Gt sketch follows this list)
  - preferredDuringSchedulingIgnoredDuringExecution (soft limit)
    - preference: a node selector term, associated with the corresponding weight.
      - matchFields: node selector requirements listed by node field.
      - matchExpressions: node selector requirements listed by node label, each consisting of:
        - key: the label key
        - values: the label values
        - operator: the relation; supports In, NotIn, Exists, DoesNotExist, Gt, Lt
    - weight: preference weight, in the range 1-100.
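The examples in this article use only the In operator. As a sketch of the numeric operators (the cpu-count node label is an assumption, not a label from this cluster), the following requirement matches only nodes whose cpu-count label value is an integer greater than 4:
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
      - matchExpressions:
        - key: cpu-count        # hypothetical label an admin might set per node
          operator: Gt          # numeric greater-than against the label value
          values: ["4"]         # Gt/Lt take a single integer-valued string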
Hard limit (required) configuration
Scheduling fails here because no node carrying the required nodeenv label (with value test or xxx) exists yet:
# vim pod-nodeaffinity-required.yaml
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-nodeaffinity-required
  namespace: test
spec:
  containers:
  - name: nginx
    image: nginx:1.17.1
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: nodeenv
            operator: In
            values: ["test","xxx"]
[root@k8s-master ~]# kubectl create ns test
namespace/test created
[root@k8s-master ~]# kubectl apply -f pod-nodeaffinity-required.yaml
pod/pod-nodeaffinity-required created
[root@k8s-master ~]# kubectl get pods pod-nodeaffinity-required -n test
NAME READY STATUS RESTARTS AGE
pod-nodeaffinity-required 0/1 Pending 0 22s
[root@k8s-master ~]# kubectl describe pods pod-nodeaffinity-required -n test
Name: pod-nodeaffinity-required
Namespace: test
Priority: 0
Node: <none>
Labels: <none>
Annotations: <none>
Status: Pending
IP:
IPs: <none>
Containers:
nginx:
Image: nginx:1.17.1
Port: <none>
Host Port: <none>
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-5f6rd (ro)
Conditions:
Type Status
PodScheduled False
Volumes:
kube-api-access-5f6rd:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 34s default-scheduler 0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 node(s) didn't match Pod's node affinity/selector.
Warning FailedScheduling 33s default-scheduler 0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 node(s) didn't match Pod's node affinity/selector.
Next we label the nodes and test again:
[root@k8s-master ~]# kubectl label nodes k8s-node1 nodeenv=dev
node/k8s-node1 labeled
[root@k8s-master ~]# kubectl label nodes k8s-node2 nodeenv=test
node/k8s-node2 labeled
[root@k8s-master ~]# kubectl delete -f pod-nodeaffinity-required.yaml
pod "pod-nodeaffinity-required" deleted
[root@k8s-master ~]# kubectl apply -f pod-nodeaffinity-required.yaml
pod/pod-nodeaffinity-required created
[root@k8s-master ~]# kubectl describe pods pod-nodeaffinity-required -n test
Name: pod-nodeaffinity-required
Namespace: test
Priority: 0
Node: k8s-node2/192.168.58.233
Start Time: Thu, 16 Jan 2025 04:14:35 -0500
Labels: <none>
Annotations: cni.projectcalico.org/containerID: eb576e210ed0daf158fc97706a7858428fdcbce61d89936cd60323c184bf65d7
cni.projectcalico.org/podIP: 10.244.169.130/32
cni.projectcalico.org/podIPs: 10.244.169.130/32
Status: Running
IP: 10.244.169.130
IPs:
IP: 10.244.169.130
Containers:
nginx:
Container ID: docker://b58aa001a6b25893a091a726ede2ea57d96e6209c11a8c17d269d78087db505e
Image: nginx:1.17.1
Image ID: docker-pullable://nginx@sha256:b4b9b3eee194703fc2fa8afa5b7510c77ae70cfba567af1376a573a967c03dbb
Port: <none>
Host Port: <none>
State: Running
Started: Thu, 16 Jan 2025 04:14:38 -0500
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-m5zx9 (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
kube-api-access-m5zx9:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 11s default-scheduler Successfully assigned test/pod-nodeaffinity-required to k8s-node2
Normal Pulled <invalid> kubelet Container image "nginx:1.17.1" already present on machine
Normal Created <invalid> kubelet Created container nginx
Normal Started <invalid> kubelet Started container nginx
[root@k8s-master ~]# kubectl get pods pod-nodeaffinity-required -n test
NAME READY STATUS RESTARTS AGE
pod-nodeaffinity-required 1/1 Running 0 21s
- requiredDuringSchedulingIgnoredDuringExecution: defines a mandatory scheduling rule. The Pod must be placed on a node satisfying the conditions below; the rule is enforced during scheduling, but if the node's labels change afterwards it is ignored and the Pod keeps running where it is.
- nodeSelectorTerms: defines one or more node selector terms; the Pod can be scheduled onto any node that satisfies at least one of them.
- matchExpressions: defines one or more match expressions, each with a key, an operator, and one or more values; a node must satisfy every expression within the term to qualify.
In this manifest, matchExpressions defines a single condition:
- key: nodeenv, the node label key to match.
- operator: In, meaning the label's value must appear in the given list.
- values: ["test","xxx"], meaning the node's nodeenv label must be test or xxx.
So the Pod's affinity rule requires a node whose nodeenv label is test or xxx; after the labeling above, only k8s-node2 (nodeenv=test) qualifies, which matches the scheduling result.
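A quick way to sanity-check a required rule before applying it is to run the same expression as a set-based label selector against the nodes; only nodes this command returns can host the Pod:
kubectl get nodes -l 'nodeenv in (test,xxx)'   # same key/operator/values as the manifest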
Soft limit (preferred) configuration
A soft limit only makes the scheduler prefer nodes carrying the matching label; if none exists, the Pod is still scheduled onto another node:
[root@k8s-master ~]# vim pod-nodeaffinity-preferred.yaml
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-nodeaffinity-preferred
  namespace: test
spec:
  containers:
  - name: nginx
    image: nginx:1.17.1
  affinity:
    nodeAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 1
        preference:
          matchExpressions:
          - key: nodeenv
            operator: In
            values: ["xxx","yyy"]
[root@k8s-master ~]# kubectl apply -f pod-nodeaffinity-preferred.yaml
pod/pod-nodeaffinity-preferred created
[root@k8s-master ~]# kubectl get pods pod-nodeaffinity-preferred -n test
NAME READY STATUS RESTARTS AGE
pod-nodeaffinity-preferred 0/1 ContainerCreating 0 32s
[root@k8s-master ~]# kubectl get pods pod-nodeaffinity-preferred -n test -w
NAME READY STATUS RESTARTS AGE
pod-nodeaffinity-preferred 0/1 ContainerCreating 0 36s
pod-nodeaffinity-preferred 1/1 Running 0 37s
[root@k8s-master ~]# kubectl describe pods pod-nodeaffinity-preferred -n test
Name: pod-nodeaffinity-preferred
Namespace: test
Priority: 0
Node: k8s-node1/192.168.58.232
Start Time: Thu, 16 Jan 2025 04:28:24 -0500
Labels: <none>
Annotations: cni.projectcalico.org/containerID: eab55d3f2b78987484123e4f4b21434f4f1323620026e3946e5fe77476e4a761
cni.projectcalico.org/podIP: 10.244.36.71/32
cni.projectcalico.org/podIPs: 10.244.36.71/32
Status: Running
IP: 10.244.36.71
IPs:
IP: 10.244.36.71
Containers:
nginx:
Container ID: docker://56be94e1afb802e91e86faf21ccce1925fa7f4204b418e6c5b8ac11024f75fc2
Image: nginx:1.17.1
Image ID: docker-pullable://nginx@sha256:b4b9b3eee194703fc2fa8afa5b7510c77ae70cfba567af1376a573a967c03dbb
Port: <none>
Host Port: <none>
State: Running
Started: Thu, 16 Jan 2025 04:29:00 -0500
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-zv8s7 (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
kube-api-access-zv8s7:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 49s default-scheduler Successfully assigned test/pod-nodeaffinity-preferred to k8s-node1
Normal Pulling <invalid> kubelet Pulling image "nginx:1.17.1"
Normal Pulled <invalid> kubelet Successfully pulled image "nginx:1.17.1" in 32.615153362s
Normal Created <invalid> kubelet Created container nginx
Normal Started <invalid> kubelet Started container nginx
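Neither node carries nodeenv=xxx or nodeenv=yyy, yet the Pod still runs (on k8s-node1): a preference only contributes to node scoring. When several preferences are given, the scheduler adds the weight of every matching term to a node's score (alongside its other scoring criteria) and places the Pod on the highest-scoring node. A sketch with two weighted terms (the disktype label is an assumption for illustration):
affinity:
  nodeAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
    - weight: 80                 # strong preference: nodeenv=test
      preference:
        matchExpressions:
        - key: nodeenv
          operator: In
          values: ["test"]
    - weight: 20                 # weaker preference: hypothetical disktype=ssd label
      preference:
        matchExpressions:
        - key: disktype
          operator: In
          values: ["ssd"]
Under these two rules, a node matching both terms scores 80 + 20 = 100; a node matching only the disktype term scores 20.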
PodAffinity (Pod Affinity)
- PodAffinity takes already-running Pods as the reference: it places a newly created Pod in the same topology domain as the reference Pods.
- PodAffinity configuration options:
  - requiredDuringSchedulingIgnoredDuringExecution (hard limit)
    - namespaces: the namespaces of the reference Pods.
    - topologyKey: the scheduling scope, e.g. kubernetes.io/hostname (each node is its own domain) or beta.kubernetes.io/os (domains distinguished by the node's operating system type); a zone-scoped sketch follows this list.
    - labelSelector: label selector matched against the reference Pods' labels.
      - matchExpressions: requirements listed by Pod label, each consisting of:
        - key: the label key
        - values: the label values
        - operator: the relation; supports In, NotIn, Exists, DoesNotExist
      - matchLabels: a {key: value} map, equivalent to matchExpressions entries using the In operator.
  - preferredDuringSchedulingIgnoredDuringExecution (soft limit)
    - weight: preference weight, in the range 1-100, giving the priority of this preferred rule.
    - podAffinityTerm, containing:
      - namespaces
      - topologyKey
      - labelSelector
        - matchExpressions (key, values, operator)
        - matchLabels
- topologyKey sets the scope within which the rule applies, for example:
  - kubernetes.io/hostname: the domain is the individual node.
  - beta.kubernetes.io/os: domains are distinguished by the node's operating system type.
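As referenced in the list above, a zone-scoped variant would look roughly like the sketch below. It assumes the nodes carry the standard topology.kubernetes.io/zone label, which is not shown for this article's cluster:
affinity:
  podAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
    - labelSelector:
        matchLabels:
          podenv: pro                            # co-locate with Pods labeled podenv=pro
      topologyKey: topology.kubernetes.io/zone   # same zone, not necessarily same node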
Hard limit (required) configuration
Create the reference Pod:
# vim pod-podaffinity-target.yaml
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-podaffinity-target
  namespace: test
  labels:
    podenv: pro
spec:
  containers:
  - name: nginx
    image: nginx:1.17.1
  nodeName: k8s-node1
[root@k8s-master ~]# kubectl apply -f pod-podaffinity-target.yaml
pod/pod-podaffinity-target created
[root@k8s-master ~]# kubectl describe pods pod-podaffinity-target -n test
Name: pod-podaffinity-target
Namespace: test
Priority: 0
Node: k8s-node1/192.168.58.232
Start Time: Thu, 16 Jan 2025 04:58:54 -0500
Labels: podenv=pro
Annotations: cni.projectcalico.org/containerID: 48a68cbe52064a7eb4c3be9db7e24dff3176382ed16d18e9ede5d30312e6425f
cni.projectcalico.org/podIP: 10.244.36.72/32
cni.projectcalico.org/podIPs: 10.244.36.72/32
Status: Running
IP: 10.244.36.72
IPs:
IP: 10.244.36.72
Containers:
nginx:
Container ID: docker://681c85e860b8e04189abd25d42de0e377cc297d73ef7965871631622704ecd19
Image: nginx:1.17.1
Image ID: docker-pullable://nginx@sha256:b4b9b3eee194703fc2fa8afa5b7510c77ae70cfba567af1376a573a967c03dbb
Port: <none>
Host Port: <none>
State: Running
Started: Thu, 16 Jan 2025 04:58:58 -0500
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-8vrrt (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
kube-api-access-8vrrt:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Pulled <invalid> kubelet Container image "nginx:1.17.1" already present on machine
Normal Created <invalid> kubelet Created container nginx
Normal Started <invalid> kubelet Started container nginx
Create pod-podaffinity-required:
[root@k8s-master ~]# vim pod-podaffinity-required.yaml
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-podaffinity-required
  namespace: test
spec:
  containers:
  - name: nginx
    image: nginx:1.17.1
  affinity:                        # affinity settings
    podAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: podenv            # select Pods carrying the podenv label
            operator: In
            values: ["xxx","yyy"]  # with value "xxx" or "yyy"
        topologyKey: kubernetes.io/hostname
[root@k8s-master ~]# kubectl apply -f pod-podaffinity-required.yaml
pod/pod-podaffinity-required created
[root@k8s-master ~]# kubectl describe pod pod-podaffinity-required -n test
Name: pod-podaffinity-required
Namespace: test
Priority: 0
Node: <none>
Labels: <none>
Annotations: <none>
Status: Pending
IP:
IPs: <none>
Containers:
nginx:
Image: nginx:1.17.1
Port: <none>
Host Port: <none>
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-l7kjw (ro)
Conditions:
Type Status
PodScheduled False
Volumes:
kube-api-access-l7kjw:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 39s default-scheduler 0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 node(s) didn't match pod affinity rules, 2 node(s) didn't match pod affinity/anti-affinity rules.
Warning FailedScheduling 38s default-scheduler 0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 node(s) didn't match pod affinity rules, 2 node(s) didn't match pod affinity/anti-affinity rules.
[root@k8s-master ~]# vim pod-podaffinity-required.yaml
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-podaffinity-required
  namespace: test
spec:
  containers:
  - name: nginx
    image: nginx:1.17.1
  affinity:                        # affinity settings
    podAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: podenv            # select Pods carrying the podenv label
            operator: In
            values: ["pro","yyy"]  # with value "pro" or "yyy"
        topologyKey: kubernetes.io/hostname
[root@k8s-master ~]# kubectl delete -f pod-podaffinity-required.yaml
pod "pod-podaffinity-required" deleted
[root@k8s-master ~]# kubectl apply -f pod-podaffinity-required.yaml
pod/pod-podaffinity-required created
[root@k8s-master ~]# kubectl describe pod pod-podaffinity-required -n test
Name: pod-podaffinity-required
Namespace: test
Priority: 0
Node: k8s-node1/192.168.58.232
Start Time: Thu, 16 Jan 2025 05:09:42 -0500
Labels: <none>
Annotations: cni.projectcalico.org/containerID: c459af771605b41fd74ae294344118acbdc2cd8fed3ae242982506c8eda9ad31
cni.projectcalico.org/podIP: 10.244.36.73/32
cni.projectcalico.org/podIPs: 10.244.36.73/32
Status: Running
IP: 10.244.36.73
IPs:
IP: 10.244.36.73
Containers:
nginx:
Container ID: docker://501cb02e356ddb23e7e11fd48ac0403f83221afbba9d18c608f3415533fe4290
Image: nginx:1.17.1
Image ID: docker-pullable://nginx@sha256:b4b9b3eee194703fc2fa8afa5b7510c77ae70cfba567af1376a573a967c03dbb
Port: <none>
Host Port: <none>
State: Running
Started: Thu, 16 Jan 2025 05:09:45 -0500
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-24cmw (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
kube-api-access-24cmw:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 6s default-scheduler Successfully assigned test/pod-podaffinity-required to k8s-node1
Normal Pulled <invalid> kubelet Container image "nginx:1.17.1" already present on machine
Normal Created <invalid> kubelet Created container nginx
Normal Started <invalid> kubelet Started container nginx
- Taints on nodes: one node in the cluster carries the taint node-role.kubernetes.io/master:, meaning it accepts no Pods that do not tolerate that taint. Our Pod declares no such toleration, so it cannot land on that node.
- Pod affinity rules: the first version of the Pod could not be scheduled because no node satisfied its requiredDuringSchedulingIgnoredDuringExecution pod affinity rule. Per the events, 2 of the 3 nodes failed the rule, i.e. neither worker node ran a Pod labeled podenv=xxx or podenv=yyy.
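For completeness: if a Pod did need to be eligible for the master node as well, it would additionally need a toleration for that taint. A minimal sketch, not used in this article's examples:
spec:
  tolerations:
  - key: node-role.kubernetes.io/master
    operator: Exists             # tolerate the taint whatever its value
    effect: NoSchedule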
PodAntiAffinity (Pod Anti-Affinity)
- PodAntiAffinity is the inverse affinity rule to PodAffinity: it ensures Pods carrying particular labels are not co-located in the same topology domain. It suits components that are naturally mutually exclusive, and Pods that should be spread out for fault tolerance and performance.
- PodAntiAffinity likewise takes running Pods as the reference, but places the new Pod outside the reference Pods' domain. Its configuration fields are the same as PodAffinity's.
[root@k8s-master ~]# vim pod-podantiaffinity-required.yaml
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-podantiaffinity-required
  namespace: test
spec:
  containers:
  - name: nginx
    image: nginx:1.17.1
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: podenv
            operator: In
            values: ["pro"]
        topologyKey: kubernetes.io/hostname
[root@k8s-master ~]# kubectl apply -f pod-podantiaffinity-required.yaml
pod/pod-podantiaffinity-required created
[root@k8s-master ~]# kubectl get pod pod-podantiaffinity-required -n test -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod-podantiaffinity-required 1/1 Running 0 9s 10.244.169.131 k8s-node2 <none> <none>
[root@k8s-master ~]# kubectl get pod pod-podantiaffinity-required -n test -o wide --show-labels
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES LABELS
pod-podantiaffinity-required 1/1 Running 0 19s 10.244.169.131 k8s-node2 <none> <none> <none>
[root@k8s-master ~]# kubectl get nodes --show-labels
NAME STATUS ROLES AGE VERSION LABELS
k8s-master Ready control-plane,master 21d v1.21.10 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-master,kubernetes.io/os=linux,node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.kubernetes.io/exclude-from-external-load-balancers=
k8s-node1 Ready <none> 21d v1.21.10 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-node1,kubernetes.io/os=linux,nodeenv=pro
k8s-node2 Ready <none> 21d v1.21.10 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-node2,kubernetes.io/os=linux,nodeenv=test
The new Pod must not share a node with any Pod labeled podenv=pro; since pod-podaffinity-target (podenv=pro) runs on k8s-node1, the new Pod is scheduled onto k8s-node2.
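The most common anti-affinity pattern is spreading the replicas of a single workload across nodes by matching the workload's own label. A sketch under that assumption (the Deployment name and app label are illustrative, not from this article's cluster):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-spread                 # hypothetical name
  namespace: test
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web-spread
  template:
    metadata:
      labels:
        app: web-spread
    spec:
      containers:
      - name: nginx
        image: nginx:1.17.1
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchLabels:
                app: web-spread    # each replica repels the others
            topologyKey: kubernetes.io/hostname
With the two worker nodes above, the two replicas land on different nodes; a third replica would stay Pending under this required rule, so the preferred variant is the softer choice when replicas may outnumber nodes.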