86. Pod Deployment Strategies
1. Cluster Scheduling
Cluster scheduling: the method by which pods are placed onto nodes.
1.1 The scheduling process:
The scheduler is the cluster's scheduling component; its main task is to place pods onto nodes.
1.2 Goals of automatic scheduling:
1. Fairness: every available node gets a chance to run pods.
2. Efficient resource utilization: the cluster's resources should be used as fully as possible.
3. Efficiency: scheduling must perform well and be able to handle large batches of pods.
4. Flexibility (customization): users can steer scheduling according to their own requirements.
1.3 The scheduling coordination mechanism:
Components cooperate through the list-watch mechanism, which keeps their data in sync while keeping the components decoupled.
list-watch:
watch: the listen/notify mechanism in k8s
get/list: fetch resources
Each component lists resources from the apiserver and then watches it for changes; this watch mechanism between the apiserver and the other components keeps them synchronized.
2. The Scheduling Process
2.1 The default scheduler works in two phases:
1. Predicate (filtering) policy: first filter nodes by hard conditions.
Pod resource fit: does the node have enough resources to satisfy what the pod requests?
Pod host fit: if a specific node is requested, check whether the cluster has a node that satisfies the requirement.
Pod host-port fit: check whether the ports already in use on the node conflict with the ports the pod requests.
Pod disk fit: the volumes mounted by different pods must not conflict.
If no node satisfies the predicates, the pod enters the Pending state.
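The resource-fit predicate keys off the requests a pod declares. As a minimal sketch (the pod name and request values below are illustrative, not taken from this transcript):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: resource-demo        # hypothetical name, for illustration only
spec:
  containers:
  - name: nginx
    image: nginx:1.22
    resources:
      requests:
        cpu: "500m"          # nodes with less than 0.5 CPU unreserved are filtered out
        memory: "256Mi"      # likewise for memory
```

If no node can satisfy these requests, the pod stays Pending, matching the behavior described above.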
2. Priority (scoring) policy: from the nodes that passed filtering, select the optimal one.
Least-requested priority: compute each node's weight from its CPU and memory utilization; the lower the utilization, the higher the weight and the more likely the node is selected. This favors nodes with less resource usage.
Balanced resource allocation: weight nodes by the ratio between CPU and memory utilization; the closer the CPU-to-memory utilization ratio is to 1, the better the score.
Node A: CPU 50%, memory 50% (ratio 1, higher priority)
Node B: CPU 10%, memory 20%
Image locality priority: a node that already holds the required image locally is more likely to be selected.
Deploy nginx1 with default (automatic) scheduling:
[root@master01 k8s-yaml]# vim test11.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx1
  name: nginx1
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx1
  template:
    metadata:
      labels:
        app: nginx1
    spec:
      containers:
      - name: nginx
        image: nginx:1.22
[root@master01 k8s-yaml]# kubectl apply -f test11.yaml
deployment.apps/nginx1 created
[root@master01 k8s-yaml]# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx1-654cb56c4-d7tdz 1/1 Running 0 12s 10.244.2.137 node02 <none> <none>
nginx1-654cb56c4-mp8zz 1/1 Running 0 14s 10.244.2.136 node02 <none> <none>
nginx1-654cb56c4-qnlxw 1/1 Running 0 13s 10.244.1.98 node01 <none> <none>
2.2 User-directed node placement (important):
1. Forced node scheduling:
nodeName forcibly selects one node; the scheduler and its algorithms are no longer involved, and the pod is deployed there directly.
[root@master01 k8s-yaml]# kubectl get nodes --show-labels
NAME STATUS ROLES AGE VERSION LABELS
master01 Ready control-plane,master 8d v1.20.15 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=master01,kubernetes.io/os=linux,node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=
node01 Ready <none> 8d v1.20.15 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=node01,kubernetes.io/os=linux,test1=a,test2=b,test3=b
node02 Ready <none> 8d v1.20.15 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=node02,kubernetes.io/os=linux,test2=b
Force placement onto node01:
[root@master01 k8s-yaml]# vim test11.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx1
  name: nginx1
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx1
  template:
    metadata:
      labels:
        app: nginx1
    spec:
      containers:
      - name: nginx
        image: nginx:1.22
      nodeName: node01
[root@master01 k8s-yaml]# kubectl apply -f test11.yaml --force
[root@master01 k8s-yaml]# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx1-95bf57c7f-nrbjb 1/1 Running 0 48s 10.244.1.101 node01 <none> <none>
nginx1-95bf57c7f-phxx5 1/1 Running 0 50s 10.244.1.99 node01 <none> <none>
nginx1-95bf57c7f-zl78g 1/1 Running 0 49s 10.244.1.100 node01 <none> <none>
2.3 Viewing node labels
[root@master01 k8s-yaml]# kubectl get nodes --show-labels
Labels are key=value pairs.
A node can carry multiple labels, separated by commas.
[root@master01 k8s-yaml]# kubectl label nodes node01 test=a ##add a label
node/node01 labeled
[root@master01 k8s-yaml]# kubectl label nodes node01 test=b --overwrite ##modify a label
node/node01 labeled
[root@master01 k8s-yaml]# kubectl label nodes node01 test- ##delete a label (trailing dash)
node/node01 labeled
Custom labels:
[root@master01 k8s-yaml]# kubectl label nodes node01 test1=a
node/node01 labeled
[root@master01 k8s-yaml]# kubectl label nodes node02 test2=b
node/node02 labeled
2.4 Deploying by node label (nodeSelector). It is a matching mechanism: any node whose labels match can receive the pods.
Question: does label-based node selection still need the scheduler and its algorithms?
Yes. The scheduler and its algorithms still perform the assignment.
[root@master01 k8s-yaml]# vim test12.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx1
  name: nginx1
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx1
  template:
    metadata:
      labels:
        app: nginx1
    spec:
      containers:
      - name: nginx
        image: nginx:1.22
      nodeSelector:
        test2: b
[root@master01 k8s-yaml]# kubectl label nodes node01 test3=b
node/node01 labeled
[root@master01 k8s-yaml]# kubectl apply -f test12.yaml
[root@master01 k8s-yaml]# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx1-76bd776789-74j2f 1/1 Running 0 21s 10.244.2.138 node02 <none> <none>
nginx1-76bd776789-qkjtc 1/1 Running 0 19s 10.244.2.139 node02 <none> <none>
nginx1-76bd776789-z7885 1/1 Running 0 18s 10.244.2.140 node02 <none> <none>
[root@master01 k8s-yaml]# kubectl label nodes node01 test2=b
[root@master01 k8s-yaml]# kubectl apply -f test12.yaml
[root@master01 k8s-yaml]# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx1-76bd776789-5dssm 1/1 Running 0 9s 10.244.2.141 node02 <none> <none>
nginx1-76bd776789-mpk42 1/1 Running 0 9s 10.244.1.102 node01 <none> <none>
nginx1-76bd776789-zpbkp 1/1 Running 0 9s 10.244.2.142 node02 <none> <none>
##Both nodes now carry the custom label, so the pods are spread across the two nodes.
3. Affinity
Node affinity: nodeAffinity
Pod affinity: podAffinity
Soft and hard policies:
Soft policy:
preferredDuringSchedulingIgnoredDuringExecution
A soft policy tries to satisfy the condition when choosing a node, but will still deploy if the condition cannot be met.
Hard policy:
requiredDuringSchedulingIgnoredDuringExecution
A hard policy must satisfy the specified node condition, otherwise the pod stays Pending.
1. If the condition is not met, the pod is definitely Pending.
2. If the condition is met, scheduling proceeds.
3. The scheduler still performs the assignment: different nodes can carry the same label, and the scheduler picks among the matching ones.
Selection is based on node labels and pod labels.
Key-value operators:
1. In: the label's value is in the list (equality match)
2. NotIn: the label's value is not in the list (logical negation)
3. Gt: greater than
4. Lt: less than
5. Exists: the label key exists
6. DoesNotExist: the label key does not exist
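Exists and DoesNotExist match on the label key alone, so no values list may be given. A hedged sketch of an Exists expression (reusing the test1 key from the nodes above):

```yaml
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
      - matchExpressions:
        - key: test1
          operator: Exists   # matches any node that has the key test1, whatever its value
```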
3.1 Affinity:
topologyKey defines the node topology domain and reflects the relationship between pods and nodes.
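Besides custom labels like test2, a common choice for topologyKey is the built-in kubernetes.io/hostname label, which makes every node its own topology domain. A sketch, not one of this transcript's files:

```yaml
affinity:
  podAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
    - labelSelector:
        matchLabels:
          app: nginx1
      topologyKey: kubernetes.io/hostname   # co-locate with an app=nginx1 pod on the same node
```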
Hard policy:
[root@master01 k8s-yaml]# vim test12.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx1
  name: nginx1
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx1
  template:
    metadata:
      labels:
        app: nginx1
    spec:
      containers:
      - name: nginx
        image: nginx:1.22
      affinity:
      # the affinity field
        nodeAffinity:
        # node affinity
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: test1
                operator: In
                values:
                - a
# Hard node-affinity policy: the pod must be placed on a node carrying the label test1=a.
[root@master01 k8s-yaml]# kubectl apply -f test12.yaml
[root@master01 k8s-yaml]# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx1-8f7b9d885-4ds5z 1/1 Running 0 21s 10.244.1.103 node01 <none> <none>
nginx1-8f7b9d885-c9zbc 1/1 Running 0 19s 10.244.1.105 node01 <none> <none>
nginx1-8f7b9d885-cwgdj 1/1 Running 0 20s 10.244.1.104 node01 <none> <none>
[root@master01 k8s-yaml]# vim test12.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx1
  name: nginx1
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx1
  template:
    metadata:
      labels:
        app: nginx1
    spec:
      containers:
      - name: nginx
        image: nginx:1.22
      affinity:
      # the affinity field
        nodeAffinity:
        # node affinity
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: test1
                operator: NotIn
                values:
                - a
# Hard node-affinity policy: the node's test1 label value must not be a.
[root@master01 k8s-yaml]# kubectl apply -f test12.yaml
[root@master01 k8s-yaml]# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx1-7f4db4ff6b-4pg5w 1/1 Running 0 75s 10.244.2.145 node02 <none> <none>
nginx1-7f4db4ff6b-dslv4 1/1 Running 0 76s 10.244.2.143 node02 <none> <none>
nginx1-7f4db4ff6b-lmrjk 1/1 Running 0 76s 10.244.2.144 node02 <none> <none>
Experiment:
[root@master01 k8s-yaml]# kubectl get nodes --show-labels
NAME STATUS ROLES AGE VERSION LABELS
master01 Ready control-plane,master 8d v1.20.15 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=master01,kubernetes.io/os=linux,node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=
node01 Ready <none> 8d v1.20.15 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=node01,kubernetes.io/os=linux,test1=a,test2=b,test3=b
node02 Ready <none> 8d v1.20.15 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=node02,kubernetes.io/os=linux,test1=a,test2=b
[root@master01 k8s-yaml]# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx1-7f4db4ff6b-99l2r 0/1 Pending 0 10s <none> <none> <none> <none>
nginx1-7f4db4ff6b-gjfkd 0/1 Pending 0 10s <none> <none> <none> <none>
nginx1-7f4db4ff6b-hsks5 0/1 Pending 0 10s <none> <none> <none> <none>
[root@master01 k8s-yaml]# kubectl label nodes node02 test1-
node/node02 labeled
[root@master01 k8s-yaml]# kubectl get nodes --show-labels
NAME STATUS ROLES AGE VERSION LABELS
master01 Ready control-plane,master 8d v1.20.15 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=master01,kubernetes.io/os=linux,node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=
node01 Ready <none> 8d v1.20.15 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=node01,kubernetes.io/os=linux,test1=a,test2=b,test3=b
node02 Ready <none> 8d v1.20.15 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=node02,kubernetes.io/os=linux,test2=b
[root@master01 k8s-yaml]# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx1-7f4db4ff6b-99l2r 1/1 Running 0 93s 10.244.2.146 node02 <none> <none>
nginx1-7f4db4ff6b-gjfkd 1/1 Running 0 93s 10.244.2.147 node02 <none> <none>
nginx1-7f4db4ff6b-hsks5 1/1 Running 0 93s 10.244.2.148 node02 <none> <none>
Once the condition becomes satisfied, the pending pods are scheduled automatically; no forced restart is needed, just check again with:
[root@master01 k8s-yaml]# kubectl get pod -o wide
Soft policy:
Single weight
[root@master01 k8s-yaml]# vim test12.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx1
  name: nginx1
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx1
  template:
    metadata:
      labels:
        app: nginx1
    spec:
      containers:
      - name: nginx
        image: nginx:1.22
      affinity:
      # the affinity field
        nodeAffinity:
        # node affinity
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 1
            preference:
              matchExpressions:
              - key: test1
                operator: NotIn
                values:
                - a
# Soft node-affinity policy: prefer nodes that do not carry the label test1=a.
[root@master01 k8s-yaml]# kubectl apply -f test12.yaml
[root@master01 k8s-yaml]# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx1-7b45d5b8f-hwxgj 1/1 Running 0 17s 10.244.2.149 node02 <none> <none>
nginx1-7b45d5b8f-x24j4 1/1 Running 0 15s 10.244.2.150 node02 <none> <none>
nginx1-7b45d5b8f-zbdvq 1/1 Running 0 14s 10.244.1.106 node01 <none> <none>
##A soft constraint only nudges scheduling; it cannot force placement.
##podAffinity
[root@master01 k8s-yaml]# vim test13.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx2
  name: nginx2
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx2
  template:
    metadata:
      labels:
        app: nginx2
    spec:
      containers:
      - name: nginx
        image: nginx:1.22
      affinity:
        podAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 1
            podAffinityTerm:
              labelSelector:
                matchExpressions:
                - key: app
                  operator: In
                  values:
                  - nginx1
              topologyKey: test3
[root@master01 k8s-yaml]# kubectl get nodes --show-labels
NAME STATUS ROLES AGE VERSION LABELS
master01 Ready control-plane,master 8d v1.20.15 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=master01,kubernetes.io/os=linux,node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=
node01 Ready <none> 8d v1.20.15 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=node01,kubernetes.io/os=linux,memory=1000,test1=a,test3=b
node02 Ready <none> 8d v1.20.15 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=node02,kubernetes.io/os=linux,test2=b,xy102=98
[root@master01 k8s-yaml]# kubectl apply -f test13.yaml
[root@master01 k8s-yaml]# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx1-654cb56c4-7t69j 1/1 Running 0 153m 10.244.1.110 node01 <none> <none>
nginx1-654cb56c4-d8q56 1/1 Running 0 153m 10.244.1.109 node01 <none> <none>
nginx1-654cb56c4-q89zj 1/1 Running 0 153m 10.244.2.152 node02 <none> <none>
nginx2-6b6584f58f-2dr9n 1/1 Running 0 4m27s 10.244.1.120 node01 <none> <none>
nginx2-6b6584f58f-7vntk 1/1 Running 0 4m27s 10.244.1.121 node01 <none> <none>
nginx2-6b6584f58f-zbhv5 1/1 Running 0 4m27s 10.244.2.186 node02 <none> <none>
Multiple weights
[root@master01 k8s-yaml]# vim test12.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx1
  name: nginx1
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx1
  template:
    metadata:
      labels:
        app: nginx1
    spec:
      containers:
      - name: nginx
        image: nginx:1.22
      affinity:
      # the affinity field
        nodeAffinity:
        # node affinity
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 1
            preference:
              matchExpressions:
              - key: test1
                operator: NotIn
                values:
                - a
          - weight: 2
            preference:
              matchExpressions:
              - key: test2
                operator: NotIn
                values:
                - b
# Multiple soft policies must sit in one preferred... list (a YAML map may not repeat the key)
# and are ranked by weight; the higher the weight, the higher the priority.
[root@master01 k8s-yaml]# kubectl apply -f test12.yaml
deployment.apps/nginx1 configured
[root@master01 k8s-yaml]# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx1-698c85f477-475q8 1/1 Running 0 17s 10.244.1.107 node01 <none> <none>
nginx1-698c85f477-jssbs 1/1 Running 0 15s 10.244.2.151 node02 <none> <none>
nginx1-698c85f477-pc9m7 1/1 Running 0 14s 10.244.1.108 node01 <none> <none>
#With multiple soft policies, the weights decide priority: weight=2 > weight=1 steers more pods onto node01, but placement there is preferred, not guaranteed.
[root@master01 k8s-yaml]# vim test12.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx1
  name: nginx1
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx1
  template:
    metadata:
      labels:
        app: nginx1
    spec:
      containers:
      - name: nginx
        image: nginx:1.22
      affinity:
      # the affinity field
        nodeAffinity:
        # node affinity
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 1
            preference:
              matchExpressions:
              - key: test1
                operator: NotIn
                values:
                - a
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: test2
                operator: NotIn
                values:
                - b
# When a hard policy is already declared, there is usually no need to declare a soft policy as well.
[root@master01 k8s-yaml]# kubectl apply -f test12.yaml
deployment.apps/nginx1 configured
[root@master01 k8s-yaml]# kubectl get pod -o wide
#Matches pods labeled app=nginx1 on nodes that carry the label key test2
[root@master01 k8s-yaml]# kubectl describe deployments.apps nginx1
Labels: app=nginx1
[root@master01 k8s-yaml]# kubectl get nodes --show-labels
NAME STATUS ROLES AGE VERSION LABELS
master01 Ready control-plane,master 8d v1.20.15 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=master01,kubernetes.io/os=linux,node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=
node01 Ready <none> 8d v1.20.15 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=node01,kubernetes.io/os=linux,test1=a,test3=b
node02 Ready <none> 8d v1.20.15 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=node02,kubernetes.io/os=linux,test2=b
[root@master01 k8s-yaml]# vim test13.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx2
  name: nginx2
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx2
  template:
    metadata:
      labels:
        app: nginx2
    spec:
      containers:
      - name: nginx
        image: nginx:1.22
      affinity:
        podAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
            # select by pod label
              matchExpressions:
              - key: app
                operator: In
                values:
                - nginx1
            topologyKey: test2
# Match pods labeled app=nginx1, restricted to nodes that carry the topology label key test2.
[root@master01 k8s-yaml]# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx1-654cb56c4-7t69j 1/1 Running 0 4m36s 10.244.1.110 node01 <none> <none>
nginx1-654cb56c4-d8q56 1/1 Running 0 4m36s 10.244.1.109 node01 <none> <none>
nginx1-654cb56c4-q89zj 1/1 Running 0 4m36s 10.244.2.152 node02 <none> <none>
nginx2-8c5695d94-jx7sr 1/1 Running 0 4s 10.244.2.155 node02 <none> <none>
nginx2-8c5695d94-qtcf6 1/1 Running 0 4s 10.244.2.153 node02 <none> <none>
nginx2-8c5695d94-xwg6b 1/1 Running 0 4s 10.244.2.154 node02 <none> <none>
3.2 Anti-affinity:
Pod anti-affinity: podAntiAffinity
[root@master01 k8s-yaml]# kubectl get nodes --show-labels
NAME STATUS ROLES AGE VERSION LABELS
master01 Ready control-plane,master 8d v1.20.15 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=master01,kubernetes.io/os=linux,node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=
node01 Ready <none> 8d v1.20.15 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=node01,kubernetes.io/os=linux,test1=a,test3=b
node02 Ready <none> 8d v1.20.15 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=node02,kubernetes.io/os=linux,test2=b
[root@master01 k8s-yaml]# vim test13.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx2
  name: nginx2
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx2
  template:
    metadata:
      labels:
        app: nginx2
    spec:
      containers:
      - name: nginx
        image: nginx:1.22
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
            # select by pod label
              matchExpressions:
              - key: app
                operator: In
                values:
                - nginx1
            topologyKey: test2
[root@master01 k8s-yaml]# kubectl apply -f test13.yaml
[root@master01 k8s-yaml]# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx1-654cb56c4-7t69j 1/1 Running 0 13m 10.244.1.110 node01 <none> <none>
nginx1-654cb56c4-d8q56 1/1 Running 0 13m 10.244.1.109 node01 <none> <none>
nginx1-654cb56c4-q89zj 1/1 Running 0 13m 10.244.2.152 node02 <none> <none>
nginx2-5784d9dcb-bnxcb 1/1 Running 0 72s 10.244.1.111 node01 <none> <none>
nginx2-5784d9dcb-f6pvb 1/1 Running 0 70s 10.244.1.112 node01 <none> <none>
nginx2-5784d9dcb-rpsjk 1/1 Running 0 69s 10.244.1.113 node01 <none> <none>
#The pods can only be placed where the topology domain (label key test2) does not already run a pod labeled app=nginx1.
#In pod affinity and anti-affinity, it is the topology-domain label that actually decides placement.
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 22m (x2 over 22m) default-scheduler 0/3 nodes are available: 3 node(s) didn't match pod affinity/anti-affinity, 3 node(s) didn't match pod anti-affinity rules.
Normal Scheduled 22m default-scheduler Successfully assigned default/nginx2-5bdc766cf7-sk69d to master01
#The reason these pods could still be scheduled lies with the scheduler itself: its core job is to get pods deployed, and as long as deployment is possible it may end up placing the pod even though rules exist.
#When resources are insufficient, the scheduler may well ignore all of these constraints so that pods can still be deployed.
Soft policy: a preference, satisfied where possible. Typical use: steering more resources toward the nodes that need them.
Hard policy: the condition must be met. Typical use: special cases such as a node failure while a business update must proceed, forcing resources onto a designated node.
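A typical use of a hard podAntiAffinity rule is spreading replicas so that no two land on the same node. A sketch using the built-in hostname label (not taken from this transcript):

```yaml
affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
    - labelSelector:
        matchLabels:
          app: nginx2
      topologyKey: kubernetes.io/hostname   # no two app=nginx2 pods share a node
```

With more replicas than schedulable nodes, the surplus pods stay Pending.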
[root@master01 k8s-yaml]# vim test13.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx2
  name: nginx2
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx2
  template:
    metadata:
      labels:
        app: nginx2
    spec:
      containers:
      - name: nginx
        image: nginx:1.22
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: test3
                operator: NotIn
                values:
                - b
[root@master01 k8s-yaml]# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx1-654cb56c4-7t69j 1/1 Running 0 24m 10.244.1.110 node01 <none> <none>
nginx1-654cb56c4-d8q56 1/1 Running 0 24m 10.244.1.109 node01 <none> <none>
nginx1-654cb56c4-q89zj 1/1 Running 0 24m 10.244.2.152 node02 <none> <none>
nginx2-5d77fdc494-2lxc6 1/1 Running 0 39s 10.244.2.157 node02 <none> <none>
nginx2-5d77fdc494-7hswd 1/1 Running 0 38s 10.244.2.158 node02 <none> <none>
nginx2-5d77fdc494-9ckqj 1/1 Running 0 40s 10.244.2.156 node02 <none> <none>
Using Gt:
[root@master01 k8s-yaml]# kubectl label nodes node01 memory=1000
node/node01 labeled
[root@master01 k8s-yaml]# kubectl get nodes --show-labels
NAME STATUS ROLES AGE VERSION LABELS
master01 Ready control-plane,master 8d v1.20.15 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=master01,kubernetes.io/os=linux,node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=
node01 Ready <none> 8d v1.20.15 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=node01,kubernetes.io/os=linux,memory=1000,test1=a,test3=b
node02 Ready <none> 8d v1.20.15 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=node02,kubernetes.io/os=linux,test2=b
[root@master01 k8s-yaml]# vim test13.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx2
  name: nginx2
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx2
  template:
    metadata:
      labels:
        app: nginx2
    spec:
      containers:
      - name: nginx
        image: nginx:1.22
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: memory
                operator: Gt
                values:
                - "500"
[root@master01 k8s-yaml]# kubectl apply -f test13.yaml
[root@master01 k8s-yaml]# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx1-654cb56c4-7t69j 1/1 Running 0 43m 10.244.1.110 node01 <none> <none>
nginx1-654cb56c4-d8q56 1/1 Running 0 43m 10.244.1.109 node01 <none> <none>
nginx1-654cb56c4-q89zj 1/1 Running 0 43m 10.244.2.152 node02 <none> <none>
nginx2-79b79dd568-gpjx5 1/1 Running 0 56s 10.244.1.116 node01 <none> <none>
nginx2-79b79dd568-rhjs4 1/1 Running 0 58s 10.244.1.114 node01 <none> <none>
nginx2-79b79dd568-trn54 1/1 Running 0 57s 10.244.1.115 node01 <none> <none>
[root@master01 k8s-yaml]# vim test13.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx2
  name: nginx2
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx2
  template:
    metadata:
      labels:
        app: nginx2
    spec:
      containers:
      - name: nginx
        image: nginx:1.22
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: memory
                operator: DoesNotExist
[root@master01 k8s-yaml]# kubectl apply -f test13.yaml
[root@master01 k8s-yaml]# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx1-654cb56c4-7t69j 1/1 Running 0 46m 10.244.1.110 node01 <none> <none>
nginx1-654cb56c4-d8q56 1/1 Running 0 46m 10.244.1.109 node01 <none> <none>
nginx1-654cb56c4-q89zj 1/1 Running 0 46m 10.244.2.152 node02 <none> <none>
nginx2-585c559d74-ccqt6 1/1 Running 0 19s 10.244.2.159 node02 <none> <none>
nginx2-585c559d74-k4vxw 1/1 Running 0 18s 10.244.2.160 node02 <none> <none>
nginx2-585c559d74-zv6z5 1/1 Running 0 17s 10.244.2.161 node02 <none> <none>
4. Homework:
1. Add a pod probe: a readiness probe using tcpSocket.
2. Mount the container path /usr/share/nginx/html to the node path /opt/html.
3. Node affinity: prefer to deploy on node01.
4. Pod affinity: prefer nodes that run a pod labeled app=nginx1, with topology key xy102.
5. Soft policy: prefer nodes whose label key xy102 has a value less than 100.
[root@master01 k8s-yaml]# kubectl get nodes --show-labels
NAME STATUS ROLES AGE VERSION LABELS
master01 Ready control-plane,master 8d v1.20.15 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=master01,kubernetes.io/os=linux,node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=
node01 Ready <none> 8d v1.20.15 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=node01,kubernetes.io/os=linux,memory=1000,test1=a,test3=b
node02 Ready <none> 8d v1.20.15 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=node02,kubernetes.io/os=linux,test2=b,xy102=98
[root@master01 k8s-yaml]# vim test5.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx2
  name: nginx2
spec:
  replicas: 12
  selector:
    matchLabels:
      app: nginx2
  template:
    metadata:
      labels:
        app: nginx2
    spec:
      containers:
      - name: nginx
        image: nginx:1.22
        volumeMounts:
        - name: data-v
          mountPath: /usr/share/nginx/html
        readinessProbe:
          tcpSocket:
            port: 80
      volumes:
      - name: data-v
        hostPath:
          path: /opt/html
          type: DirectoryOrCreate
      affinity:
      # the affinity field
        nodeAffinity:
        # node affinity: both soft policies go into one preferred... list
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 3
            preference:
              matchExpressions:
              - key: test1
                operator: In
                values:
                - a
          - weight: 1
            preference:
              matchExpressions:
              - key: xy102
                operator: Lt
                values:
                - "100"
        podAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 1
            podAffinityTerm:
              labelSelector:
                matchExpressions:
                - key: app
                  operator: In
                  values:
                  - nginx1
              topologyKey: xy102
[root@master01 k8s-yaml]# kubectl apply -f test5.yaml
[root@master01 k8s-yaml]# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx1-654cb56c4-7t69j 1/1 Running 0 87m 10.244.1.110 node01 <none> <none>
nginx1-654cb56c4-d8q56 1/1 Running 0 87m 10.244.1.109 node01 <none> <none>
nginx1-654cb56c4-q89zj 1/1 Running 0 87m 10.244.2.152 node02 <none> <none>
nginx2-6545dc89c4-4gdmz 1/1 Running 0 90s 10.244.1.119 node01 <none> <none>
nginx2-6545dc89c4-675ch 1/1 Running 0 90s 10.244.2.176 node02 <none> <none>
nginx2-6545dc89c4-8kz6b 1/1 Running 0 90s 10.244.2.184 node02 <none> <none>
nginx2-6545dc89c4-b66r6 1/1 Running 0 90s 10.244.2.183 node02 <none> <none>
nginx2-6545dc89c4-fc8bj 1/1 Running 0 90s 10.244.2.178 node02 <none> <none>
nginx2-6545dc89c4-fp6nf 1/1 Running 0 90s 10.244.2.177 node02 <none> <none>
nginx2-6545dc89c4-klxfc 1/1 Running 0 90s 10.244.2.180 node02 <none> <none>
nginx2-6545dc89c4-llp4c 1/1 Running 0 90s 10.244.2.182 node02 <none> <none>
nginx2-6545dc89c4-lw8l5 1/1 Running 0 90s 10.244.2.181 node02 <none> <none>
nginx2-6545dc89c4-nrtwx 1/1 Running 0 90s 10.244.2.185 node02 <none> <none>
nginx2-6545dc89c4-ssqrn 1/1 Running 0 90s 10.244.1.118 node01 <none> <none>
nginx2-6545dc89c4-w49lx 1/1 Running 0 90s 10.244.2.179 node02 <none> <none>