Taints, Tolerations, and Volumes
Taints
Concept: once a taint is set on a node, the scheduler avoids that node when placing pods (unless the pod tolerates the taint).
Taint format:
key:effect  or  key=value:effect
effect is the type of the taint.
[root@master01 ~]# kubectl taint node node01 test1=1:effect
[root@master01 ~]# kubectl taint node node01 test1:effect
There are three taint effects:
1. NoSchedule: once a node carries this taint, the scheduler will not place pods on it.
2. PreferNoSchedule: the scheduler tries to avoid placing pods on the node.
3. NoExecute: the scheduler will not place pods on the node, and pods already running on it are evicted to other nodes.
View taints:
[root@master01 k8s-yaml]# kubectl describe nodes master01
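To show just the taint line, the describe output can be filtered (a small convenience sketch):
[root@master01 k8s-yaml]# kubectl describe nodes master01 | grep -i taint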
Add a taint:
[root@master01 k8s-yaml]# kubectl taint node master01 node-role.kubernetes.io/master:NoSchedule
[root@master01 k8s-yaml]# kubectl taint node node01 test1=2:PreferNoSchedule
[root@master01 k8s-yaml]# kubectl taint node node02 test1=2:NoExecute
Remove a taint (append - to the taint to delete it):
[root@master01 ~]# kubectl taint node master01 node-role.kubernetes.io/master:NoSchedule-
Modify a taint:
[root@master01 k8s-yaml]# kubectl taint node node01 test1=3:PreferNoSchedule --overwrite
Tolerations
Tolerations can only be declared in a YAML manifest, in the pod's spec section (at the same level as containers).
Even if a node carries taints, the scheduler can still place pods on it, provided the pods tolerate those taints.
Equal
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx1
  name: nginx1
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx1
  template:
    metadata:
      labels:
        app: nginx1
    spec:
      containers:
      - name: nginx
        image: nginx:1.22
      tolerations:
      - key: "test1"
        operator: "Equal"
        value: "2"
        effect: "NoSchedule"
#operator takes only two values: Equal (equals) and Exists (contains)
#Equal: the toleration's key, value, and effect must all exactly match the taint on the node to take effect.
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx1
  name: nginx1
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx1
  template:
    metadata:
      labels:
        app: nginx1
    spec:
      containers:
      - name: nginx
        image: nginx:1.22
      tolerations:
      - key: "test1"
        operator: "Equal"
        value: "2"
        effect: "NoExecute"
        tolerationSeconds: 10
#tolerationSeconds (only meaningful with effect NoExecute) specifies how long, in seconds, the pod may keep running on the node after it is deployed before being evicted.
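With tolerationSeconds: 10, pods that land on the NoExecute node run for roughly 10 seconds and are then evicted and recreated by the Deployment; you can watch the cycle live (a sketch; -w keeps the listing open):
[root@master01 k8s-yaml]# kubectl get pod -o wide -w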
Exists
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx1
  name: nginx1
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx1
  template:
    metadata:
      labels:
        app: nginx1
    spec:
      containers:
      - name: nginx
        image: nginx:1.22
      tolerations:
      - key: "test1"
        operator: "Exists"
        effect: "NoExecute"
#Exists: if key is omitted, any node whose taint has the NoExecute effect can host the pod.
#master01 test1=1:NoExecute test1=2:NoExecute test1=3:NoExecute
#Likewise, if effect is omitted, only the key is matched; value and effect are ignored.
Cordon and Drain
cordon: directly marks a node as unschedulable; the scheduler will not place pods on that node.
Mark a node unschedulable:
[root@master01 k8s-yaml]# kubectl cordon node02
[root@master01 k8s-yaml]# kubectl get nodes
NAME       STATUS                     ROLES                  AGE   VERSION
master01   Ready                      control-plane,master   45h   v1.20.15
node01     Ready                      <none>                 45h   v1.20.15
node02     Ready,SchedulingDisabled   <none>                 45h   v1.20.15
Remove the unschedulable mark:
[root@master01 k8s-yaml]# kubectl uncordon node02
Draining:
drain
Marks the node as unschedulable and, in addition, evicts the pods on it to other nodes.
Drain a node:
[root@master01 k8s-yaml]# kubectl drain node02 --ignore-daemonsets --delete-local-data --force
--ignore-daemonsets: ignore pods created by a DaemonSet (they are not evicted)
--delete-local-data: if pods on the drained node use local storage (a local mount point such as emptyDir), they are force-killed together with that data
--force: pods that were not created by a controller are force-deleted
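Once the drain completes, the node shows SchedulingDisabled and its pods have been rescheduled elsewhere; a quick check (a sketch using the node names from the examples above):
[root@master01 k8s-yaml]# kubectl get nodes
[root@master01 k8s-yaml]# kubectl get pod -o wide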
Undo the drain (a drained node stays cordoned, so uncordon restores scheduling):
[root@master01 k8s-yaml]# kubectl uncordon node02
Summary:
The master node normally acts as the cluster's scheduler and should run as few pods as possible.
But to maximize resource usage, the master can also host pods:
1. Set a taint of type PreferNoSchedule so that pods land on the master only when necessary.
2. If the cluster is very small, the master can also serve directly as a worker node for pod deployment.
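For example, to relax the default master taint into the softer type (a sketch built from the taint commands shown earlier):
[root@master01 k8s-yaml]# kubectl taint node master01 node-role.kubernetes.io/master:NoSchedule-
[root@master01 k8s-yaml]# kubectl taint node master01 node-role.kubernetes.io/master:PreferNoSchedule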
Volumes
The life cycle of containers and pods is finite; once they restart or crash, their data is lost.
To keep data intact, we mount storage between the containers in a pod and the node.
volume
emptyDir volume
An emptyDir volume is created when the pod is assigned to a node, and it exists for as long as the pod runs on that node.
The volume is not shared with the host; it is shared only among the containers inside the pod. Once the pod restarts, the data in the emptyDir volume is deleted with it.
Use case: mainly for communication between components inside a pod; not for sensitive data.
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx1
  name: nginx1
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx1
  template:
    metadata:
      labels:
        app: nginx1
    spec:
      containers:
      - name: nginx1
        image: nginx:1.22
        volumeMounts:
        - name: html
          mountPath: /usr/share/nginx/html   #path inside the container
      - name: nginx2
        image: nginx:1.22
        volumeMounts:
        - name: html
          mountPath: /data/
        command: ["/bin/bash","-c","while true; do echo $(date) >> /data/index.html; sleep 2; done"]
      volumes:
      - name: html
        emptyDir: {}
#Container nginx1's /usr/share/nginx/html and container nginx2's /data/ are mounted to the same emptyDir volume; once the pod restarts, the data is lost.
[root@master01 k8s-yaml]# kubectl get pod -o wide
NAME                      READY   STATUS    RESTARTS   AGE     IP             NODE       NOMINATED NODE   READINESS GATES
nginx1-68d4d8db9d-5fn8v   2/2     Running   0          2m37s   10.244.0.7     master01   <none>           <none>
nginx1-68d4d8db9d-95qvg   2/2     Running   0          2m37s   10.244.2.206   node02     <none>           <none>
nginx1-68d4d8db9d-gmzl9   2/2     Running   0          2m37s   10.244.1.169   node01     <none>           <none>
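To verify the shared volume, read the file that nginx2 keeps writing from inside nginx1 (a sketch; substitute any pod name from the listing above):
[root@master01 k8s-yaml]# kubectl exec -it nginx1-68d4d8db9d-5fn8v -c nginx1 -- cat /usr/share/nginx/html/index.html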
hostPath volume
hostPath mounts a directory from the node: when the pod is deployed to a node, the specified directory on that node is mounted into the pod.
The data persists across pod restarts, but if the node itself is formatted, the data disappears as well.
Use case: each pod runs a different service and its saved data must be kept separate; this is when hostPath is needed.
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx1
  name: nginx1
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx1
  template:
    metadata:
      labels:
        app: nginx1
    spec:
      containers:
      - name: nginx1
        image: nginx:1.22
        volumeMounts:
        - name: html
          mountPath: /usr/share/nginx/html   #path inside the container
      - name: nginx2
        image: nginx:1.22
        volumeMounts:
        - name: html
          mountPath: /data/
        command: ["/bin/bash","-c","while true; do echo $(date) >> /data/index.html; sleep 2; done"]
      volumes:
      - name: html
        hostPath:
          path: /opt/xy102   #path here is a directory on the node
          type: DirectoryOrCreate
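To verify, log on to the node where one of the pods landed and read the hostPath directory directly (a sketch; node01 is just one of the possible nodes):
[root@node01 ~]# cat /opt/xy102/index.html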
Interview question: all containers in a pod share one IP address and share the same mount points (volumes).
NFS shared storage volume
NFS shared storage:
The pods in the cluster act as NFS clients.
A separate server provides the NFS share.
They all share one mount point, so all of the data lives at that single mount point.
Use case: nginx services, or any workload where the pods' data must be identical.
[root@localhost opt]# mkdir /opt/data1
[root@localhost opt]# chmod 777 /opt/data1/
[root@localhost data1]# vim /etc/exports
/opt/data1 192.168.60.0/24(rw,no_root_squash)
[root@localhost data1]# systemctl restart rpcbind
[root@localhost data1]# systemctl restart nfs
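The k8s nodes must be able to mount this share (this assumes the nfs-utils package is installed on them); the export can be confirmed from any node (a sketch using the server address from the manifest below):
[root@node01 ~]# showmount -e 192.168.60.140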
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx1
  name: nginx1
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx1
  template:
    metadata:
      labels:
        app: nginx1
    spec:
      containers:
      - name: nginx1
        image: nginx:1.22
        volumeMounts:
        - name: html
          mountPath: /usr/share/nginx/html   #path inside the container
      - name: nginx2
        image: nginx:1.22
        volumeMounts:
        - name: html
          mountPath: /data/
        command: ["/bin/bash","-c","while true; do echo $(date) >> /data/index.html; sleep 2; done"]
      volumes:
      - name: html
        nfs:
          path: /opt/data1   #path here is the shared directory on the NFS server
          server: 192.168.60.140   #server can be either a hostname or an IP address
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx1
  name: nginx1
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx1
  template:
    metadata:
      labels:
        app: nginx1
    spec:
      containers:
      - name: nginx1
        image: nginx:1.22
        volumeMounts:
        - name: html
          mountPath: /usr/share/nginx/html   #path inside the container
      - name: nginx2
        image: nginx:1.22
        volumeMounts:
        - name: html
          mountPath: /data
        command: ["/bin/bash","-c","while true; do echo $(date) >> /data/index.html; sleep 2; done"]
      volumes:
      - name: html
        nfs:
          path: /opt/xy103
          server: k8s5
---
#--- marks a document break: the previous yml ends and a new yml begins
apiVersion: v1
kind: Service
metadata:
  name: nginx1
  # namespace
  labels:
    app: nginx1
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 80
    nodePort: 30000
  selector:
    app: nginx1
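Once both documents are applied, the NodePort service exposes the shared page on port 30000 of every node; a quick check (a sketch; <node-IP> stands for any node's address):
[root@master01 k8s-yaml]# curl <node-IP>:30000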