Part 2: Deploying a Kubernetes Cluster Using Domestic Mirrors
Introduction
In Part 1: Deploying a Kubernetes Cluster Using Domestic Mirrors, we finished all the preparation for the cluster deployment; the next step is to install the cluster itself.
First, check that the container runtime components are ready:
[root@master101 ~]# cri-dockerd --buildinfo
Program: cri-dockerd
Version: 0.3.16 (7119864)
GitCommit: 7119864
Go version: go1.23.3
[root@master101 ~]# docker -v
Docker version 26.1.4, build 5650f9b
[root@master101 ~]# systemctl status docker
● docker.service - Docker Application Container Engine
Loaded: loaded (/usr/lib/systemd/system/docker.service; enabled; vendor preset: disabled)
Active: active (running) since 六 2025-03-15 13:26:50 CST; 6s ago
Docs: https://docs.docker.com
Main PID: 112718 (dockerd)
Memory: 135.6M
CGroup: /system.slice/docker.service
└─112718 /usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
[root@master101 ~]# systemctl status cri-docker.service
● cri-docker.service - CRI Interface for Docker Application Container Engine
Loaded: loaded (/usr/lib/systemd/system/cri-docker.service; enabled; vendor preset: disabled)
Active: active (running) since 六 2025-03-15 13:27:39 CST; 5min ago
Check that the Kubernetes components are ready (the "connection refused" from kubectl version is expected at this point, since no cluster is running yet):
[root@master101 ~]# kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"30", GitVersion:"v1.30.11", GitCommit:"6a074997c960757de911780f250ecd9931917366", GitTreeState:"clean", BuildDate:"2025-03-11T19:56:25Z", GoVersion:"go1.23.6", Compiler:"gc", Platform:"linux/amd64"}
[root@master101 ~]# kubectl version
Client Version: v1.30.11
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
The connection to the server localhost:8080 was refused - did you specify the right host or port?
[root@master101 ~]# kubelet --version
Kubernetes v1.30.11
Everything is ready; time to install the cluster.
Installing the Kubernetes Cluster
Initializing the Master Node
Initialize the master on the master101 node. Pay attention to the IP addresses here; they need to be planned in advance. Note that we must specify the container runtime's socket file to tell Kubernetes we are using Docker as the container runtime.
kubeadm init --apiserver-advertise-address=192.168.85.101 \
--image-repository registry.aliyuncs.com/google_containers \
--kubernetes-version v1.30.11 \
--service-cidr=10.140.0.0/16 \
--pod-network-cidr=10.240.0.0/16 \
--cri-socket=unix:///var/run/cri-dockerd.sock
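Optionally, before running init, the control-plane images can be pulled ahead of time so that initialization itself is quick. This is a standard kubeadm subcommand, using the same mirror and socket as above:
kubeadm config images pull \
  --image-repository registry.aliyuncs.com/google_containers \
  --kubernetes-version v1.30.11 \
  --cri-socket unix:///var/run/cri-dockerd.sock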
Wait patiently while the master node initializes. If initialization fails, investigate the cause; a common one is that cri-docker is not running. Image pulls generally succeed, it is only a question of network speed. If you need to start over, the reset command shown below cleans everything up. Once the master initializes successfully, a command for joining the cluster is generated automatically:
kubeadm join 192.168.85.101:6443 --token x72ybi.qjtglon7mc4scnvq \
--discovery-token-ca-cert-hash sha256:f0246d381a782228f7c99a0c11409cdcb7d15b96bb2bcbbee94525
If we lose this join token, kubeadm token create can generate a new one. Note that creating a new token does not revoke existing ones; each token simply expires after its TTL (24 hours by default), after which it must be regenerated. The --print-join-command flag prints the complete join command:
[root@master101 ~]# kubeadm token create --print-join-command
kubeadm join 192.168.85.101:6443 --token xjzu6l.3mte2xhl2lgwya5n --discovery-token-ca-cert-hash sha256:f0246d381a782228f7c99a0c11409cdcb7d15b96bb2bcbbee94525537c330853
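Existing tokens and their expiry times can be inspected with:
kubeadm token list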
Resetting the cluster:
kubeadm reset --cri-socket=unix:///var/run/cri-dockerd.sock
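Note that kubeadm reset does not remove CNI configuration or iptables rules; a rough cleanup sketch, to be run only on a node you are rebuilding:
rm -rf /etc/cni/net.d
rm -f $HOME/.kube/config
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X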
During kubeadm init, the following message is printed. It tells us what a user must run in order to use the cluster. This is really just configuring kubectl: without it, kubectl cannot find our cluster. We will demonstrate this in a later step.
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
Joining Worker Nodes to the Cluster
Run the join command on node102 and node103 to add them to the cluster:
[root@node102 ~]# kubeadm join 192.168.85.101:6443 --token xjzu6l.3mte2xhl2lgwya5n --discovery-token-ca-cert-hash sha256:f0246d381a782228f7c99a0c11409cdcb7d15b96bb2bcbbee94525537c330853 --cri-socket=unix:///var/run/cri-dockerd.sock
[root@node103 ~]# kubeadm join 192.168.85.101:6443 --token xjzu6l.3mte2xhl2lgwya5n --discovery-token-ca-cert-hash sha256:f0246d381a782228f7c99a0c11409cdcb7d15b96bb2bcbbee94525537c330853 --cri-socket=unix:///var/run/cri-dockerd.sock
Here, too, we must specify the CRI socket file to tell Kubernetes we are using Docker as the container runtime.
Configuring kubectl
Once master101 has been initialized, we should be able to inspect the cluster with kubectl. In practice, running kubectl directly fails, because kubectl is still using its default configuration rather than our cluster's:
[root@master101 ~]# kubectl get nodes
E0315 14:10:02.950851 70455 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused
This is exactly why kubeadm prints, during initialization, the commands a user must run before they can use the cluster:
[root@master101 ~]# mkdir -p $HOME/.kube && cp -i /etc/kubernetes/admin.conf $HOME/.kube/config && chown $(id -u):$(id -g) $HOME/.kube/config
[root@master101 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master101 NotReady control-plane 28m v1.30.11
node102 NotReady <none> 8m33s v1.30.11
node103 NotReady <none> 6m28s v1.30.11
So that node102 and node103 can also run kubectl to manage the cluster, we need to copy the client configuration file to both nodes.
- On master101:
[root@master101 ~]# scp /etc/kubernetes/admin.conf root@node102:/etc/kubernetes/
[root@master101 ~]# scp /etc/kubernetes/admin.conf root@node103:/etc/kubernetes/
- On node102:
[root@node102 ~]# echo "export KUBECONFIG=/etc/kubernetes/admin.conf" > /etc/profile.d/kubeconfig.sh
[root@node102 ~]# source /etc/profile.d/kubeconfig.sh
[root@node102 ~]# kubectl get nodes
NAME        STATUS     ROLES           AGE   VERSION
master101   NotReady   control-plane   32m   v1.30.11
node102     NotReady   <none>          13m   v1.30.11
node103     NotReady   <none>          11m   v1.30.11
- On node103:
[root@node103 ~]# echo "export KUBECONFIG=/etc/kubernetes/admin.conf" > /etc/profile.d/kubeconfig.sh
[root@node103 ~]# source /etc/profile.d/kubeconfig.sh
[root@node103 ~]# kubectl get nodes
NAME        STATUS     ROLES           AGE   VERSION
master101   NotReady   control-plane   33m   v1.30.11
node102     NotReady   <none>          13m   v1.30.11
node103     NotReady   <none>          11m   v1.30.11
- Note
Strictly speaking, kubectl should not be installed on every node of the cluster; the cluster nodes should be left to schedule containers. All client tooling is better kept on a dedicated machine, often called a workstation node, which holds the various management files. Since this is a self-built cluster for learning, we install it everywhere for convenience. A sketch of the workstation setup follows.
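For illustration, setting up such a workstation (a hypothetical host named workstation, with kubectl already installed) might look like this:
mkdir -p ~/.kube
# copy the admin kubeconfig from the control plane and lock down its permissions
scp root@192.168.85.101:/etc/kubernetes/admin.conf ~/.kube/config
chmod 600 ~/.kube/config
kubectl get nodes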
Installing the CNI Network Plugin
First, look at the current state of the pods and nodes:
[root@master101 k8s]# kubectl get po -A -o wide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
kube-system coredns-cb4864fb5-65vj5 0/1 Pending 0 51m <none> <none> <none> <none>
kube-system coredns-cb4864fb5-78fcn 0/1 Pending 0 51m <none> <none> <none> <none>
kube-system etcd-master101 1/1 Running 0 51m 192.168.85.101 master101 <none> <none>
kube-system kube-apiserver-master101 1/1 Running 0 51m 192.168.85.101 master101 <none> <none>
kube-system kube-controller-manager-master101 1/1 Running 0 51m 192.168.85.101 master101 <none> <none>
kube-system kube-proxy-8c6vw 1/1 Running 0 30m 192.168.85.103 node103 <none> <none>
kube-system kube-proxy-nd5g4 1/1 Running 0 32m 192.168.85.102 node102 <none> <none>
kube-system kube-proxy-qxvpj 1/1 Running 0 51m 192.168.85.101 master101 <none> <none>
kube-system kube-scheduler-master101 1/1 Running 0 51m 192.168.85.101 master101 <none> <none>
[root@master101 k8s]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master101 NotReady control-plane 53m v1.30.11
node102 NotReady <none> 33m v1.30.11
node103 NotReady <none> 31m v1.30.11
Some of the pods have no IP address, and every node in the cluster is NotReady. This is because no network plugin has been installed yet; the common choices today are flannel and calico. Our cluster is simple and has no exotic networking needs, so we install flannel directly:
[root@master101 k8s]# wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
Once we have the manifest, we need to change its network configuration to match our plan: the pod network segment should be 10.240.0.0/16.
[root@master101 k8s]# cat kube-flannel.yml | grep Network
"Network": "10.244.0.0/16",
hostNetwork: true
[root@master101 k8s]# vim kube-flannel.yml
[root@master101 k8s]# cat kube-flannel.yml | grep Network
"Network": "10.240.0.0/16",
hostNetwork: true
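The same edit can be scripted instead of done in vim; a one-liner sketch (re-check with grep afterwards):
sed -i 's#10.244.0.0/16#10.240.0.0/16#' kube-flannel.yml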
[root@master101 k8s]# kubectl apply -f kube-flannel.yml
[root@master101 k8s]# watch kubectl get po -n kube-flannel -o wide
Every 2.0s: kubectl get po -n kube-flannel -o wide Sat Mar 15 14:38:46 2025
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
kube-flannel-ds-bnz2k 0/1 Init:1/2 0 2m57s 192.168.85.101 master101 <none> <none>
kube-flannel-ds-n6rx8 0/1 Init:1/2 0 2m57s 192.168.85.102 node102 <none> <none>
kube-flannel-ds-wl4k9 0/1 Init:1/2 0 2m57s 192.168.85.103 node103 <none> <none>
Once the flannel pods have finished initializing, check the node and pod status again:
[root@master101 k8s]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master101 Ready control-plane 78m v1.30.11
node102 Ready <none> 58m v1.30.11
node103 Ready <none> 56m v1.30.11
[root@master101 k8s]# kubectl get po -A -o wide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
kube-flannel kube-flannel-ds-bnz2k 1/1 Running 0 25m 192.168.85.101 master101 <none> <none>
kube-flannel kube-flannel-ds-n6rx8 1/1 Running 0 25m 192.168.85.102 node102 <none> <none>
kube-flannel kube-flannel-ds-wl4k9 1/1 Running 0 25m 192.168.85.103 node103 <none> <none>
kube-system coredns-cb4864fb5-65vj5 1/1 Running 0 79m 10.240.1.3 node102 <none> <none>
kube-system coredns-cb4864fb5-78fcn 1/1 Running 0 79m 10.240.1.2 node102 <none> <none>
This shows flannel is installed correctly and the cluster network is up. At this point the Kubernetes cluster deployment is complete.
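As a quick smoke test of pod networking and cluster DNS, you can run a disposable pod (assuming the busybox image is reachable from your network; substitute a mirrored image if it is not):
kubectl run dnstest --rm -it --restart=Never --image=busybox:1.36 -- nslookup kubernetes.default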
Installing the Dashboard
kubernetes-dashboard versions are coupled to Kubernetes versions; if they drift too far apart there can be compatibility problems that break parts of the dashboard. See the kubernetes-dashboard releases for the exact compatibility matrix. Kubernetes 1.30 is fully supported by kubernetes-dashboard 7.6.1, which is what we install here, using Helm.
Installing Helm and adding the Dashboard repository
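If Helm is not installed yet, the official install script is one option (a sketch; the script downloads from GitHub, so you may need a proxy or a mirrored binary release instead):
curl -fsSL https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash
Then add the Dashboard chart repository: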
helm repo add kubernetes-dashboard https://kubernetes.github.io/dashboard/
Normally the chart can be installed in one step:
helm upgrade --install kubernetes-dashboard kubernetes-dashboard/kubernetes-dashboard --create-namespace --namespace kubernetes-dashboard --version 7.6.1
In practice, without a proxy this direct install almost always fails, because the images are blocked. Instead, pull the chart first and then change the image sources:
[root@master101 ~]# helm pull kubernetes-dashboard/kubernetes-dashboard --version 7.6.1
[root@master101 kubernetes-dashboard]# tar -vxf kubernetes-dashboard-7.6.1.tgz
[root@master101 kubernetes-dashboard]# cat values.yaml | grep -E 'image:' -A 2
image:
pullPolicy: IfNotPresent
pullSecrets: []
--
image:
repository: docker.io/kubernetesui/dashboard-auth
tag: 1.1.3
--
image:
repository: docker.io/kubernetesui/dashboard-api
tag: 1.8.1
--
image:
repository: docker.io/kubernetesui/dashboard-web
tag: 1.4.0
--
image:
repository: docker.io/kubernetesui/dashboard-metrics-scraper
tag: 1.1.1
[root@master101 kubernetes-dashboard]# vim values.yaml
[root@master101 kubernetes-dashboard]# cat values.yaml | grep -E 'image:' -A 2
image:
pullPolicy: IfNotPresent
pullSecrets: []
--
image:
repository: swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/kubernetesui/dashboard-auth
tag: 1.1.3
--
image:
repository: swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/kubernetesui/dashboard-api
tag: 1.8.1
--
image:
repository: swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/kubernetesui/dashboard-web
tag: 1.4.0
--
image:
repository: swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/kubernetesui/dashboard-metrics-scraper
tag: 1.1.1
[root@master101 kubernetes-dashboard]# cat charts/kong/values.yaml | grep -E 'repository:' -A 1
repository: kong
tag: "3.6"
--
# repository: kong/kong-gateway
# tag: "3.5"
--
repository: kong/kubernetes-ingress-controller
tag: "3.1"
--
# repository: bash
# tag: 5
[root@master101 kubernetes-dashboard]# vim charts/kong/values.yaml
[root@master101 kubernetes-dashboard]# cat charts/kong/values.yaml | grep -E 'repository:' -A 1
repository: swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/kong
tag: "3.6"
--
# repository: kong/kong-gateway
# tag: "3.5"
--
repository: swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/kong/kubernetes-ingress-controller
tag: "3.1"
--
# repository: bash
# tag: 5
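The same substitutions can be scripted with sed instead of vim; a sketch (verify the result with the grep commands above afterwards):
sed -i 's#docker.io/kubernetesui#swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/kubernetesui#' values.yaml
sed -i -E 's#^([[:space:]]+)repository: kong#\1repository: swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/kong#' charts/kong/values.yaml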
Pulling the dashboard images
The dashboard images are effectively blocked. With a proxy ("magic") this is all trivial, and that is the simplest route; without one, we need to find domestic mirrors to download from. A full-site Docker image mirror index is strongly recommended. We can pre-pull these images and substitute them for the originals, as sketched after the table below.
| Source image | Accelerated mirror |
| --- | --- |
| docker.io/kubernetesui/dashboard-auth:1.1.3 | swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/kubernetesui/dashboard-auth:1.1.3 |
| docker.io/kubernetesui/dashboard-api:1.8.1 | swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/kubernetesui/dashboard-api:1.8.1 |
| docker.io/kubernetesui/dashboard-web:1.4.0 | swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/kubernetesui/dashboard-web:1.4.0 |
| docker.io/kubernetesui/dashboard-metrics-scraper:1.1.1 | swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/kubernetesui/dashboard-metrics-scraper:1.1.1 |
| docker.io/kong:3.6 | swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/kong:3.6 |
| docker.io/kong/kubernetes-ingress-controller:3.1 | swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/kong/kubernetes-ingress-controller:3.1 |
| docker.io/kong/kong-gateway:3.5 | swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/kong/kong-gateway:3.5 |
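If you would rather keep the original image names in values.yaml, another option is to pre-pull from the mirror and retag on every node. A sketch along those lines, assuming the mirror naming in the table holds:
for img in \
  kubernetesui/dashboard-auth:1.1.3 \
  kubernetesui/dashboard-api:1.8.1 \
  kubernetesui/dashboard-web:1.4.0 \
  kubernetesui/dashboard-metrics-scraper:1.1.1 \
  kong:3.6 \
  kong/kubernetes-ingress-controller:3.1
do
  # pull from the Huawei Cloud mirror, then retag back to the original docker.io name
  docker pull swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/${img}
  docker tag swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/${img} docker.io/${img}
done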
Install the dashboard using the modified files:
[root@master101 kubernetes-dashboard]# kubectl create ns kubernetes-dashboard
[root@master101 kubernetes-dashboard]# helm install kubernetes-dashboard . --namespace kubernetes-dashboard -f values.yaml
[root@master101 kubernetes-dashboard]# kubectl get po -n kubernetes-dashboard -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
kubernetes-dashboard-api-bd9457bf7-dlhr5 1/1 Running 0 6m14s 10.240.2.2 node103 <none> <none>
kubernetes-dashboard-auth-648dcc9669-65sk5 1/1 Running 0 6m14s 10.240.2.3 node103 <none> <none>
kubernetes-dashboard-kong-7759c9948d-4z9xs 1/1 Running 0 6m14s 10.240.1.4 node102 <none> <none>
kubernetes-dashboard-metrics-scraper-6dbd8fcbd8-n247j 1/1 Running 0 6m14s 10.240.2.5 node103 <none> <none>
kubernetes-dashboard-web-6d48c778dc-97nvt 1/1 Running 0 6m14s 10.240.2.4 node103 <none> <none>
The dashboard installed successfully. Check the Service to find the exposed port, then try to access it:
[root@node102 ~]# kubectl get svc kubernetes-dashboard-kong-proxy -o wide -n kubernetes-dashboard
NAME                              TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE   SELECTOR
kubernetes-dashboard-kong-proxy   ClusterIP   10.140.22.231   <none>        443/TCP   16m   app.kubernetes.io/component=app,app.kubernetes.io/instance=kubernetes-dashboard,app.kubernetes.io/name=kong
The kubernetes-dashboard-kong-proxy Service is of type ClusterIP; change it to NodePort.
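One way to make the change is with kubectl patch (kubectl edit works just as well):
kubectl patch svc kubernetes-dashboard-kong-proxy -n kubernetes-dashboard -p '{"spec":{"type":"NodePort"}}'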
[root@node102 ~]# kubectl get svc kubernetes-dashboard-kong-proxy -o wide -n kubernetes-dashboard
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
kubernetes-dashboard-kong-proxy NodePort 10.140.22.231 <none> 443:32733/TCP 23m app.kubernetes.io/component=app,app.kubernetes.io/instance=kubernetes-dashboard,app.kubernetes.io/name=kong
Now the dashboard can be reached through the IP of any node in the cluster, for example via node103: https://192.168.85.103:32733
Creating an admin user
[root@master101 k8s]# cat > kubernetes-dashboard-admin-user.yaml << EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard
EOF
[root@master101 k8s]# kubectl apply -f kubernetes-dashboard-admin-user.yaml
serviceaccount/admin-user created
clusterrolebinding.rbac.authorization.k8s.io/admin-user created
[root@master101 k8s]# kubectl -n kubernetes-dashboard create token admin-user
eyJhbGciOiJSUzI1NiIsImtpZCI6IlBRVHR0ZTNqUmRGMHc5bXRRWklUZVVTY1AzZXlwMDIwdk5MRlc0eUFoSVkifQ.eyJhdWQiOlsiaHR0cHM6Ly9rdWJlcm5ldGVzLmRlZmF1bHQuc3ZjLmNsdXN0ZXIubG9jYWwiXSwiZXhwIjoxNzQyMDI5OTU2LCJpYXQiOjE3NDIwMjYzNTYsImlzcyI6Imh0dHBzOi8va3ViZXJuZXRlcy5kZWZhdWx0LnN2Yy5jbHVzdGVyLmxvY2FsIiwianRpIjoiNWI1ZDJlZDUtMGMwNC00NGQ4LThkMmMtMTYzNzE2YzUxNTBjIiwia3ViZXJuZXRlcy5pbyI6eyJuYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsInNlcnZpY2VhY2NvdW50Ijp7Im5hbWUiOiJhZG1pbi11c2VyIiwidWlkIjoiNTllZDJlMWMtOTU2ZS00YzFjLWE1ZWUtZDE5OTcxNmJkMjM0In19LCJuYmYiOjE3NDIwMjYzNTYsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlcm5ldGVzLWRhc2hib2FyZDphZG1pbi11c2VyIn0.JxztdFpDEsemfSScb24fv26tiEY83-vT8WIwEem040YFCWJlkzsWlATBLm9796GBSAR92B8tTTbnTT80mivl5yzyb-RqQ9z8k3Rz8cDEjEBiTx_VPVhQO-VzTLI70qykbgL6tOCeSO-8-skpVmnaW6jZdx61LhmpQoRQ5U7Gj5iyO277102CljtPUUuDCvLwkVg1UBKRwkoCj8pl7TEHi2Jmxn6bx0lWOEH8cFQbNJ9OjFHSs_ZZ4L4EhcHbS-AmqgiW7N3Tn-aQZUoezWNg590dWmPCcAGAM5USGQCxpYVJB2DOrM3wpd4JtXowBkRrPP5TyyZKwNezvlDFtx1ItA
With that, the dashboard is fully usable. In practice this admin token must be tightly controlled: whoever holds it can create, read, update, and delete every kind of resource from the dashboard. In a production environment we would usually create a read-only token and hand that out to people who only need to look around; a sketch follows.
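A minimal sketch of such a read-only account, binding the built-in view ClusterRole (the names viewer and dashboard-viewer are illustrative):
kubectl -n kubernetes-dashboard create serviceaccount viewer
kubectl create clusterrolebinding dashboard-viewer --clusterrole=view --serviceaccount=kubernetes-dashboard:viewer
# generate a token for the read-only account, as with admin-user above
kubectl -n kubernetes-dashboard create token viewer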
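Installing Metrics-Server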
[root@master101 metric-server]# kubectl top nodes
error: Metrics API not available
Metrics-Server is the cluster-wide aggregator of monitoring data, collecting CPU, memory, and similar metrics for nodes and pods and exposing them for queries through the Metrics API; this has been the standard collection mechanism since Kubernetes 1.8. However, a kubeadm-built cluster does not install Metrics-Server by default, so we install it ourselves.
First, check the Metrics-Server project page for compatibility between Kubernetes and Metrics-Server versions. The latest Metrics-Server supports Kubernetes 1.19+, so we can simply install the latest stable release. Pull down the manifest first, then change the image address; otherwise that image is blocked as well.
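The manifest comes from the metrics-server releases; for example (assuming GitHub is reachable, otherwise fetch it through a proxy):
wget https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml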
[root@master101 metric-server]# cat components.yaml | grep image
image: registry.k8s.io/metrics-server/metrics-server:v0.7.2
imagePullPolicy: IfNotPresent
[root@master101 metric-server]# vim components.yaml
[root@master101 metric-server]# cat components.yaml | grep image
image: swr.cn-north-4.myhuaweicloud.com/ddn-k8s/registry.k8s.io/metrics-server/metrics-server:v0.7.2
imagePullPolicy: IfNotPresent
Then modify two parameters:
[root@master101 metric-server]# cat components.yaml | grep args -A 10
- args:
- --cert-dir=/tmp
- --secure-port=10250
- --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
- --kubelet-use-node-status-port
- --metric-resolution=15s
image: swr.cn-north-4.myhuaweicloud.com/ddn-k8s/registry.k8s.io/metrics-server/metrics-server:v0.7.2
imagePullPolicy: IfNotPresent
livenessProbe:
failureThreshold: 3
httpGet:
[root@master101 metric-server]# vim components.yaml
[root@master101 metric-server]# cat components.yaml | grep args -A 10
- args:
- --cert-dir=/tmp
- --secure-port=10250
- --kubelet-insecure-tls
- --kubelet-preferred-address-types=InternalIP,Hostname,InternalDNS,ExternalDNS,ExternalIP
- --kubelet-use-node-status-port
- --metric-resolution=15s
image: swr.cn-north-4.myhuaweicloud.com/ddn-k8s/registry.k8s.io/metrics-server/metrics-server:v0.7.2
imagePullPolicy: IfNotPresent
livenessProbe:
failureThreshold: 3
- --kubelet-insecure-tls: added to args; tells Metrics-Server not to verify the HTTPS serving certificate the kubelet presents, i.e. TLS certificate validation is skipped.
- --kubelet-preferred-address-types=InternalIP,Hostname,InternalDNS,ExternalDNS,ExternalIP: added to args; sets the priority of address types Metrics-Server uses when talking to a kubelet. The address types are tried in the listed order:
  - InternalIP: the kubelet's internal IP address.
  - Hostname: the kubelet's hostname.
  - InternalDNS: the kubelet's internal DNS name.
  - ExternalDNS: the kubelet's external DNS name.
  - ExternalIP: the kubelet's external IP address.
[root@master101 metric-server]# kubectl apply -f components.yaml
[root@master101 metric-server]# kubectl top nodes
[root@master101 metric-server]# kubectl top pod
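If kubectl top still reports that the Metrics API is unavailable, give the pod a minute to pass its readiness probe, then check the APIService registration and the pod itself:
kubectl get apiservice v1beta1.metrics.k8s.io
kubectl get po -n kube-system | grep metrics-server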
Once Metrics-Server is installed, the dashboard also shows resource usage for every object in the cluster.
Wrapping Up Part 2
With that, the entire cluster build is complete, and we can start exploring the many features of Kubernetes on it.