
Notes on Upgrading Kubernetes from 1.29 to 1.31

1. Overview

  • Kubernetes 1.29 was installed earlier (see: Installing Kubernetes 1.29 with kubeadm).
    • This time the plan is to upgrade that cluster to version 1.31.
  • Kubernetes version notation:
    • Kubernetes versions are written as x.y.z, where x is the major version, y the minor version, and z the patch version.
  • Recommended upgrade order:
    • In Kubernetes, the recommended approach is to upgrade incrementally, one minor version at a time.
  • Upgrade strategy for this cluster:
    • 1.29 → 1.30, then 1.30 → 1.31.
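The "one minor version at a time" rule above can be sketched as a small shell check (the helper names here are made up for illustration and are not part of kubeadm):

```shell
# Hypothetical helpers to check the "one minor version at a time" rule:
# an upgrade target may raise the minor version (the "y" in x.y.z) by at most one.
minor() { echo "$1" | cut -d. -f2; }

ok_to_upgrade() {
  # $1 = current version, $2 = target version, e.g. ok_to_upgrade 1.29.5 1.30.10
  step=$(( $(minor "$2") - $(minor "$1") ))
  [ "$step" -ge 0 ] && [ "$step" -le 1 ]
}

ok_to_upgrade 1.29.5 1.30.10 && echo "1.29 -> 1.30: allowed"
ok_to_upgrade 1.29.5 1.31.6  || echo "1.29 -> 1.31: skips a minor version"
```

This is why the plan below goes 1.29 → 1.30 first and only then 1.30 → 1.31.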

2. Current version information (v1.29.5)

root@k8s-1:~# kubectl get nodes
NAME    STATUS   ROLES           AGE    VERSION
k8s-1   Ready    control-plane   283d   v1.29.5
k8s-2   Ready    <none>          282d   v1.29.5
k8s-3   Ready    <none>          282d   v1.29.5

root@k8s-1:~# kubectl get pod -o wide -A
NAMESPACE     NAME                                      READY   STATUS    RESTARTS         AGE    IP               NODE    NOMINATED NODE   READINESS GATES
kube-system   calico-kube-controllers-8d76c5f9b-gfgwr   1/1     Running   10 (5h59m ago)   282d   10.244.231.224   k8s-1   <none>           <none>
kube-system   calico-node-d822d                         1/1     Running   9 (5h59m ago)    282d   10.0.20.135      k8s-3   <none>           <none>
kube-system   calico-node-f26t9                         1/1     Running   10 (5h59m ago)   282d   10.0.20.133      k8s-1   <none>           <none>
kube-system   calico-node-rchkb                         1/1     Running   10 (5h59m ago)   282d   10.0.20.134      k8s-2   <none>           <none>
kube-system   coredns-76f75df574-qmz2n                  1/1     Running   10 (5h59m ago)   283d   10.244.231.225   k8s-1   <none>           <none>
kube-system   coredns-76f75df574-r7gd2                  1/1     Running   10 (5h59m ago)   283d   10.244.231.223   k8s-1   <none>           <none>
kube-system   etcd-k8s-1                                1/1     Running   11 (5h59m ago)   283d   10.0.20.133      k8s-1   <none>           <none>
kube-system   kube-apiserver-k8s-1                      1/1     Running   11 (5h59m ago)   283d   10.0.20.133      k8s-1   <none>           <none>
kube-system   kube-controller-manager-k8s-1             1/1     Running   11 (5h59m ago)   283d   10.0.20.133      k8s-1   <none>           <none>
kube-system   kube-proxy-5p7w2                          1/1     Running   9 (5h59m ago)    282d   10.0.20.135      k8s-3   <none>           <none>
kube-system   kube-proxy-86slq                          1/1     Running   11 (5h59m ago)   283d   10.0.20.133      k8s-1   <none>           <none>
kube-system   kube-proxy-n42s4                          1/1     Running   10 (5h59m ago)   282d   10.0.20.134      k8s-2   <none>           <none>
kube-system   kube-scheduler-k8s-1                      1/1     Running   11 (5h59m ago)   283d   10.0.20.133      k8s-1   <none>           <none>

3. Steps to upgrade from 1.29 to 1.30

Reference:

Upgrading kubeadm clusters (Kubernetes documentation): https://v1-30.docs.kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/
This page explains how to upgrade a kubeadm cluster from version 1.29.x to 1.30.x. Skipping MINOR versions when upgrading is unsupported; see the Version Skew Policy for details.

3.1 Changing the package repository

When upgrading to a new minor version, the package repository must be updated so that the new packages are available. Note that the GPG key for the Kubernetes package repository also needs to be refreshed.

root@k8s-1:~# apt-get install -y apt-transport-https ca-certificates curl gpg
root@k8s-1:~# curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.30/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
File '/etc/apt/keyrings/kubernetes-apt-keyring.gpg' exists. Overwrite? (y/N) y

root@k8s-1:~# cat /etc/apt/sources.list.d/kubernetes.list 
deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.30/deb/ /
(Note: change the original 1.29 in this entry to 1.30.)
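Rather than editing kubernetes.list by hand on every node, the minor version in the repository entry can be bumped with a small sed helper (a sketch; `bump_repo_minor` is a made-up name):

```shell
# bump_repo_minor FILE OLD NEW: rewrite a pkgs.k8s.io apt entry in place,
# e.g. bump_repo_minor /etc/apt/sources.list.d/kubernetes.list v1.29 v1.30
bump_repo_minor() {
  sed -i "s|/core:/stable:/$2/|/core:/stable:/$3/|" "$1"
}
```

Remember to refresh the repository signing key and run apt update afterwards, as in the commands above.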

3.2 Determine which version to upgrade to

Determine the version to upgrade to; here 1.30.10-1.1 is selected.

root@k8s-1:~# apt update
root@k8s-1:~# apt-cache madison kubeadm
   kubeadm | 1.30.10-1.1 | https://pkgs.k8s.io/core:/stable:/v1.30/deb  Packages
   kubeadm | 1.30.9-1.1 | https://pkgs.k8s.io/core:/stable:/v1.30/deb  Packages
   kubeadm | 1.30.8-1.1 | https://pkgs.k8s.io/core:/stable:/v1.30/deb  Packages
   kubeadm | 1.30.7-1.1 | https://pkgs.k8s.io/core:/stable:/v1.30/deb  Packages
   kubeadm | 1.30.6-1.1 | https://pkgs.k8s.io/core:/stable:/v1.30/deb  Packages
   kubeadm | 1.30.5-1.1 | https://pkgs.k8s.io/core:/stable:/v1.30/deb  Packages
   kubeadm | 1.30.4-1.1 | https://pkgs.k8s.io/core:/stable:/v1.30/deb  Packages
   kubeadm | 1.30.3-1.1 | https://pkgs.k8s.io/core:/stable:/v1.30/deb  Packages
   kubeadm | 1.30.2-1.1 | https://pkgs.k8s.io/core:/stable:/v1.30/deb  Packages
   kubeadm | 1.30.1-1.1 | https://pkgs.k8s.io/core:/stable:/v1.30/deb  Packages
   kubeadm | 1.30.0-1.1 | https://pkgs.k8s.io/core:/stable:/v1.30/deb  Packages
root@k8s-1:~#

3.3 Upgrading control plane nodes

Upgrade the control-plane node first.

Check that the file /etc/kubernetes/admin.conf exists.

  • The /etc/kubernetes/admin.conf file is created by kubeadm and contains the cluster's kubeconfig, typically used for administrator access to the cluster.

  • Handle this file with care: it grants superuser privileges on the cluster.

For further operations, such as copying the file elsewhere or changing its permissions, make sure you have sufficient privileges (usually root).

root@k8s-1:~# ls /etc/kubernetes/admin.conf 
/etc/kubernetes/admin.conf
root@k8s-1:~# cat /etc/kubernetes/admin.conf 
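Because admin.conf grants superuser access, any copy of it should keep owner-only permissions. A minimal sketch (the `secure_copy` helper name is hypothetical):

```shell
# secure_copy SRC DST: copy a kubeconfig with mode 0600 so that only the
# file owner can read the embedded credentials.
secure_copy() {
  install -m 600 "$1" "$2"
}
```

For example, `secure_copy /etc/kubernetes/admin.conf /root/admin.conf` copies the file without leaving it world-readable.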

3.3.1 Call "kubeadm upgrade"

Upgrade the cluster with kubeadm upgrade. This cluster has a single control-plane node.

1. Upgrade kubeadm:

Run:

apt-mark unhold kubeadm && \
apt-get update && sudo apt-get install -y kubeadm='1.30.10-1.1' && \
apt-mark hold kubeadm

root@k8s-1:~# apt-mark unhold kubeadm && \
> apt-get update && sudo apt-get install -y kubeadm='1.30.10-1.1' && \
> apt-mark hold kubeadm
Canceled hold on kubeadm.
Get:1 http://security.ubuntu.com/ubuntu jammy-security InRelease [129 kB]
Hit:2 http://us.archive.ubuntu.com/ubuntu jammy InRelease
Get:3 http://us.archive.ubuntu.com/ubuntu jammy-updates InRelease [128 kB]                                                                           
Hit:5 https://download.docker.com/linux/ubuntu jammy InRelease                                                                                                 
Hit:6 http://us.archive.ubuntu.com/ubuntu jammy-backports InRelease                       
Hit:4 https://prod-cdn.packages.k8s.io/repositories/isv:/kubernetes:/core:/stable:/v1.30/deb  InRelease
Get:7 http://us.archive.ubuntu.com/ubuntu jammy-updates/main amd64 Packages [2,338 kB]
Get:8 http://us.archive.ubuntu.com/ubuntu jammy-updates/universe amd64 Packages [1,187 kB]
Fetched 3,783 kB in 6s (668 kB/s)                          
Reading package lists... Done
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
The following packages will be upgraded:
  kubeadm
1 upgraded, 0 newly installed, 0 to remove and 66 not upgraded.
Need to get 10.4 MB of archives.
After this operation, 8,192 B of additional disk space will be used.
Get:1 https://prod-cdn.packages.k8s.io/repositories/isv:/kubernetes:/core:/stable:/v1.30/deb  kubeadm 1.30.10-1.1 [10.4 MB]
Fetched 10.4 MB in 9s (1,209 kB/s)                                                                                                                                                                                                                             
(Reading database ... 110376 files and directories currently installed.)
Preparing to unpack .../kubeadm_1.30.10-1.1_amd64.deb ...
Unpacking kubeadm (1.30.10-1.1) over (1.30.1-1.1) ...
Setting up kubeadm (1.30.10-1.1) ...
Scanning processes...                                                                                                                                                                                                                                           
Scanning linux images...                                                                                                                                                                                                                                        

Running kernel seems to be up-to-date.

No services need to be restarted.

No containers need to be restarted.

No user sessions are running outdated binaries.

No VM guests are running outdated hypervisor (qemu) binaries on this host.
kubeadm set on hold.
root@k8s-1:~#

2. Check the kubeadm version:

root@k8s-1:~# kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"30", GitVersion:"v1.30.10", GitCommit:"ccc69071da5040a2bafc1ba9c4775782e0f4e55c", GitTreeState:"clean", BuildDate:"2025-02-12T21:32:03Z", GoVersion:"go1.22.12", Compiler:"gc", Platform:"linux/amd64"}

3. Plan the upgrade

Run kubeadm upgrade plan; it shows the versions you can upgrade to.

Note that the plan offers two upgrade targets: 1.29.14 and 1.30.10. Here 1.30.10 is chosen.

root@k8s-1:~# kubeadm upgrade plan
[preflight] Running pre-flight checks.
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[upgrade] Running cluster health checks
[upgrade] Fetching available versions to upgrade to
[upgrade/versions] Cluster version: 1.29.5
[upgrade/versions] kubeadm version: v1.30.10
I0223 08:41:33.571311  140650 version.go:256] remote version is much newer: v1.32.2; falling back to: stable-1.30
[upgrade/versions] Target version: v1.30.10
[upgrade/versions] Latest version in the v1.29 series: v1.29.14

Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply':
COMPONENT   NODE      CURRENT   TARGET
kubelet     k8s-1     v1.29.5   v1.29.14
kubelet     k8s-2     v1.29.5   v1.29.14
kubelet     k8s-3     v1.29.5   v1.29.14

Upgrade to the latest version in the v1.29 series:

COMPONENT                 NODE      CURRENT    TARGET
kube-apiserver            k8s-1     v1.29.5    v1.29.14
kube-controller-manager   k8s-1     v1.29.5    v1.29.14
kube-scheduler            k8s-1     v1.29.5    v1.29.14
kube-proxy                          1.29.5     v1.29.14
CoreDNS                             v1.11.1    v1.11.1
etcd                      k8s-1     3.5.12-0   3.5.12-0

You can now apply the upgrade by executing the following command:

        kubeadm upgrade apply v1.29.14

_____________________________________________________________________

Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply':
COMPONENT   NODE      CURRENT   TARGET
kubelet     k8s-1     v1.29.5   v1.30.10
kubelet     k8s-2     v1.29.5   v1.30.10
kubelet     k8s-3     v1.29.5   v1.30.10

Upgrade to the latest stable version:

COMPONENT                 NODE      CURRENT    TARGET
kube-apiserver            k8s-1     v1.29.5    v1.30.10
kube-controller-manager   k8s-1     v1.29.5    v1.30.10
kube-scheduler            k8s-1     v1.29.5    v1.30.10
kube-proxy                          1.29.5     v1.30.10
CoreDNS                             v1.11.1    v1.11.1
etcd                      k8s-1     3.5.12-0   3.5.12-0

You can now apply the upgrade by executing the following command:

        kubeadm upgrade apply v1.30.10

Note: Before you can perform this upgrade, you have to update kubeadm to v1.30.10.

_____________________________________________________________________


The table below shows the current state of component configs as understood by this version of kubeadm.
Configs that have a "yes" mark in the "MANUAL UPGRADE REQUIRED" column require manual config upgrade or
resetting to kubeadm defaults before a successful upgrade can be performed. The version to manually
upgrade to is denoted in the "PREFERRED VERSION" column.

API GROUP                 CURRENT VERSION   PREFERRED VERSION   MANUAL UPGRADE REQUIRED
kubeproxy.config.k8s.io   v1alpha1          v1alpha1            no
kubelet.config.k8s.io     v1beta1           v1beta1             no
_____________________________________________________________________

4. Apply the upgrade

Run kubeadm upgrade apply v1.30.10 to perform the upgrade.

root@k8s-1:~# kubeadm upgrade apply v1.30.10
[preflight] Running pre-flight checks.
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[upgrade] Running cluster health checks
[upgrade/version] You have chosen to change the cluster version to "v1.30.10"
[upgrade/versions] Cluster version: v1.29.5
[upgrade/versions] kubeadm version: v1.30.10
[upgrade] Are you sure you want to proceed? [y/N]: y
[upgrade/prepull] Pulling images required for setting up a Kubernetes cluster
[upgrade/prepull] This might take a minute or two, depending on the speed of your internet connection
[upgrade/prepull] You can also perform this action in beforehand using 'kubeadm config images pull'
W0223 08:58:59.968755  151886 checks.go:844] detected that the sandbox image "registry.k8s.io/pause:3.6" of the container runtime is inconsistent with that used by kubeadm.It is recommended to use "registry.k8s.io/pause:3.9" as the CRI sandbox image.
[upgrade/apply] Upgrading your Static Pod-hosted control plane to version "v1.30.10" (timeout: 5m0s)...
[upgrade/etcd] Upgrading to TLS for etcd
[upgrade/staticpods] Preparing for "etcd" upgrade
[upgrade/staticpods] Renewing etcd-server certificate
[upgrade/staticpods] Renewing etcd-peer certificate
[upgrade/staticpods] Renewing etcd-healthcheck-client certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/etcd.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2025-02-23-09-00-47/etcd.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This can take up to 5m0s
[apiclient] Found 1 Pods for label selector component=etcd
[upgrade/staticpods] Component "etcd" upgraded successfully!
[upgrade/etcd] Waiting for etcd to become available
[upgrade/staticpods] Writing new Static Pod manifests to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests877707047"
[upgrade/staticpods] Preparing for "kube-apiserver" upgrade
[upgrade/staticpods] Renewing apiserver certificate
[upgrade/staticpods] Renewing apiserver-kubelet-client certificate
[upgrade/staticpods] Renewing front-proxy-client certificate
[upgrade/staticpods] Renewing apiserver-etcd-client certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-apiserver.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2025-02-23-09-00-47/kube-apiserver.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This can take up to 5m0s
[apiclient] Found 1 Pods for label selector component=kube-apiserver
[upgrade/staticpods] Component "kube-apiserver" upgraded successfully!
[upgrade/staticpods] Preparing for "kube-controller-manager" upgrade
[upgrade/staticpods] Renewing controller-manager.conf certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-controller-manager.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2025-02-23-09-00-47/kube-controller-manager.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This can take up to 5m0s
[apiclient] Found 1 Pods for label selector component=kube-controller-manager
[upgrade/staticpods] Component "kube-controller-manager" upgraded successfully!
[upgrade/staticpods] Preparing for "kube-scheduler" upgrade
[upgrade/staticpods] Renewing scheduler.conf certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-scheduler.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2025-02-23-09-00-47/kube-scheduler.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This can take up to 5m0s
[apiclient] Found 1 Pods for label selector component=kube-scheduler
[upgrade/staticpods] Component "kube-scheduler" upgraded successfully!
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[upgrade] Backing up kubelet config file to /etc/kubernetes/tmp/kubeadm-kubelet-config1449315059/config.yaml
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

[upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.30.10". Enjoy!

[upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets if you haven't already done so.
root@k8s-1:~# kubectl get nodes
NAME    STATUS   ROLES           AGE    VERSION
k8s-1   Ready    control-plane   283d   v1.29.5
k8s-2   Ready    <none>          282d   v1.29.5
k8s-3   Ready    <none>          282d   v1.29.5
root@k8s-1:~#

Note that kubectl get nodes still reports v1.29.5 for every node: the reported version comes from the kubelet, which has not been upgraded yet.

3.3.2 Drain the node

Drain the node: kubectl drain k8s-1 marks the node as unschedulable and evicts its Pods so they are rescheduled onto other available nodes.

root@k8s-1:~# kubectl drain k8s-1 --ignore-daemonsets
node/k8s-1 cordoned
Warning: ignoring DaemonSet-managed Pods: kube-system/calico-node-f26t9, kube-system/kube-proxy-dj4z7
evicting pod kube-system/calico-kube-controllers-8d76c5f9b-gfgwr
pod/calico-kube-controllers-8d76c5f9b-gfgwr evicted
node/k8s-1 drained
root@k8s-1:~# 

3.3.3 Upgrade kubelet and kubectl

1. Upgrade kubelet and kubectl:

apt-mark unhold kubelet kubectl && \
apt-get update && sudo apt-get install -y kubelet='1.30.10-1.1' kubectl='1.30.10-1.1' && \
apt-mark hold kubelet kubectl

2. Restart the kubelet:

systemctl daemon-reload
systemctl restart kubelet

The actual run on k8s-1:

root@k8s-1:~# apt-mark unhold kubelet kubectl && \
> apt-get update && sudo apt-get install -y kubelet='1.30.10-1.1' kubectl='1.30.10-1.1' && \
> apt-mark hold kubelet kubectl


root@k8s-1:~# systemctl daemon-reload
root@k8s-1:~# systemctl restart kubelet

3.3.4 Uncordon the node

Uncordon the node: after the upgrade is complete, run kubectl uncordon k8s-1 to make the node schedulable again.

root@k8s-1:~# kubectl get nodes
NAME    STATUS                     ROLES           AGE    VERSION
k8s-1   Ready,SchedulingDisabled   control-plane   283d   v1.30.10
k8s-2   Ready                      <none>          282d   v1.29.5
k8s-3   Ready                      <none>          282d   v1.29.5
root@k8s-1:~# kubectl uncordon k8s-1
node/k8s-1 uncordoned
root@k8s-1:~# kubectl get nodes     
NAME    STATUS   ROLES           AGE    VERSION
k8s-1   Ready    control-plane   283d   v1.30.10
k8s-2   Ready    <none>          282d   v1.29.5
k8s-3   Ready    <none>          282d   v1.29.5
root@k8s-1:~#

3.4 Upgrade worker nodes

This cluster has two worker nodes, k8s-2 and k8s-3; upgrade each of them with the steps below.

The documentation for upgrading Linux nodes is followed here.

3.4.1 Changing the package repository

Update the package repository on the worker nodes.

root@k8s-2:~# apt-get install -y apt-transport-https ca-certificates curl gpg
root@k8s-2:~# curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.30/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
File '/etc/apt/keyrings/kubernetes-apt-keyring.gpg' exists. Overwrite? (y/N) y

root@k8s-2:~# cat /etc/apt/sources.list.d/kubernetes.list 
deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.30/deb/ /
(Note: change the original 1.29 in this entry to 1.30.)

3.4.2 Upgrade kubeadm

Upgrade kubeadm on the worker node:

apt-mark unhold kubeadm && \
apt-get update && sudo apt-get install -y kubeadm='1.30.10-1.1' && \
apt-mark hold kubeadm

root@k8s-2:~# apt-mark unhold kubeadm && \
> apt-get update && sudo apt-get install -y kubeadm='1.30.10-1.1' && \
> apt-mark hold kubeadm

root@k8s-2:~# kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"30", GitVersion:"v1.30.10", GitCommit:"ccc69071da5040a2bafc1ba9c4775782e0f4e55c", GitTreeState:"clean", BuildDate:"2025-02-12T21:32:03Z", GoVersion:"go1.22.12", Compiler:"gc", Platform:"linux/amd64"}
root@k8s-2:~#

3.4.3 Call "kubeadm upgrade"

Upgrade the worker node's kubelet configuration:

kubeadm upgrade node

root@k8s-2:~# kubeadm upgrade node
[upgrade] Reading configuration from the cluster...
[upgrade] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[preflight] Running pre-flight checks
[preflight] Skipping prepull. Not a control plane node.
[upgrade] Skipping phase. Not a control plane node.
[upgrade] Backing up kubelet config file to /etc/kubernetes/tmp/kubeadm-kubelet-config1960286886/config.yaml
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[upgrade] The configuration for this node was successfully updated!
[upgrade] Now you should go ahead and upgrade the kubelet package using your package manager.
root@k8s-2:~# 

3.4.4 Drain the node

To run kubectl commands on a worker node, admin.conf must be copied over from the control-plane node.

Copying admin.conf so that the worker node can use kubectl:
root@k8s-1:/etc/kubernetes# scp admin.conf root@10.0.20.134:~
root@10.0.20.134's password: 
admin.conf                                                                                                                                                                                                                    100% 5651     5.2MB/s   00:00    
root@k8s-1:/etc/kubernetes#

root@k8s-2:~# pwd
/root
root@k8s-2:~# ls
admin.conf  snap

kubectl commands now work normally on the worker node:
root@k8s-2:~# kubectl --kubeconfig=/root/admin.conf get nodes
NAME    STATUS   ROLES           AGE    VERSION
k8s-1   Ready    control-plane   283d   v1.30.10
k8s-2   Ready    <none>          282d   v1.29.5
k8s-3   Ready    <none>          282d   v1.29.5
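Repeating --kubeconfig on every command can be avoided by exporting the standard KUBECONFIG environment variable for the session:

```shell
# kubectl reads KUBECONFIG when --kubeconfig is not given, so exporting it
# makes every subsequent kubectl call in this shell use the copied file.
export KUBECONFIG=/root/admin.conf
```

After this, a plain `kubectl get nodes` works; add the export to the shell profile to persist it across sessions.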

Drain the worker node:

root@k8s-2:~# kubectl --kubeconfig=/root/admin.conf drain k8s-2 --ignore-daemonsets
node/k8s-2 cordoned
Warning: ignoring DaemonSet-managed Pods: kube-system/calico-node-rchkb, kube-system/kube-proxy-2shtt
evicting pod kube-system/coredns-55cb58b774-s42bq
pod/coredns-55cb58b774-s42bq evicted
node/k8s-2 drained
root@k8s-2:~# 

3.4.5 Upgrade kubelet and kubectl

1. Upgrade kubelet and kubectl:

apt-mark unhold kubelet kubectl && \
apt-get update && sudo apt-get install -y kubelet='1.30.10-1.1' kubectl='1.30.10-1.1' && \
apt-mark hold kubelet kubectl

2. Restart the kubelet:

systemctl daemon-reload
systemctl restart kubelet

The actual run on k8s-2:

root@k8s-2:~# apt-mark unhold kubelet kubectl && \
> apt-get update && sudo apt-get install -y kubelet='1.30.10-1.1' kubectl='1.30.10-1.1' && \
> apt-mark hold kubelet kubectl

root@k8s-2:~# systemctl daemon-reload
root@k8s-2:~# systemctl restart kubelet

3.4.6 Uncordon the node

Uncordon the node: after the upgrade is complete, run kubectl uncordon k8s-2 to make the node schedulable again.

root@k8s-2:~#  kubectl --kubeconfig=/root/admin.conf uncordon k8s-2
node/k8s-2 uncordoned

root@k8s-2:~# kubectl --kubeconfig=/root/admin.conf get nodes
NAME    STATUS   ROLES           AGE    VERSION
k8s-1   Ready    control-plane   283d   v1.30.10
k8s-2   Ready    <none>          282d   v1.30.10  <-- worker node k8s-2 upgraded to 1.30.10
k8s-3   Ready    <none>          282d   v1.29.5
root@k8s-2:~# 

4. Steps to upgrade from 1.30 to 1.31

Reference:

Upgrading kubeadm clusters (Kubernetes documentation): https://v1-31.docs.kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/

The procedure is identical to the 1.29 → 1.30 steps above; only the package repository version needs to be changed to 1.31. After checking the packages available for 1.31, version 1.31.6-1.1 is chosen.

root@k8s-1:~# vi /etc/apt/sources.list.d/kubernetes.list 
deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.31/deb/ /

root@k8s-1:~# apt update
root@k8s-1:~# apt-cache madison kubeadm
   kubeadm | 1.31.6-1.1 | https://pkgs.k8s.io/core:/stable:/v1.31/deb  Packages
   kubeadm | 1.31.5-1.1 | https://pkgs.k8s.io/core:/stable:/v1.31/deb  Packages
   kubeadm | 1.31.4-1.1 | https://pkgs.k8s.io/core:/stable:/v1.31/deb  Packages
   kubeadm | 1.31.3-1.1 | https://pkgs.k8s.io/core:/stable:/v1.31/deb  Packages
   kubeadm | 1.31.2-1.1 | https://pkgs.k8s.io/core:/stable:/v1.31/deb  Packages
   kubeadm | 1.31.1-1.1 | https://pkgs.k8s.io/core:/stable:/v1.31/deb  Packages
   kubeadm | 1.31.0-1.1 | https://pkgs.k8s.io/core:/stable:/v1.31/deb  Packages
root@k8s-1:~# 

5. Verify the status of the cluster

After all nodes are done, verify that every node is Ready and reports version v1.31.6; the upgrade is complete.

root@k8s-1:~# kubectl get nodes
NAME    STATUS   ROLES           AGE    VERSION
k8s-1   Ready    control-plane   284d   v1.31.6
k8s-2   Ready    <none>          284d   v1.31.6
k8s-3   Ready    <none>          284d   v1.31.6
root@k8s-1:~# 

