How to fix a NotReady node (k8s-node2)?
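Before removing the node, it is worth checking why it went NotReady; a failed kubelet is the most common cause. A minimal diagnostic sketch (standard kubectl/systemd tooling; the one-hour log window is an arbitrary example):
[root@k8s-master ~]# kubectl describe node k8s-node2
[root@k8s-node2 ~]# systemctl status kubelet
[root@k8s-node2 ~]# journalctl -u kubelet --since "1 hour ago" --no-pager
If a kubelet restart fixes it, the remove/reset/rejoin steps below are unnecessary; the rest of this walkthrough covers the case where the node must be rebuilt.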
Remove k8s-node2 from the cluster
[root@k8s-master ~]# kubectl delete node k8s-node2
node "k8s-node2" deleted
Reset k8s-node2
Log in to k8s-node2 and run the following command to reset the node, removing it from the cluster and restoring it to its initial state:
[root@k8s-node2 ~]# sudo kubeadm reset
[reset] WARNING: Changes made to this host by 'kubeadm init' or 'kubeadm join' will be reverted.
[reset] Are you sure you want to proceed? [y/N]: y
[preflight] Running pre-flight checks
W1227 04:47:55.563584 1677 removeetcdmember.go:79] [reset] No kubeadm config, using etcd pod spec to get data directory
[reset] No etcd config found. Assuming external etcd
[reset] Please, manually reset etcd to prevent further issues
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
[reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
[reset] Deleting contents of stateful directories: [/var/lib/kubelet /var/lib/dockershim /var/run/kubernetes /var/lib/cni]
The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d
The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually by using the "iptables" command.
If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
to reset your system's IPVS tables.
The reset process does not clean your kubeconfig files and you must remove them manually.
Please, check the contents of the $HOME/.kube/config file.
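As the reset output warns, kubeadm does not clean up the CNI configuration, iptables/IPVS rules, or kubeconfig files. A minimal cleanup sketch on k8s-node2, following the hints printed above (run only what applies to your setup; note that flushing iptables removes all rules on the host, not just the Kubernetes ones):
[root@k8s-node2 ~]# rm -rf /etc/cni/net.d
[root@k8s-node2 ~]# iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
[root@k8s-node2 ~]# ipvsadm --clear
[root@k8s-node2 ~]# rm -rf $HOME/.kube/config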
Rejoin the node
Run the following command on k8s-master to generate the kubeadm join command needed to rejoin the cluster:
[root@k8s-master ~]# kubeadm token create --print-join-command
kubeadm join 192.168.58.231:6443 --token dnht7t.ym4jms2ctru89j2z --discovery-token-ca-cert-hash sha256:82bc8471036711f1c3d81b733082935177e773396e8bb9a5d15f2a0bf95b137e
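Copy the printed command and run it on k8s-node2 to rejoin the cluster (the token and CA cert hash below are the ones generated above; yours will differ, and tokens created this way expire after 24 hours by default):
[root@k8s-node2 ~]# kubeadm join 192.168.58.231:6443 --token dnht7t.ym4jms2ctru89j2z --discovery-token-ca-cert-hash sha256:82bc8471036711f1c3d81b733082935177e773396e8bb9a5d15f2a0bf95b137e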
Check node status
[root@k8s-master ~]# kubectl get nodes
NAME         STATUS   ROLES                  AGE    VERSION
k8s-master   Ready    control-plane,master   44h    v1.21.10
k8s-node1    Ready    <none>                 44h    v1.21.10
k8s-node2    Ready    <none>                 108s   v1.21.10
[root@k8s-master ~]# kubectl get pod -A
NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE
kube-system   calico-kube-controllers-697d846cf4-79hpj   1/1     Running   1          44h
kube-system   calico-node-58ss2                          1/1     Running   1          44h
kube-system   calico-node-gc547                          1/1     Running   1          44h
kube-system   calico-node-hdhxf                          1/1     Running   1          44h
kube-system   coredns-6f6b8cc4f6-5nbb6                   1/1     Running   1          44h
kube-system   coredns-6f6b8cc4f6-q9rhc                   1/1     Running   1          44h
kube-system   etcd-k8s-master                            1/1     Running   1          44h
kube-system   kube-apiserver-k8s-master                  1/1     Running   1          44h
kube-system   kube-controller-manager-k8s-master         1/1     Running   1          44h
kube-system   kube-proxy-7hp6l                           1/1     Running   1          44h
kube-system   kube-proxy-ddhnb                           1/1     Running   1          44h
kube-system   kube-proxy-dwcgd                           1/1     Running   1          44h
kube-system   kube-scheduler-k8s-master                  1/1     Running   1          44h
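As a final sanity check (not part of the original output), you can confirm that pods actually get scheduled onto the rejoined node by listing pods together with their node assignments:
[root@k8s-master ~]# kubectl get pods -A -o wide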