Kubernetes 1.28 Binary Installation and Deployment
Step 1: Configure the Linux servers
#A proxy tool may be needed for some of the downloads below
192.168.196.100
1C8G
kube-apiserver、kube-controller-manager、kube-scheduler、etcd、kubectl、haproxy、keepalived
192.168.196.101
1C8G
kube-apiserver、kube-controller-manager、kube-scheduler、etcd、kubectl、haproxy、keepalived
192.168.196.102
1C8G
kube-apiserver、kube-controller-manager、kube-scheduler、etcd、kubectl
#Notes
haproxy and keepalived provide high availability for the kube-apiserver component on the k8s masters.
kube-controller-manager runs Kubernetes' built-in controllers, for example the replica, node, namespace, and service-account controllers.
It is a never-ending control-loop component responsible for the state of cluster resources.
It watches resource state through kube-apiserver, compares current state with desired state, and when they differ updates resources through kube-apiserver until the two match.
kube-scheduler assigns Pods to suitable nodes.
kube-apiserver is the hub of the control plane; it receives and serves requests from all components, including clients.
kubectl is the Kubernetes command-line tool.
1.1 Hostname configuration
hostnamectl set-hostname ma01
hostnamectl set-hostname ma02
hostnamectl set-hostname ma03
1.2 /etc/hosts name resolution
cat >> /etc/hosts << EOF
192.168.196.100 ma01
192.168.196.101 ma02
192.168.196.102 ma03
EOF
1.3 Security settings
systemctl stop firewalld
systemctl disable firewalld
setenforce 0
sed -ri 's/SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
1.4 Disable the swap partition
swapoff -a
Delete or comment out the swap entry in /etc/fstab (see the sketch after this block)
echo "vm.swappiness=0" >> /etc/sysctl.conf
1.5 Time synchronization
ntpdate time.windows.com
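ntpdate performs only a one-off sync; to keep the clocks aligned you can schedule it, for example via cron (a sketch, assuming ntpdate is installed at /usr/sbin/ntpdate):
(crontab -l 2>/dev/null; echo "0 * * * * /usr/sbin/ntpdate time.windows.com") | crontab -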
1.6 Install the ipvs management tools and load kernel modules
yum -y install ipvsadm ipset sysstat conntrack libseccomp
cat > /etc/sysconfig/modules/ipvs.modules << EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack
EOF
#Make executable, run, and verify the modules are loaded
chmod 755 /etc/sysconfig/modules/ipvs.modules
bash /etc/sysconfig/modules/ipvs.modules
lsmod | grep -e ip_vs -e nf_conntrack
1.7 Linux kernel upgrade
yum -y install perl
#Import the elrepo GPG key
rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
#Install the elrepo yum repository
yum -y install https://www.elrepo.org/elrepo-release-7.0-4.el7.elrepo.noarch.rpm
#Install the kernel-lt package (ml = mainline, lt = long-term support)
yum --enablerepo="elrepo-kernel" -y install kernel-lt.x86_64
#Set the default GRUB2 boot entry to 0 so the new kernel boots by default
grub2-set-default 0
#Regenerate the GRUB2 configuration
grub2-mkconfig -o /boot/grub2/grub.cfg
#After configuring the kernel on all nodes, reboot so the upgraded kernel takes effect
reboot
#Verify the upgrade
uname -r
1.8 Kernel tuning
#Add bridge filtering and IP forwarding settings
cat > /etc/sysctl.d/k8s.conf <<EOF
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
#Load the br_netfilter module first (temporary until reboot); the bridge sysctls above fail without it
modprobe br_netfilter
#Apply and verify the settings (plain sysctl -p also re-applies /etc/sysctl.conf, including vm.swappiness)
sysctl -p
sysctl -p /etc/sysctl.d/k8s.conf
#Load the modules permanently
cat > /etc/modules-load.d/containerd.conf << EOF
overlay
br_netfilter
EOF
#Enable loading at boot
systemctl enable --now systemd-modules-load.service
#Verify the module is loaded
lsmod | grep br_netfilter
1.9 Passwordless SSH between servers
ssh-keygen
ssh-copy-id root@ip
#Verify from each server to every other server, including itself.
ssh ma01
ssh ma02
ssh ma03
Step 2: Install and deploy HAProxy and keepalived
[haproxy and keepalived]
IP:192.168.196.100
IP:192.168.196.101
VIP:192.168.196.200
2.1 Install haproxy and keepalived with yum
yum -y install haproxy keepalived
#HAProxy is an open-source TCP and HTTP load balancer and reverse proxy with built-in monitoring and statistics that expose real-time server status and metrics.
#HAProxy is well suited to web sites under very heavy load.
Note: during this k8s deployment the HAProxy VIP failed over several times because of resource pressure; observation showed this had no effect on the other k8s components.
2.2 HAProxy configuration
cat >/etc/haproxy/haproxy.cfg <<"EOF"
# Global settings
global
maxconn 2000
ulimit-n 16384
log 127.0.0.1 local0 err
stats timeout 30s
# Defaults
defaults
log global
mode http
option httplog
timeout connect 5000
timeout client 50000
timeout server 50000
timeout http-request 15s
timeout http-keep-alive 15s
# Monitoring URI, reachable from the frontend at IP:33305/monitor
frontend monitor-in
bind *:33305
mode http
option httplog
monitor-uri /monitor
# TCP frontend for the API server; default_backend names the load-balancing backend section below
frontend k8s-master
bind 0.0.0.0:6443
bind 127.0.0.1:6443
mode tcp
option tcplog
tcp-request inspect-delay 5s
default_backend k8s-master
# Backend referenced by default_backend k8s-master above
backend k8s-master
mode tcp
option tcplog
option tcp-check
balance roundrobin
default-server inter 10s downinter 5s rise 2 fall 2 slowstart 60s maxconn 250 maxqueue 256 weight 100
server ma01 192.168.196.100:9443 check
server ma02 192.168.196.101:9443 check
server ma03 192.168.196.102:9443 check
EOF
Parameter details:
log global: enable global logging; HAProxy records its running state and request information.
mode http: run HAProxy in HTTP mode, processing HTTP requests and responses.
timeout connect 5000: give up connecting to a backend server after 5000 ms.
timeout client 50000: close the client connection after 50000 ms without data exchange.
timeout server 50000: close the backend connection after 50000 ms without data exchange.
timeout http-request 15s: abort an HTTP request that is not fully processed within 15 s.
timeout http-keep-alive 15s: keep an idle HTTP keep-alive connection (multiple requests over one TCP connection) open for at most 15 s.
option tcplog: log connection information from TCP clients and backend servers for monitoring and troubleshooting.
tcp-request inspect-delay 5s: wait up to 5 s while processing a TCP request so complete headers or data can be received.
inter 10s: health-check interval; the load balancer waits 10 s between checks.
downinter 5s: once a server is marked DOWN, check every 5 s to see whether it has recovered.
rise 2: number of consecutive successful health checks (2) needed to mark a DOWN server UP again.
fall 2: number of consecutive failed health checks (2) needed to mark an UP server DOWN.
slowstart 60s: ramp the server's connection limit (maxconn) up over 60 s so a returning server is not hit by a traffic spike.
maxconn 250: maximum of 250 concurrent connections per backend server.
maxqueue 256: queue up to 256 connections once maxconn is reached, until a slot frees up.
weight 100: server weight for the load-balancing algorithm; higher weight receives more requests.
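Before starting the service you can have HAProxy validate the file (check mode only; no traffic is served):
haproxy -c -f /etc/haproxy/haproxy.cfg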
2.3 keepalived configuration
#ma01 configuration
cat > /etc/keepalived/keepalived.conf <<"EOF"
! Configuration File for keepalived
global_defs {
router_id LVS_DEVEL
script_user root
enable_script_security
}
vrrp_script chk_apiserver {
script "/etc/keepalived/check_apiserver.sh"
interval 5
weight -5
fall 2
rise 1
}
vrrp_instance VI_1 {
state MASTER
interface ens33
mcast_src_ip 192.168.196.100
virtual_router_id 51
priority 100
advert_int 2
authentication {
auth_type PASS
auth_pass K8SHA_KA_AUTH
}
virtual_ipaddress {
192.168.196.200
}
track_script {
chk_apiserver
}
}
EOF
#ma02 configuration
cat > /etc/keepalived/keepalived.conf <<"EOF"
! Configuration File for keepalived
global_defs {
router_id LVS_DEVEL
script_user root
enable_script_security
}
vrrp_script chk_apiserver {
script "/etc/keepalived/check_apiserver.sh"
interval 5
weight -5
fall 2
rise 1
}
vrrp_instance VI_1 {
state BACKUP
interface ens33
mcast_src_ip 192.168.196.101
virtual_router_id 51
priority 99
advert_int 2
authentication {
auth_type PASS
auth_pass K8SHA_KA_AUTH
}
virtual_ipaddress {
192.168.196.200
}
track_script {
chk_apiserver
}
}
EOF
2.4 HAProxy health-check script
#ma01 and ma02
cat > /etc/keepalived/check_apiserver.sh <<"EOF"
#!/bin/bash
err=0
for k in $(seq 1 3)
do
check_code=$(pgrep haproxy)
if [[ $check_code == "" ]]; then
err=$(expr $err + 1)
sleep 1
continue
else
err=0
break
fi
done
if [[ $err != "0" ]]; then
echo "systemctl stop keepalived"
/usr/bin/systemctl stop keepalived
exit 1
else
exit 0
fi
EOF
chmod +x /etc/keepalived/check_apiserver.sh
#Start the services and verify
systemctl daemon-reload
systemctl enable --now haproxy
systemctl enable --now keepalived
#ip a s
The ens33 interface on ma01 should now carry both 192.168.196.100 and the VIP 192.168.196.200.
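A quick failover sketch (stop haproxy on ma01, observe on ma02, then restore):
#On ma01: stop haproxy so the health-check script stops keepalived and releases the VIP
systemctl stop haproxy
#On ma02: the VIP should appear within a few advertisement intervals
ip a s ens33 | grep 192.168.196.200
#On ma01: restore both services
systemctl start haproxy && systemctl start keepalived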
Step 3: Deploy the etcd cluster
IP:192.168.196.100
IP:192.168.196.101
IP:192.168.196.102
3.1 Install the cfssl certificate tools
#Create a working directory
mkdir -p /data/k8s-work
Install the cfssl tools and generate the CA certificate; releases are at:
https://github.com/cloudflare/cfssl/releases
# 1. Download cfssl, cfssljson, and cfssl-certinfo
# cfssl: issues and signs certificates
# cfssljson: converts cfssl's JSON output into certificate files
# cfssl-certinfo: inspects and verifies certificate contents
# Kubernetes docs: https://kubernetes.io/zh-cn/docs/tasks/administer-cluster/certificates/
# Version v1.5.0
curl -L https://github.com/cloudflare/cfssl/releases/download/v1.5.0/cfssl_1.5.0_linux_amd64 -o cfssl
chmod +x cfssl
curl -L https://github.com/cloudflare/cfssl/releases/download/v1.5.0/cfssljson_1.5.0_linux_amd64 -o cfssljson
chmod +x cfssljson
curl -L https://github.com/cloudflare/cfssl/releases/download/v1.5.0/cfssl-certinfo_1.5.0_linux_amd64 -o cfssl-certinfo
chmod +x cfssl-certinfo
cp cfssl /usr/local/bin/
cp cfssljson /usr/local/bin/
cp cfssl-certinfo /usr/local/bin/
# 2. Make cfssl, cfssljson, and cfssl-certinfo executable
chmod +x /usr/local/bin/cfssl*
cfssl version
3.2 Certificate authority (CA) request JSON
cat > ca-csr.json <<"EOF"
{
"CN": "kubernetes",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"ST": "Beijing",
"L": "Beijing",
"O": "kubemsb",
"OU": "CN"
}
],
"ca": {
"expiry": "87600h"
}
}
EOF
#Generate the CA certificate
cfssl gencert -initca ca-csr.json | cfssljson -bare ca
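Optionally inspect the generated CA with cfssl-certinfo (prints subject, issuer, and expiry):
cfssl-certinfo -cert ca.pem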
#Configure the CA signing policy
cat > ca-config.json <<"EOF"
{
"signing": {
"default": {
"expiry": "87600h"
},
"profiles": {
"kubernetes": {
"usages": [
"signing",
"key encipherment",
"server auth",
"client auth"
],
"expiry": "87600h"
}
}
}
}
EOF
#etcd certificate request file
cat > etcd-csr.json <<"EOF"
{
"CN": "etcd",
"hosts": [
"127.0.0.1",
"192.168.196.100",
"192.168.196.101",
"192.168.196.102",
"192.168.196.200"
],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [{
"C": "CN",
"ST": "Beijing",
"L": "Beijing",
"O": "kubemsb",
"OU": "CN"
}]
}
EOF
#Generate the etcd certificate
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes etcd-csr.json | cfssljson -bare etcd
Download the etcd release package
cd /soft
wget https://github.com/etcd-io/etcd/releases/download/v3.5.11/etcd-v3.5.11-linux-amd64.tar.gz
tar -xf etcd-v3.5.11-linux-amd64.tar.gz
cd etcd-v3.5.11-linux-amd64
cp etcd* /usr/local/bin
scp etcd* ma02:/usr/local/bin
scp etcd* ma03:/usr/local/bin
#Create the etcd config directory (run on ma01, ma02, and ma03)
mkdir /etc/etcd
#etcd configuration files; per-server settings below
#ma01
cat > /etc/etcd/etcd.conf <<EOF
#[Member]
ETCD_NAME="etcd1"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.196.100:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.196.100:2379,http://127.0.0.1:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.196.100:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.196.100:2379"
ETCD_INITIAL_CLUSTER="etcd1=https://192.168.196.100:2380,etcd2=https://192.168.196.101:2380,etcd3=https://192.168.196.102:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
EOF
#ma02
cat > /etc/etcd/etcd.conf <<EOF
#[Member]
ETCD_NAME="etcd2"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.196.101:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.196.101:2379,http://127.0.0.1:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.196.101:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.196.101:2379"
ETCD_INITIAL_CLUSTER="etcd1=https://192.168.196.100:2380,etcd2=https://192.168.196.101:2380,etcd3=https://192.168.196.102:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
EOF
#ma03
cat > /etc/etcd/etcd.conf <<EOF
#[Member]
ETCD_NAME="etcd3"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.196.102:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.196.102:2379,http://127.0.0.1:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.196.102:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.196.102:2379"
ETCD_INITIAL_CLUSTER="etcd1=https://192.168.196.100:2380,etcd2=https://192.168.196.101:2380,etcd3=https://192.168.196.102:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
EOF
#Create the data and certificate directories [ma01, ma02, ma03]
mkdir -p /etc/etcd/ssl
mkdir -p /var/lib/etcd/default.etcd
#Local copy on ma01
cp /data/k8s-work/ca_cert/ca*.pem /etc/etcd/ssl
cp /data/k8s-work/etcd_cert/etcd*.pem /etc/etcd/ssl
#Remote copy to ma02
scp /data/k8s-work/ca_cert/ca*.pem ma02:/etc/etcd/ssl
scp /data/k8s-work/etcd_cert/etcd*.pem ma02:/etc/etcd/ssl
#Remote copy to ma03
scp /data/k8s-work/ca_cert/ca*.pem ma03:/etc/etcd/ssl
scp /data/k8s-work/etcd_cert/etcd*.pem ma03:/etc/etcd/ssl
Configure the etcd systemd unit [all hosts]
cat > /etc/systemd/system/etcd.service <<"EOF"
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
[Service]
Type=notify
EnvironmentFile=-/etc/etcd/etcd.conf
WorkingDirectory=/var/lib/etcd/
ExecStart=/usr/local/bin/etcd \
--cert-file=/etc/etcd/ssl/etcd.pem \
--key-file=/etc/etcd/ssl/etcd-key.pem \
--trusted-ca-file=/etc/etcd/ssl/ca.pem \
--peer-cert-file=/etc/etcd/ssl/etcd.pem \
--peer-key-file=/etc/etcd/ssl/etcd-key.pem \
--peer-trusted-ca-file=/etc/etcd/ssl/ca.pem \
--peer-client-cert-auth \
--client-cert-auth
Restart=on-failure
RestartSec=5
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
EOF
#Start the etcd cluster
systemctl daemon-reload
systemctl enable --now etcd.service
systemctl status etcd
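A health-check sketch using etcdctl (shipped in the etcd tarball; the TLS flags match the cert paths used above):
ETCDCTL_API=3 etcdctl \
--endpoints=https://192.168.196.100:2379,https://192.168.196.101:2379,https://192.168.196.102:2379 \
--cacert=/etc/etcd/ssl/ca.pem \
--cert=/etc/etcd/ssl/etcd.pem \
--key=/etc/etcd/ssl/etcd-key.pem \
endpoint health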
Step 4: Deploy the Kubernetes control plane
cd /soft
wget https://dl.k8s.io/v1.28.0/kubernetes-server-linux-amd64.tar.gz
tar -xvf kubernetes-server-linux-amd64.tar.gz
cd kubernetes/server/bin/
cp kube-apiserver kube-controller-manager kube-scheduler kubectl /usr/local/bin/
scp kube-apiserver kube-controller-manager kube-scheduler kubectl ma02:/usr/local/bin/
scp kube-apiserver kube-controller-manager kube-scheduler kubectl ma03:/usr/local/bin/
#Create directories on the cluster nodes [ma01, ma02, ma03]
mkdir -p /etc/kubernetes/ssl
mkdir -p /var/log/kubernetes
4.1 Deploy kube-apiserver
#kube-apiserver certificate request file
cd /data/k8s-work/kube-apiserver
cat >kube-apiserver-csr.json <<EOF
{
"CN": "kubernetes",
"hosts": [
"127.0.0.1",
"192.168.196.100",
"192.168.196.101",
"192.168.196.102",
"192.168.196.200",
"10.96.0.1",
"kubernetes",
"kubernetes.default",
"kubernetes.default.svc",
"kubernetes.default.svc.cluster",
"kubernetes.default.svc.cluster.local"
],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"ST": "Beijing",
"L": "Beijing",
"O": "kubemsb",
"OU": "CN"
}
]
}
EOF
#Generate the apiserver certificate and token file
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-apiserver-csr.json | cfssljson -bare kube-apiserver
#Generate token.csv
cat > token.csv << EOF
$(head -c 16 /dev/urandom | od -An -t x | tr -d ' '),kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF
#Distribute the files
cp /data/k8s-work/ca*.pem /etc/kubernetes/ssl/
scp /data/k8s-work/ca*.pem ma02:/etc/kubernetes/ssl/
scp /data/k8s-work/ca*.pem ma03:/etc/kubernetes/ssl/
cp /data/k8s-work/kube-apiserver/token.csv /etc/kubernetes/
scp /data/k8s-work/kube-apiserver/token.csv ma02:/etc/kubernetes/
scp /data/k8s-work/kube-apiserver/token.csv ma03:/etc/kubernetes/
cp /data/k8s-work/kube-apiserver/kube-apiserver*.pem /etc/kubernetes/ssl/
scp /data/k8s-work/kube-apiserver/kube-apiserver*.pem ma02:/etc/kubernetes/ssl/
scp /data/k8s-work/kube-apiserver/kube-apiserver*.pem ma03:/etc/kubernetes/ssl/
#Create the apiserver config file [ma01, ma02, ma03]
#ma01
cat >/etc/kubernetes/kube-apiserver.conf <<EOF
KUBE_APISERVER_OPTS="--enable-admission-plugins=NamespaceLifecycle,NodeRestriction,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota \
--anonymous-auth=false \
--bind-address=192.168.196.100 \
--advertise-address=192.168.196.100 \
--secure-port=9443 \
--authorization-mode=Node,RBAC \
--runtime-config=api/all=true \
--enable-bootstrap-token-auth \
--service-cluster-ip-range=10.96.0.0/16 \
--token-auth-file=/etc/kubernetes/token.csv \
--service-node-port-range=30000-32767 \
--tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem \
--tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem \
--client-ca-file=/etc/kubernetes/ssl/ca.pem \
--kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem \
--kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem \
--service-account-key-file=/etc/kubernetes/ssl/ca-key.pem \
--service-account-signing-key-file=/etc/kubernetes/ssl/ca-key.pem \
--service-account-issuer=api \
--etcd-cafile=/etc/etcd/ssl/ca.pem \
--etcd-certfile=/etc/etcd/ssl/etcd.pem \
--etcd-keyfile=/etc/etcd/ssl/etcd-key.pem \
--etcd-servers=https://192.168.196.100:2379,https://192.168.196.101:2379,https://192.168.196.102:2379 \
--allow-privileged=true \
--apiserver-count=3 \
--audit-log-maxage=30 \
--audit-log-maxbackup=3 \
--audit-log-maxsize=100 \
--audit-log-path=/var/log/kube-apiserver-audit.log \
--requestheader-allowed-names=aggregator \
--requestheader-allowed-names=front-proxy-client \
--requestheader-extra-headers-prefix=X-Remote-Extra- \
--requestheader-group-headers=X-Remote-Group \
--requestheader-username-headers=X-Remote-User \
--requestheader-client-ca-file=/etc/kubernetes/ssl/agg_ca.pem \
--proxy-client-cert-file=/etc/kubernetes/ssl/metrics-server.pem \
--proxy-client-key-file=/etc/kubernetes/ssl/metrics-server-key.pem \
--enable-aggregator-routing=true \
--event-ttl=1h \
--v=4"
EOF
#ma02
cat >/etc/kubernetes/kube-apiserver.conf <<EOF
KUBE_APISERVER_OPTS="--enable-admission-plugins=NamespaceLifecycle,NodeRestriction,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota \
--anonymous-auth=false \
--bind-address=192.168.196.101 \
--advertise-address=192.168.196.101 \
--secure-port=9443 \
--authorization-mode=Node,RBAC \
--runtime-config=api/all=true \
--enable-bootstrap-token-auth \
--service-cluster-ip-range=10.96.0.0/16 \
--token-auth-file=/etc/kubernetes/token.csv \
--service-node-port-range=30000-32767 \
--tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem \
--tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem \
--client-ca-file=/etc/kubernetes/ssl/ca.pem \
--kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem \
--kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem \
--service-account-key-file=/etc/kubernetes/ssl/ca-key.pem \
--service-account-signing-key-file=/etc/kubernetes/ssl/ca-key.pem \
--service-account-issuer=api \
--etcd-cafile=/etc/etcd/ssl/ca.pem \
--etcd-certfile=/etc/etcd/ssl/etcd.pem \
--etcd-keyfile=/etc/etcd/ssl/etcd-key.pem \
--etcd-servers=https://192.168.196.100:2379,https://192.168.196.101:2379,https://192.168.196.102:2379 \
--allow-privileged=true \
--apiserver-count=3 \
--audit-log-maxage=30 \
--audit-log-maxbackup=3 \
--audit-log-maxsize=100 \
--audit-log-path=/var/log/kube-apiserver-audit.log \
--requestheader-allowed-names=aggregator \
--requestheader-allowed-names=front-proxy-client \
--requestheader-extra-headers-prefix=X-Remote-Extra- \
--requestheader-group-headers=X-Remote-Group \
--requestheader-username-headers=X-Remote-User \
--requestheader-client-ca-file=/etc/kubernetes/ssl/agg_ca.pem \
--proxy-client-cert-file=/etc/kubernetes/ssl/metrics-server.pem \
--proxy-client-key-file=/etc/kubernetes/ssl/metrics-server-key.pem \
--enable-aggregator-routing=true \
--event-ttl=1h \
--v=4"
EOF
#ma03
cat >/etc/kubernetes/kube-apiserver.conf <<EOF
KUBE_APISERVER_OPTS="--enable-admission-plugins=NamespaceLifecycle,NodeRestriction,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota \
--anonymous-auth=false \
--bind-address=192.168.196.102 \
--advertise-address=192.168.196.102 \
--secure-port=9443 \
--authorization-mode=Node,RBAC \
--runtime-config=api/all=true \
--enable-bootstrap-token-auth \
--service-cluster-ip-range=10.96.0.0/16 \
--token-auth-file=/etc/kubernetes/token.csv \
--service-node-port-range=30000-32767 \
--tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem \
--tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem \
--client-ca-file=/etc/kubernetes/ssl/ca.pem \
--kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem \
--kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem \
--service-account-key-file=/etc/kubernetes/ssl/ca-key.pem \
--service-account-signing-key-file=/etc/kubernetes/ssl/ca-key.pem \
--service-account-issuer=api \
--etcd-cafile=/etc/etcd/ssl/ca.pem \
--etcd-certfile=/etc/etcd/ssl/etcd.pem \
--etcd-keyfile=/etc/etcd/ssl/etcd-key.pem \
--etcd-servers=https://192.168.196.100:2379,https://192.168.196.101:2379,https://192.168.196.102:2379 \
--allow-privileged=true \
--apiserver-count=3 \
--audit-log-maxage=30 \
--audit-log-maxbackup=3 \
--audit-log-maxsize=100 \
--audit-log-path=/var/log/kube-apiserver-audit.log \
--requestheader-allowed-names=aggregator \
--requestheader-allowed-names=front-proxy-client \
--requestheader-extra-headers-prefix=X-Remote-Extra- \
--requestheader-group-headers=X-Remote-Group \
--requestheader-username-headers=X-Remote-User \
--requestheader-client-ca-file=/etc/kubernetes/ssl/agg_ca.pem \
--proxy-client-cert-file=/etc/kubernetes/ssl/metrics-server.pem \
--proxy-client-key-file=/etc/kubernetes/ssl/metrics-server-key.pem \
--enable-aggregator-routing=true \
--event-ttl=1h \
--v=4"
EOF
Note on the aggregation layer:
#Aggregation-layer parameters
--requestheader-allowed-names=aggregator
--requestheader-allowed-names=front-proxy-client
--requestheader-extra-headers-prefix=X-Remote-Extra-
--requestheader-group-headers=X-Remote-Group
--requestheader-username-headers=X-Remote-User
--requestheader-client-ca-file=/etc/kubernetes/ssl/agg_ca.pem
--proxy-client-cert-file=/etc/kubernetes/ssl/metrics-server.pem
--proxy-client-key-file=/etc/kubernetes/ssl/metrics-server-key.pem
#Kubernetes docs on the aggregation layer:
https://v1-28.docs.kubernetes.io/zh-cn/docs/tasks/extend-kubernetes/configure-aggregation-layer/
#Configure the dedicated aggregation-layer certificate and key
Generate the dedicated agg CA plus the metrics-server certificate from metrics-server-csr.json
cd /data/k8s-work/agg_cert
cat > agg-ca-csr.json <<"EOF"
{
"CN": "agg",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"ST": "Beijing",
"L": "Beijing",
"O": "kubemsb",
"OU": "CN"
}
],
"ca": {
"expiry": "87600h"
}
}
EOF
cfssl gencert -initca agg-ca-csr.json | cfssljson -bare agg_ca
cat > agg_ca-config.json <<"EOF"
{
"signing": {
"default": {
"expiry": "87600h"
},
"profiles": {
"agg": {
"usages": [
"signing",
"key encipherment",
"server auth",
"client auth"
],
"expiry": "87600h"
}
}
}
}
EOF
#The CN here must match one of the names given to the apiserver's --requestheader-allowed-names parameter
cat > metrics-server-csr.json << EOF
{
"CN": "aggregator",
"hosts": [],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"ST": "BeiJing",
"L": "BeiJing",
"O": "k8s",
"OU": "System"
}
]
}
EOF
#Generate the certificate
cfssl gencert -ca=agg_ca.pem -ca-key=agg_ca-key.pem -config=agg_ca-config.json -profile=agg metrics-server-csr.json | cfssljson -bare metrics-server
#Distribute
cp /data/k8s-work/agg_cert/agg_ca*.pem /etc/kubernetes/ssl/
cp /data/k8s-work/agg_cert/metrics-server*.pem /etc/kubernetes/ssl/
scp /data/k8s-work/agg_cert/agg_ca*.pem ma02:/etc/kubernetes/ssl/
scp /data/k8s-work/agg_cert/metrics-server*.pem ma02:/etc/kubernetes/ssl/
scp /data/k8s-work/agg_cert/agg_ca*.pem ma03:/etc/kubernetes/ssl/
scp /data/k8s-work/agg_cert/metrics-server*.pem ma03:/etc/kubernetes/ssl/
ls -l
#Create the apiserver systemd unit [ma01, ma02, ma03]
cat > /etc/systemd/system/kube-apiserver.service << "EOF"
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=etcd.service
Wants=etcd.service
[Service]
EnvironmentFile=-/etc/kubernetes/kube-apiserver.conf
ExecStart=/usr/local/bin/kube-apiserver $KUBE_APISERVER_OPTS
Restart=on-failure
RestartSec=5
Type=notify
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
EOF
#Start and test kube-apiserver
systemctl daemon-reload
systemctl enable --now kube-apiserver
systemctl status kube-apiserver
# Test
curl --insecure https://192.168.196.100:9443/
curl --insecure https://192.168.196.101:9443/
curl --insecure https://192.168.196.102:9443/
curl --insecure https://192.168.196.200:6443/
4.2 Deploy kubectl
#kubectl (admin) certificate request file
cat > admin-csr.json << "EOF"
{
"CN": "admin",
"hosts": [],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"ST": "Beijing",
"L": "Beijing",
"O": "system:masters",
"OU": "system"
}
]
}
EOF
#Generate the certificate
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin
#Copy the files to the target directories
cp admin*.pem /etc/kubernetes/ssl/
scp admin*.pem ma02:/etc/kubernetes/ssl/
scp admin*.pem ma03:/etc/kubernetes/ssl/
#Gather all keys back under /data/k8s-work; they were previously sorted into subdirectories with mv, so copy them back
cd /data/k8s-work
cp admin/* ./
cp agg_cert/* ./
cp etcd_cert/* ./
cp kube-apiserver/* ./
#Generate the context configuration; this file can be used to switch between cluster instances
# Generate the kube.config file
# This command defines a cluster named kubernetes, specifying its CA file (ca.pem), whether to embed the certificate in the kubeconfig (--embed-certs=true),
# and the API server address (https://192.168.196.200:6443). The result is saved to kube.config.
kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server=https://192.168.196.200:6443 --kubeconfig=kube.config
# Set the user credentials
# Defines a user named "admin" with a client certificate (admin.pem) and client key (admin-key.pem); the credentials are likewise stored in kube.config.
kubectl config set-credentials admin --client-certificate=admin.pem --client-key=admin-key.pem --embed-certs=true --kubeconfig=kube.config
# Configure the context
# Creates a context named "kubernetes" that ties together the cluster (kubernetes) and user (admin) defined above; contexts tell kubectl which cluster and user to use.
kubectl config set-context kubernetes --cluster=kubernetes --user=admin --kubeconfig=kube.config
# Use the context
# Tells kubectl to use the "kubernetes" context; subsequent kubectl commands operate with that context's cluster and user.
kubectl config use-context kubernetes --kubeconfig=kube.config
#Inspect the result
cat kube.config
#Prepare the kubectl config file and bind roles
#ma01, ma02, ma03
#Copy kube.config into /root/.kube and rename it config
#.kube/config is the credential file; only servers holding it are authorized to use kubectl against the cluster
#The same config file can be used to switch between different cluster environments
mkdir -p /root/.kube
cp kube.config ~/.kube/config
scp kube.config ma02:/root/.kube/config
scp kube.config ma03:/root/.kube/config
The following command creates a ClusterRoleBinding in the cluster. Specifically:
ClusterRoleBinding: an object that binds a ClusterRole to users, groups, or service accounts; creating one grants the subject access to specific resources.
kube-apiserver:kubelet-apis: the name of this ClusterRoleBinding; naming it this way documents its purpose.
--clusterrole=system:kubelet-api-admin: the ClusterRole to bind; system:kubelet-api-admin is a built-in role granting administrative access to the kubelet API.
--user kubernetes: binds the ClusterRole to the user named kubernetes, who gains the system:kubelet-api-admin permissions.
--kubeconfig=/root/.kube/config: the kubeconfig file used to reach the Kubernetes API, containing connection details and credentials.
[run on ma01 only]
kubectl create clusterrolebinding kube-apiserver:kubelet-apis --clusterrole=system:kubelet-api-admin --user kubernetes --kubeconfig=/root/.kube/config
#View cluster info
kubectl cluster-info
#Check component status [the other components are not installed yet]
#Command
kubectl get componentstatuses
#Result
controller-manager Unhealthy Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused
scheduler Unhealthy Get "https://127.0.0.1:10259/healthz": dial tcp 127.0.0.1:10259: connect: connection refused
etcd-0 Healthy ok
#List resources in all namespaces
kubectl get all --all-namespaces
4.3 Deploy kube-controller-manager
#kube-controller-manager certificate request file
cat > kube-controller-manager-csr.json <<EOF
{
"CN": "system:kube-controller-manager",
"key": {
"algo": "rsa",
"size": 2048
},
"hosts": [
"127.0.0.1",
"192.168.196.100",
"192.168.196.101",
"192.168.196.102"
],
"names": [
{
"C": "CN",
"ST": "Beijing",
"L": "Beijing",
"O": "system:kube-controller-manager",
"OU": "system"
}
]
}
EOF
Generate the kube-controller-manager certificate
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes \
kube-controller-manager-csr.json | cfssljson -bare kube-controller-manager
#Files generated:
kube-controller-manager.csr
kube-controller-manager-csr.json
kube-controller-manager-key.pem
kube-controller-manager.pem
#Copy the files to the target directories
mv kube-controller-manager* ./kube-controller-manager
cp kube-controller-manager/* ./
cp /data/k8s-work/kube-controller-manager*.pem /etc/kubernetes/ssl/
scp /data/k8s-work/kube-controller-manager*.pem ma02:/etc/kubernetes/ssl/
scp /data/k8s-work/kube-controller-manager*.pem ma03:/etc/kubernetes/ssl/
#Create kube-controller-manager.kubeconfig
kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server=https://192.168.196.200:6443 --kubeconfig=kube-controller-manager.kubeconfig
kubectl config set-credentials system:kube-controller-manager --client-certificate=kube-controller-manager.pem --client-key=kube-controller-manager-key.pem --embed-certs=true --kubeconfig=kube-controller-manager.kubeconfig
kubectl config set-context system:kube-controller-manager --cluster=kubernetes --user=system:kube-controller-manager --kubeconfig=kube-controller-manager.kubeconfig
kubectl config use-context system:kube-controller-manager --kubeconfig=kube-controller-manager.kubeconfig
#Distribute
cp /data/k8s-work/kube-controller-manager.kubeconfig /etc/kubernetes/
scp /data/k8s-work/kube-controller-manager.kubeconfig ma02:/etc/kubernetes/
scp /data/k8s-work/kube-controller-manager.kubeconfig ma03:/etc/kubernetes/
#Create the kube-controller-manager config file [ma01, ma02, ma03]
cat > /etc/kubernetes/kube-controller-manager.conf << "EOF"
KUBE_CONTROLLER_MANAGER_OPTS=" \
--secure-port=10257 \
--bind-address=127.0.0.1 \
--kubeconfig=/etc/kubernetes/kube-controller-manager.kubeconfig \
--service-cluster-ip-range=10.96.0.0/16 \
--cluster-name=kubernetes \
--cluster-signing-cert-file=/etc/kubernetes/ssl/ca.pem \
--cluster-signing-key-file=/etc/kubernetes/ssl/ca-key.pem \
--allocate-node-cidrs=true \
--cluster-cidr=10.244.0.0/16 \
--root-ca-file=/etc/kubernetes/ssl/ca.pem \
--service-account-private-key-file=/etc/kubernetes/ssl/ca-key.pem \
--leader-elect=true \
--feature-gates=RotateKubeletServerCertificate=true \
--controllers=*,bootstrapsigner,tokencleaner \
--tls-cert-file=/etc/kubernetes/ssl/kube-controller-manager.pem \
--tls-private-key-file=/etc/kubernetes/ssl/kube-controller-manager-key.pem \
--use-service-account-credentials=true \
--v=2"
EOF
#Create the systemd unit [ma01, ma02, ma03]
cat > /usr/lib/systemd/system/kube-controller-manager.service << "EOF"
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=/etc/kubernetes/kube-controller-manager.conf
ExecStart=/usr/local/bin/kube-controller-manager $KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
EOF
#Copy the certificates, keys, and configs to the target directories
cp kube-controller-manager*.pem /etc/kubernetes/ssl/
cp kube-controller-manager.kubeconfig /etc/kubernetes/
cp kube-controller-manager.conf /etc/kubernetes/
cp kube-controller-manager.service /usr/lib/systemd/system/
scp kube-controller-manager*.pem ma02:/etc/kubernetes/ssl/
scp kube-controller-manager*.pem ma03:/etc/kubernetes/ssl/
scp kube-controller-manager.kubeconfig kube-controller-manager.conf ma02:/etc/kubernetes/
scp kube-controller-manager.kubeconfig kube-controller-manager.conf ma03:/etc/kubernetes/
scp kube-controller-manager.service ma02:/usr/lib/systemd/system/
scp kube-controller-manager.service ma03:/usr/lib/systemd/system/
#Start
systemctl daemon-reload
systemctl enable --now kube-controller-manager
systemctl status kube-controller-manager
4.4 Deploy kube-scheduler
#kube-scheduler certificate request file
cat > kube-scheduler-csr.json << "EOF"
{
"CN": "system:kube-scheduler",
"hosts": [
"127.0.0.1",
"192.168.196.100",
"192.168.196.101",
"192.168.196.102"
],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"ST": "Beijing",
"L": "Beijing",
"O": "system:kube-scheduler",
"OU": "system"
}
]
}
EOF
#Generate the kube-scheduler certificate
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-scheduler-csr.json | cfssljson -bare kube-scheduler
#Files generated:
kube-scheduler.csr
kube-scheduler-csr.json
kube-scheduler-key.pem
kube-scheduler.pem
#Distribute
cp /data/k8s-work/kube-scheduler*.pem /etc/kubernetes/ssl/
scp /data/k8s-work/kube-scheduler*.pem ma02:/etc/kubernetes/ssl/
scp /data/k8s-work/kube-scheduler*.pem ma03:/etc/kubernetes/ssl/
#Create kube-scheduler.kubeconfig
kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server=https://192.168.196.200:6443 --kubeconfig=kube-scheduler.kubeconfig
kubectl config set-credentials system:kube-scheduler --client-certificate=kube-scheduler.pem --client-key=kube-scheduler-key.pem --embed-certs=true --kubeconfig=kube-scheduler.kubeconfig
kubectl config set-context system:kube-scheduler --cluster=kubernetes --user=system:kube-scheduler --kubeconfig=kube-scheduler.kubeconfig
kubectl config use-context system:kube-scheduler --kubeconfig=kube-scheduler.kubeconfig
#Distribute
cp /data/k8s-work/kube-scheduler.kubeconfig /etc/kubernetes/
scp /data/k8s-work/kube-scheduler.kubeconfig ma02:/etc/kubernetes/
scp /data/k8s-work/kube-scheduler.kubeconfig ma03:/etc/kubernetes/
#Create the config file
cat > /etc/kubernetes/kube-scheduler.conf << "EOF"
KUBE_SCHEDULER_OPTS=" \
--kubeconfig=/etc/kubernetes/kube-scheduler.kubeconfig \
--leader-elect=true \
--v=2"
EOF
#Create the systemd unit
cat > /usr/lib/systemd/system/kube-scheduler.service << "EOF"
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=-/etc/kubernetes/kube-scheduler.conf
ExecStart=/usr/local/bin/kube-scheduler $KUBE_SCHEDULER_OPTS
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
EOF
#Start the service
systemctl daemon-reload
systemctl enable --now kube-scheduler
systemctl status kube-scheduler
#Check
kubectl get cs
Step 5: Worker node deployment
#Deploy cri-dockerd
cd /soft
wget https://github.com/Mirantis/cri-dockerd/releases/download/v0.3.9/cri-dockerd-0.3.9-3.el7.x86_64.rpm
yum install -y cri-dockerd-0.3.9-3.el7.x86_64.rpm
vi /usr/lib/systemd/system/cri-docker.service
#Edit line 10 (ExecStart): the default pause image is too old, so pin it to 3.9 and use the Aliyun registry, which pulls faster inside China
ExecStart=/usr/bin/cri-dockerd --pod-infra-container-image=registry.aliyuncs.com/google_containers/pause:3.9 --container-runtime-endpoint fd://
#Start
systemctl enable --now cri-docker
systemctl status cri-docker
ll /run/cri-dockerd.sock
5.1 Deploy kubelet
#ma01, ma02, ma03
#Run on the nodes
#Create kubelet-bootstrap.kubeconfig
#Extract the token from token.csv
BOOTSTRAP_TOKEN=$(awk -F "," '{print $1}' /etc/kubernetes/token.csv)
#Define the cluster named kubernetes: its CA file, with --embed-certs to embed the certificate in the kubeconfig,
#the Kubernetes API server address,
#and, finally, the kubeconfig file to write.
kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server=https://192.168.196.200:6443 --kubeconfig=kubelet-bootstrap.kubeconfig
#Set the user credentials
kubectl config set-credentials kubelet-bootstrap --token=${BOOTSTRAP_TOKEN} --kubeconfig=kubelet-bootstrap.kubeconfig
#Set the context
kubectl config set-context default --cluster=kubernetes --user=kubelet-bootstrap --kubeconfig=kubelet-bootstrap.kubeconfig
#Switch to the default context
kubectl config use-context default --kubeconfig=kubelet-bootstrap.kubeconfig
#Create a cluster role binding
kubectl create clusterrolebinding cluster-system-anonymous --clusterrole=cluster-admin --user=kubelet-bootstrap
#Create another cluster role binding
kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap --kubeconfig=kubelet-bootstrap.kubeconfig
#Create the kubelet configuration file
#Run on the nodes: ma01, ma02, ma03
mkdir -p /etc/kubernetes/ssl
#Change the "address" field to each server's own IP
cat > /etc/kubernetes/kubelet.json << "EOF"
{
"kind": "KubeletConfiguration",
"apiVersion": "kubelet.config.k8s.io/v1beta1",
"authentication": {
"x509": {
"clientCAFile": "/etc/kubernetes/ssl/ca.pem"
},
"webhook": {
"enabled": true,
"cacheTTL": "2m0s"
},
"anonymous": {
"enabled": false
}
},
"authorization": {
"mode": "Webhook",
"webhook": {
"cacheAuthorizedTTL": "5m0s",
"cacheUnauthorizedTTL": "30s"
}
},
"address": "192.168.196.100",
"port": 10250,
"readOnlyPort": 10255,
"cgroupDriver": "systemd",
"hairpinMode": "promiscuous-bridge",
"serializeImagePulls": false,
"clusterDomain": "cluster.local.",
"clusterDNS": ["10.96.0.2"]
}
EOF
#Create the kubelet systemd unit
mkdir /var/lib/kubelet
cat > /usr/lib/systemd/system/kubelet.service << "EOF"
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/kubernetes/kubernetes
After=docker.service
Requires=docker.service
[Service]
WorkingDirectory=/var/lib/kubelet
ExecStart=/usr/local/bin/kubelet \
--bootstrap-kubeconfig=/etc/kubernetes/kubelet-bootstrap.kubeconfig \
--cert-dir=/etc/kubernetes/ssl \
--kubeconfig=/etc/kubernetes/kubelet.kubeconfig \
--config=/etc/kubernetes/kubelet.json \
--container-runtime-endpoint=unix:///run/cri-dockerd.sock \
--rotate-certificates \
--pod-infra-container-image=registry.aliyuncs.com/google_containers/pause:3.9 \
--v=2
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
EOF
#Distribute the files and start
for i in ma01 ma02 ma03;do scp /usr/lib/systemd/system/kubelet.service $i:/usr/lib/systemd/system/;done
for i in ma01 ma02 ma03;do scp kubelet-bootstrap.kubeconfig $i:/etc/kubernetes/;done
for i in ma01 ma02 ma03;do scp ca.pem $i:/etc/kubernetes/ssl;done
#Copy the worker binaries to each node
cd /soft/kubernetes/server/bin/
for i in ma01 ma02 ma03;do scp kubelet kube-proxy $i:/usr/local/bin/;done
#Start the service
systemctl daemon-reload
systemctl enable --now kubelet
systemctl status kubelet
#Verify
kubectl get nodes
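If a node does not show up, its bootstrap CSR may still be pending; a sketch for checking and approving it manually (the CSR name varies per node):
kubectl get csr
#Approve a pending request by name, e.g.:
kubectl certificate approve <csr-name>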
5.2 Deploy kube-proxy
#kube-proxy certificate request file
cat > kube-proxy-csr.json << "EOF"
{
"CN": "system:kube-proxy",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"ST": "Beijing",
"L": "Beijing",
"O": "kubemsb",
"OU": "CN"
}
]
}
EOF
#Generate the certificate
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
#List the files
ls -l kube-proxy*
#Files generated:
kube-proxy.csr
kube-proxy-csr.json
kube-proxy-key.pem
kube-proxy.pem
#Create the kubeconfig
kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server=https://192.168.196.200:6443 --kubeconfig=kube-proxy.kubeconfig
kubectl config set-credentials kube-proxy --client-certificate=kube-proxy.pem --client-key=kube-proxy-key.pem --embed-certs=true --kubeconfig=kube-proxy.kubeconfig
kubectl config set-context default --cluster=kubernetes --user=kube-proxy --kubeconfig=kube-proxy.kubeconfig
kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
#Create the service config file
#ma01, ma02, ma03
Change bindAddress, healthzBindAddress, and metricsBindAddress to each host's own IP
cat > /etc/kubernetes/kube-proxy.yaml << "EOF"
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 192.168.196.100
clientConnection:
  kubeconfig: /etc/kubernetes/kube-proxy.kubeconfig
clusterCIDR: 10.244.0.0/16
healthzBindAddress: 192.168.196.100:10256
kind: KubeProxyConfiguration
metricsBindAddress: 192.168.196.100:10249
mode: "ipvs"
EOF
#Create the systemd unit
#Create kube-proxy's working directory, matching the unit file below
mkdir -p /var/lib/kube-proxy
#Unit file
cat > /usr/lib/systemd/system/kube-proxy.service << "EOF"
[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target
[Service]
WorkingDirectory=/var/lib/kube-proxy
ExecStart=/usr/local/bin/kube-proxy \
--config=/etc/kubernetes/kube-proxy.yaml \
--v=2
Restart=on-failure
RestartSec=5
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
EOF
#Distribute the files
for i in ma01 ma02 ma03;do scp kube-proxy.kubeconfig $i:/etc/kubernetes/; done
for i in ma01 ma02 ma03;do scp kube-proxy*pem $i:/etc/kubernetes/ssl; done
#Start the service
systemctl daemon-reload
systemctl enable --now kube-proxy
systemctl status kube-proxy
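Since mode is "ipvs", you can confirm that kube-proxy programmed IPVS rules (the virtual server list should include the 10.96.0.1:443 service):
ipvsadm -Ln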
Step 6: Deploy the Calico network add-on
Website: https://docs.tigera.io/calico/latest/about
#Download with wget first and verify the files before applying
wget https://raw.githubusercontent.com/projectcalico/calico/v3.26.4/manifests/tigera-operator.yaml
wget https://raw.githubusercontent.com/projectcalico/calico/v3.26.4/manifests/custom-resources.yaml
#custom-resources.yaml defaults the pod network to 192.168.0.0/16; this cluster uses 10.244.0.0/16, so edit it before applying
cat custom-resources.yaml
change cidr: 192.168.0.0/16 to cidr: 10.244.0.0/16 (see the sed sketch below)
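A one-liner sketch for that edit (assumes the default file layout; verify with grep afterwards):
sed -i 's#cidr: 192.168.0.0/16#cidr: 10.244.0.0/16#' custom-resources.yaml
grep cidr custom-resources.yaml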
#Apply the operator manifest
kubectl create -f tigera-operator.yaml
#List the namespaces
kubectl get ns
#List the pods in the namespace
kubectl get pod -n tigera-operator
#Apply the manifest [pulling the required images may need a proxy; resource pressure during this step triggered several HAProxy VIP failovers]
kubectl create -f custom-resources.yaml
kubectl get ns
#Watch the pods in this namespace download, install, and come up
kubectl get pod -n calico-system
kubectl get nodes
Step 7: Deploy CoreDNS
cat > coredns.yaml << "EOF"
apiVersion: v1
kind: ServiceAccount
metadata:
  name: coredns
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:coredns
rules:
- apiGroups:
  - ""
  resources:
  - endpoints
  - services
  - pods
  - namespaces
  verbs:
  - list
  - watch
- apiGroups:
  - discovery.k8s.io
  resources:
  - endpointslices
  verbs:
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:coredns
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:coredns
subjects:
- kind: ServiceAccount
  name: coredns
  namespace: kube-system
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        errors
        health {
          lameduck 5s
        }
        ready
        kubernetes cluster.local in-addr.arpa ip6.arpa {
          fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        forward . /etc/resolv.conf {
          max_concurrent 1000
        }
        cache 30
        loop
        reload
        loadbalance
    }
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: coredns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/name: "CoreDNS"
spec:
  # replicas: not specified here:
  # 1. Default is 1.
  # 2. Will be tuned in real time if DNS horizontal auto-scaling is turned on.
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  selector:
    matchLabels:
      k8s-app: kube-dns
  template:
    metadata:
      labels:
        k8s-app: kube-dns
    spec:
      priorityClassName: system-cluster-critical
      serviceAccountName: coredns
      tolerations:
      - key: "CriticalAddonsOnly"
        operator: "Exists"
      nodeSelector:
        kubernetes.io/os: linux
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchExpressions:
                - key: k8s-app
                  operator: In
                  values: ["kube-dns"]
              topologyKey: kubernetes.io/hostname
      containers:
      - name: coredns
        image: coredns/coredns:1.10.1
        imagePullPolicy: IfNotPresent
        resources:
          limits:
            memory: 170Mi
          requests:
            cpu: 100m
            memory: 70Mi
        args: [ "-conf", "/etc/coredns/Corefile" ]
        volumeMounts:
        - name: config-volume
          mountPath: /etc/coredns
          readOnly: true
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
        - containerPort: 9153
          name: metrics
          protocol: TCP
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            add:
            - NET_BIND_SERVICE
            drop:
            - all
          readOnlyRootFilesystem: true
        livenessProbe:
          httpGet:
            path: /health
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        readinessProbe:
          httpGet:
            path: /ready
            port: 8181
            scheme: HTTP
      dnsPolicy: Default
      volumes:
      - name: config-volume
        configMap:
          name: coredns
          items:
          - key: Corefile
            path: Corefile
---
apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
  annotations:
    prometheus.io/port: "9153"
    prometheus.io/scrape: "true"
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    kubernetes.io/name: "CoreDNS"
spec:
  selector:
    k8s-app: kube-dns
  clusterIP: 10.96.0.2
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP
  - name: metrics
    port: 9153
    protocol: TCP
EOF
#Apply the manifest
kubectl apply -f coredns.yaml
#Check
kubectl get pods -o wide
kubectl get pod -n kube-system -o wide
#Verify DNS resolution
dig -t a www.baidu.com @10.96.0.2
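dig tests the resolver from the host; a sketch for testing resolution from inside the cluster (busybox:1.28 has a working nslookup; the pod is removed on exit):
kubectl run dnsutils -it --rm --restart=Never --image=busybox:1.28 -- nslookup kubernetes.default.svc.cluster.local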
#Deploy a test application
kubectl create ns my-nginx
kubectl create deploy my-nginx --image=nginx:1.23.0 -n my-nginx --dry-run=client -o yaml >> my-nginx.yaml
kubectl apply -f my-nginx.yaml
kubectl expose deployment my-nginx --port=80 --target-port=80 --type=NodePort -n my-nginx --dry-run=client -o yaml >> nginx-svc.yaml
kubectl apply -f nginx-svc.yaml
#Get the nginx service's NodePort from the output
kubectl get all -n my-nginx
#Access through the exposed NodePort (32111 in this run); nginx has no TLS certificate, so use plain http rather than https
http://192.168.196.200:32111
#Install the bash-completion package
yum install bash-completion -y
#Enable completion
source <(kubectl completion bash)
#If it does not take effect, also run:
source /usr/share/bash-completion/bash_completion
#List all API resources
kubectl api-resources
#Component status [cs is short for componentstatuses]
kubectl get cs
Step 8: Install metrics-server
#The legacy monitoring component was Heapster
#Newer versions use the metrics-server aggregator to collect cAdvisor data; cAdvisor is embedded in the kubelet
#metrics-server is required for cluster resource monitoring
#Monitoring flow: kubectl top -> apiserver -> metrics-server pod -> kubelet (cAdvisor)
#Install metrics-server
wget https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
#Edit components.yaml
The key addition is: - --kubelet-insecure-tls
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
spec:
  selector:
    matchLabels:
      k8s-app: metrics-server
  strategy:
    rollingUpdate:
      maxUnavailable: 0
  template:
    metadata:
      labels:
        k8s-app: metrics-server
    spec:
      containers:
      - args:
        - --cert-dir=/tmp
        - --secure-port=10250
        - --kubelet-insecure-tls
        - --kubelet-preferred-address-types=InternalIP,Hostname,InternalDNS,ExternalDNS,ExternalIP
        - --kubelet-use-node-status-port
        - --metric-resolution=15s
        image: registry.k8s.io/metrics-server/metrics-server:v0.7.2
        imagePullPolicy: IfNotPresent
#Apply
kubectl apply -f components.yaml
kubectl get deployment metrics-server -n kube-system
kubectl get --raw /apis/metrics.k8s.io/v1beta1/pods
kubectl logs -n kube-system -l k8s-app=metrics-server
#If the image pull failed due to network problems, delete the pod and apply again
kubectl delete pod metrics-server-75bf97fcc9-ch4dh -n kube-system
#List the pods in kube-system to get the pod name
kubectl get pods -n kube-system
#Inspect the pod's status
kubectl describe pod metrics-server-75bf97fcc9-g5r9v -n kube-system
kubectl logs -n kube-system -l k8s-app=metrics-server
#The lines below are kept commented out; ignore them
#git clone https://github.com/kubernetes-incubator/metrics-server
#cd metrics-server/
#cat metrics-server-deployment.yaml
#Kubernetes automatically recreates deleted pods, so to remove metrics-server delete the deployment instead
kubectl delete deployment metrics-server -n kube-system
#Verify
kubectl --help
View node resource consumption:
kubectl top node
View pod resource consumption:
kubectl top pod
#Other notes
In k8s, CPU (cores) is measured in millicores: 1000m = 1 core
kubectl top node ma01 --sort-by='cpu'
kubectl top pods --sort-by='memory'
ps -ef|grep kubelet
#View component logs with journalctl -u
journalctl -u kubelet
journalctl -u kube-apiserver
#Tail the newest 100 lines
kubectl logs -f metrics-server-65bc69d777-77zds -n kube-system --tail=100
#A negative --tail value shows the full log from the beginning
kubectl logs -f metrics-server-65bc69d777-77zds -n kube-system --tail=-100
#Show the k8s version, OS image, and kernel for each node
kubectl get nodes -o wide
Step 9: Install Helm
Website: https://helm.sh/
Helm's three key concepts: Chart, Repository, and Release
wget https://get.helm.sh/helm-v3.13.3-linux-amd64.tar.gz
tar zxvf helm-v3.13.3-linux-amd64.tar.gz
cp /soft/linux-amd64/helm /usr/local/bin/helm
scp /soft/linux-amd64/helm ma02:/usr/local/bin/
scp /soft/linux-amd64/helm ma03:/usr/local/bin/
Other commands
#Check the version
helm version
#Command completion
source <(helm completion bash)
or
echo "source <(helm completion bash)" >> ~/.bashrc
source ~/.bashrc
Step 10: Install the Dashboard
Website: https://github.com/kubernetes/dashboard
#Install the dashboard with helm
helm repo add kubernetes-dashboard https://kubernetes.github.io/dashboard/
helm upgrade --install kubernetes-dashboard kubernetes-dashboard/kubernetes-dashboard --create-namespace --namespace kubernetes-dashboard
#Check
kubectl get serviceAccount,svc,deploy,pod -n kubernetes-dashboard
#If a pod's image did not finish downloading, you can do the following
kubectl get pods -n kubernetes-dashboard
kubectl delete pod kubernetes-dashboard-kong-76dff7b666-hdmgd -n kubernetes-dashboard
#If the previous command did not trigger a fresh pull, run the helm upgrade again
helm upgrade --install kubernetes-dashboard kubernetes-dashboard/kubernetes-dashboard --create-namespace --namespace kubernetes-dashboard
#Access the dashboard
#List the services
kubectl get services -n kubernetes-dashboard
#Change the service type; there are several ways to expose it externally, NodePort being the simplest here
kubectl patch service kubernetes-dashboard-kong-proxy -n kubernetes-dashboard -p '{"spec":{"type":"NodePort"}}'
kubectl get services -n kubernetes-dashboard
#Create a ServiceAccount and Secret
cat<<EOF | kubectl apply -f -
apiVersion: v1
kind: ServiceAccount
metadata:
  name: dashboard-admin
  namespace: kube-system
---
apiVersion: v1
kind: Secret
type: kubernetes.io/service-account-token
metadata:
  name: dashboard-admin
  namespace: kube-system
  annotations:
    kubernetes.io/service-account.name: "dashboard-admin"
EOF
#Bind the cluster-admin role
kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin
#Retrieve the token
kubectl describe secrets dashboard-admin -n kube-system
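A sketch for extracting just the token string (handy for scripting; base64 -d decodes the secret data):
kubectl get secret dashboard-admin -n kube-system -o jsonpath='{.data.token}' | base64 -d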
#Access via the exposed NodePort; the web page prompts for the token
https://192.168.196.100:30788/#/workloads?namespace=default