K8S Service Publishing
I. Comparison of Service Publishing Methods
The main differences between traditional service publishing and Kubernetes-based publishing are:
1. Deployment complexity: Traditional publishing usually involves manually configuring and managing servers, network settings, load balancers, and so on; the process is relatively complex and error-prone. By contrast, Kubernetes relies on container orchestration and automated deployment tooling, which simplifies the process and makes creating, deploying, and managing services easier and more efficient.
2. Elastic scaling: Kubernetes can scale automatically, increasing or decreasing the number of containers according to actual demand to keep services stable and performant. Traditional publishing usually requires resizing servers manually, making rapid elastic scaling difficult.
3. Fault tolerance: Kubernetes has built-in fault tolerance; when a container fails, Kubernetes automatically removes it from the cluster and starts a new one, which helps keep services available and stable. Traditional publishing may not handle such failures well, leading to outages.
4. Resource utilization: Through containerization, Kubernetes enables resource sharing and dynamic allocation, improving utilization. Traditional publishing can waste server resources and thus lower utilization.
5. Maintainability: Kubernetes provides strong maintainability features such as service discovery, configuration management, and version control, which simplify maintenance and upgrades. Traditional publishing may require more manual maintenance, and upgrades can be more complicated.
6. Cost: Although Kubernetes has a learning curve and an initial setup cost, in the long run its automated deployment, elastic scaling, and resource optimization can save substantial operational cost. Traditional publishing may require more operations staff and resources.
II. Labels and Selectors
Label: a key=value tag attached to Kubernetes objects such as Pods and nodes to group them, distinguishing different groups of otherwise identical resources.
Selector: a label selector used to query exactly the objects that carry a given set of labels.
1. Custom Labels
Add a label to a node:
[root@k8s-master wp]# kubectl label nodes k8s-node01 abc=1
node/k8s-node01 labeled
[root@k8s-master wp]# kubectl get node -l abc=1
NAME         STATUS   ROLES    AGE   VERSION
k8s-node01   Ready    <none>   11d   v1.28.2
Specify in a Deployment (or another controller) that its Pods must be scheduled onto that node
(create an nginx Deployment that can only run on node01)
[root@k8s-master wp]# vim nginx-deploy.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      nodeSelector:
        abc: "1"
      containers:
      - name: nginx
        image: docker.io/library/nginx:latest
        imagePullPolicy: Never
        ports:
        - name: nginxport
          containerPort: 80
[root@k8s-master wp]# kubectl create -f nginx-deploy.yaml
deployment.apps/nginx created
[root@k8s-master wp]# kubectl get pod -owide
NAME                                  READY   STATUS    RESTARTS      AGE    IP              NODE         NOMINATED NODE   READINESS GATES
cluster-test-66bb44bd88-zdfdf         1/1     Running   4 (45m ago)   10d    172.16.85.214   k8s-node01   <none>           <none>
mariadb-deployment-55bc54f6f5-kbbzm   1/1     Running   0             107m   172.16.58.215   k8s-node02   <none>           <none>
nginx-6bb6dc9958-5b965                1/1     Running   0             55s    172.16.85.219   k8s-node01   <none>           <none>
nginx-6bb6dc9958-hx4wg                1/1     Running   0             55s    172.16.85.218   k8s-node01   <none>           <none>
nginx-6bb6dc9958-wlhj9                1/1     Running   0             55s    172.16.85.220   k8s-node01   <none>           <none>
# All nginx Pods were scheduled onto k8s-node01
2. Taints
# Check the master's taints; the master is a no-schedule node
[root@k8s-master wp]# kubectl describe nodes k8s-master | grep Taints
Taints:             node-role.kubernetes.io/control-plane:NoSchedule
# The taint can be removed by appending a trailing '-' (kubectl taint node k8s-master node-role.kubernetes.io/control-plane:NoSchedule-),
# but normally this should not be done: the master manages the worker nodes and should not carry workloads.
# The two attempts below fail: the first requests no taint change at all, and the second tries to re-add an existing taint without --overwrite.
[root@k8s-master wp]# kubectl taint node k8s-master node-role.kubernetes.io/control-plane
error: at least one taint update is required
[root@k8s-master wp]# kubectl taint node k8s-master node-role.kubernetes.io/control-plane:NoSchedule
error: node k8s-master already has node-role.kubernetes.io/control-plane taint(s) with same effect(s) and --overwrite is false
3. Set Labels on Each Node
# Give the master node a master role label
[root@k8s-master wp]# kubectl label nodes k8s-master role=master
node/k8s-master labeled
# Give the worker nodes a node role label
[root@k8s-master wp]# kubectl label nodes k8s-node01 k8s-node02 role=node
node/k8s-node01 labeled
node/k8s-node02 labeled
4. Selectors
# Show the labels on all nodes
[root@k8s-master wp]# kubectl get nodes --show-labels
NAME         STATUS   ROLES           AGE   VERSION   LABELS
k8s-master   Ready    control-plane   11d   v1.28.2   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-master,kubernetes.io/os=linux,node-role.kubernetes.io/control-plane=,node.kubernetes.io/exclude-from-external-load-balancers=,role=master
k8s-node01   Ready    <none>          11d   v1.28.2   abc=11,beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-node01,kubernetes.io/os=linux,role=node
k8s-node02   Ready    <none>          11d   v1.28.2   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-node02,kubernetes.io/os=linux,role=node
# Set-based match: select nodes whose role value is in the set (master, node)
[root@k8s-master wp]# kubectl get nodes -l 'role in (master, node)'
NAME         STATUS   ROLES           AGE   VERSION
k8s-master   Ready    control-plane   11d   v1.28.2
k8s-node01   Ready    <none>          11d   v1.28.2
k8s-node02   Ready    <none>          11d   v1.28.2
# Select nodes whose role is master or node, excluding nodes labeled abc=1
[root@k8s-master wp]# kubectl get nodes -l abc!=1,'role in (master, node)'
NAME         STATUS   ROLES           AGE   VERSION
k8s-master   Ready    control-plane   11d   v1.28.2
k8s-node02   Ready    <none>          11d   v1.28.2
# Select nodes that carry a label with key abc (any value)
[root@k8s-master wp]# kubectl get nodes -l abc
NAME         STATUS   ROLES    AGE   VERSION
k8s-node01   Ready    <none>   11d   v1.28.2
5. Modify a Label
Change the label abc=1 to abc=11 by specifying the key directly and rewriting it with the --overwrite flag:
[root@k8s-master wp]# kubectl label nodes k8s-node01 abc=11
error: 'abc' already has a value (1), and --overwrite is false
[root@k8s-master wp]# kubectl label nodes k8s-node01 abc=11 --overwrite
node/k8s-node01 labeled
[root@k8s-master wp]# kubectl get node --show-labels
NAME         STATUS   ROLES           AGE   VERSION   LABELS
k8s-master   Ready    control-plane   11d   v1.28.2   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-master,kubernetes.io/os=linux,node-role.kubernetes.io/control-plane=,node.kubernetes.io/exclude-from-external-load-balancers=
k8s-node01   Ready    <none>          11d   v1.28.2   abc=11,beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-node01,kubernetes.io/os=linux
k8s-node02   Ready    <none>          11d   v1.28.2   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-node02,kubernetes.io/os=linux
[root@k8s-master wp]# kubectl get pod -owide
NAME                                  READY   STATUS    RESTARTS      AGE     IP              NODE         NOMINATED NODE   READINESS GATES
cluster-test-66bb44bd88-zdfdf         1/1     Running   6 (29m ago)   10d     172.16.85.214   k8s-node01   <none>           <none>
mariadb-deployment-55bc54f6f5-kbbzm   1/1     Running   0             3h30m   172.16.58.215   k8s-node02   <none>           <none>
nginx-6bb6dc9958-5b965                1/1     Running   0             104m    172.16.85.219   k8s-node01   <none>           <none>
nginx-6bb6dc9958-hx4wg                1/1     Running   0             104m    172.16.85.218   k8s-node01   <none>           <none>
nginx-6bb6dc9958-wlhj9                1/1     Running   0             104m    172.16.85.220   k8s-node01   <none>           <none>
[root@k8s-master wp]# kubectl delete -f nginx-deploy.yaml
deployment.apps "nginx" deleted
[root@k8s-master wp]# kubectl create -f nginx-deploy.yaml
deployment.apps/nginx created
# With no node labeled abc=1 to be found, the new Pods stay Pending
[root@k8s-master wp]# kubectl get pod -owide
NAME                                  READY   STATUS    RESTARTS      AGE     IP              NODE         NOMINATED NODE   READINESS GATES
cluster-test-66bb44bd88-zdfdf         1/1     Running   6 (30m ago)   10d     172.16.85.214   k8s-node01   <none>           <none>
mariadb-deployment-55bc54f6f5-kbbzm   1/1     Running   0             3h31m   172.16.58.215   k8s-node02   <none>           <none>
nginx-6bb6dc9958-l4mrm                0/1     Pending   0             2s      <none>          <none>       <none>           <none>
nginx-6bb6dc9958-mw29p                0/1     Pending   0             2s      <none>          <none>       <none>           <none>
nginx-6bb6dc9958-v8flb                0/1     Pending   0             2s      <none>          <none>       <none>           <none>
6. Delete Labels
Delete labels from node01: first the role label, then abc=11
[root@k8s-master wp]# kubectl label nodes k8s-node01 role-
node/k8s-node01 unlabeled
[root@k8s-master wp]# kubectl get node --show-labels | grep k8s-node01
k8s-node01   Ready   <none>   11d   v1.28.2   abc=11,beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-node01,kubernetes.io/os=linux
[root@k8s-master wp]# kubectl label nodes k8s-node01 abc-
node/k8s-node01 unlabeled
[root@k8s-master wp]# kubectl get node --show-labels | grep k8s-node01
k8s-node01   Ready   <none>   11d   v1.28.2   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-node01,kubernetes.io/os=linux
Bulk-delete labels
# First set a label on several nodes
[root@k8s-master wp]# kubectl label nodes k8s-master k8s-node01 k8s-node02 type=haha
node/k8s-master labeled
node/k8s-node01 labeled
node/k8s-node02 labeled
# Remove the label with key "type" from every node that has it
[root@k8s-master wp]# kubectl label nodes -l type type-
node/k8s-master unlabeled
node/k8s-node01 unlabeled
node/k8s-node02 unlabeled
# Set the label on several nodes again
[root@k8s-master wp]# kubectl label nodes k8s-master k8s-node01 k8s-node02 type=haha
node/k8s-master labeled
node/k8s-node01 labeled
node/k8s-node02 labeled
# A for loop can remove the label from several (but not all) nodes
[root@k8s-master wp]# for i in 01 02;do kubectl label nodes k8s-node$i type-;done
node/k8s-node01 unlabeled
node/k8s-node02 unlabeled
[root@k8s-master wp]# kubectl get nodes --show-labels
NAME         STATUS   ROLES           AGE   VERSION   LABELS
k8s-master   Ready    control-plane   11d   v1.28.2   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-master,kubernetes.io/os=linux,node-role.kubernetes.io/control-plane=,node.kubernetes.io/exclude-from-external-load-balancers=,role=master,type=haha
k8s-node01   Ready    <none>          11d   v1.28.2   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-node01,kubernetes.io/os=linux
k8s-node02   Ready    <none>          11d   v1.28.2   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-node02,kubernetes.io/os=linux
III. Service
1. Services in K8S
A Service in Kubernetes (k8s) is an abstraction that defines a policy for accessing a logical group of Pods. It usually selects that group through a label selector, which solves the problem of Pod IP addresses changing over time.
2. Defining a Service
Note that a Service can map its listening port to any targetPort; if targetPort is omitted, it defaults to the same value as port. targetPort can also be a string that refers to a named port on the backend Pods, so that even if the Pods' port numbers change, access through the Service is unaffected.
A Kubernetes Service supports the TCP, UDP, and SCTP protocols; TCP is the default.
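As a minimal sketch of the named targetPort described above (the Pod, the label app: demo, and the Service name demo-svc are illustrative and not part of this lab), the Service references a port name declared in the Pod template:

# A hypothetical Pod that names its container port "web"
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod
  labels:
    app: demo
spec:
  containers:
  - name: app
    image: docker.io/library/nginx:alpine
    ports:
    - name: web              # named container port
      containerPort: 80
---
# A Service that targets the port by name instead of by number
apiVersion: v1
kind: Service
metadata:
  name: demo-svc
spec:
  selector:
    app: demo
  ports:
  - port: 80
    targetPort: web          # resolves to whatever containerPort "web" maps to
    protocol: TCP

If the Pod later changes its containerPort but keeps the name "web", the Service definition does not need to be touched.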
# Accessing server001:80 reaches port 80 on the Pods labeled app=server001.
[root@k8s-master wp]# vim server001.yaml
apiVersion: v1
kind: Service
metadata:
  name: server001
spec:
  selector:
    app: server001     # Pod label to match
  ports:
  - port: 80           # Service port
    protocol: TCP      # protocol
    targetPort: 80     # target container port
  type: ClusterIP
[root@k8s-master wp]# kubectl create -f server001.yaml
service/server001 created
[root@k8s-master wp]# kubectl get service
NAME         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1      <none>        443/TCP   11d
server001    ClusterIP   10.96.96.142   <none>        80/TCP    13s
[root@k8s-master wp]# curl 10.96.96.142:80
curl: (7) Failed connect to 10.96.96.142:80; Connection refused
3. Creating a Service
1) Using a Service to proxy a Pod
[root@k8s-master wp]# kubectl run serverpodtest --image docker.io/library/nginx:alpine --image-pull-policy Never
pod/serverpodtest created
[root@k8s-master wp]# kubectl label pod serverpodtest app=server001
pod/serverpodtest labeled
[root@k8s-master wp]# kubectl get pod --show-labels
NAME                                  READY   STATUS    RESTARTS      AGE     LABELS
cluster-test-66bb44bd88-zdfdf         1/1     Running   6 (43m ago)   10d     app=cluster-test,pod-template-hash=66bb44bd88
mariadb-deployment-55bc54f6f5-kbbzm   1/1     Running   0             3h45m   app=mariadb,pod-template-hash=55bc54f6f5
serverpodtest                         1/1     Running   0             114s    app=server001,run=serverpodtest
[root@k8s-master wp]# curl 10.96.96.142:80
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
2) Using a Service to proxy a Deployment
(a new Service is defined here)
deployment
[root@k8s-master wp]# vim deployment001.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test000deployment        # Deployment resource name
  labels:
    app: test000deployment       # labels
    run: abc
spec:
  replicas: 4
  selector:
    matchLabels:
      app: nginxtest             # Pod label to match
  template:
    metadata:
      labels:
        app: nginxtest
    spec:
      containers:
      - name: nginx
        image: docker.io/library/nginx:alpine
        imagePullPolicy: Never
        ports:
        - name: nginxport
          containerPort: 80
service
[root@k8s-master wp]# vim service001.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginxtest
spec:
  selector:
    app: nginxtest
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
  type: ClusterIP
Proxy test
[root@k8s-master wp]# kubectl create -f deployment001.yaml
[root@k8s-master wp]# kubectl create -f service001.yaml
[root@k8s-master wp]# kubectl get po
NAME                                  READY   STATUS    RESTARTS      AGE
cluster-test-66bb44bd88-zdfdf         1/1     Running   7 (57m ago)   10d
mariadb-deployment-55bc54f6f5-kbbzm   1/1     Running   0             4h58m
test000deployment-5496fcbccd-4t87g    1/1     Running   0             9m5s
test000deployment-5496fcbccd-bng7r    1/1     Running   0             9m5s
test000deployment-5496fcbccd-gmpxm    1/1     Running   0             9m5s
test000deployment-5496fcbccd-h2kms    1/1     Running   0             9m5s
[root@k8s-master wp]# kubectl get service
NAME         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1      <none>        443/TCP   11d
nginxtest    ClusterIP   10.96.99.183   <none>        80/TCP    2m37s
# A successful curl shows the Service is wired up to the Pods
# If a Pod is deleted now, the Deployment recreates it and the new Pod gets a different IP
# But thanks to the Service, access keeps working no matter how the backend Pod IPs change,
# as long as the Pod labels still match the selector
[root@k8s-master wp]# curl 10.96.99.183
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
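To verify that claim, a quick check could look like this (a sketch; the Pod name is just one of the replicas listed above, and no output is shown here):

# Delete one replica; the Deployment immediately creates a replacement with a new IP
kubectl delete pod test000deployment-5496fcbccd-4t87g
kubectl get pod -owide        # the replacement Pod shows a different IP
# The Service still answers, because it selects Pods by label, not by IP
curl 10.96.99.183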
4. Service Types
The main K8S Service types are:
ClusterIP: the default; used inside the cluster and reachable only from within it.
NodePort: opens a port on every node running kube-proxy that proxies to the backend Pods, so the Service can be reached from outside the cluster as NodeIP:NodePort.
LoadBalancer: exposes the Service through a cloud provider's load balancer; relatively costly.
ExternalName: returns the configured CNAME alias; no proxying of any kind is set up. Requires kube-dns 1.7 or later. A minimal sketch follows this list.
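The ExternalName sketch (the Service name ext-db and the external hostname db.example.com are hypothetical):

apiVersion: v1
kind: Service
metadata:
  name: ext-db
spec:
  type: ExternalName
  externalName: db.example.com   # DNS queries for ext-db return a CNAME to this name

In-cluster clients can then use ext-db as a stable name even if the external hostname changes later; only the Service needs to be updated.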
5. The NodePort Type
If a Service's type field is set to NodePort, Kubernetes automatically allocates a port from the configured range (30000-32767 by default); a nodePort can also be specified manually. Once the Service is created, every node in the cluster exposes that port, and the backend application can be reached via any node's IP plus that port.
The definition looks like this:
[root@k8s-master wp]# vim service001.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginxtest
spec:
  selector:
    app: nginxtest
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
  type: NodePort
Reload the YAML
[root@k8s-master wp]# kubectl delete -f service001.yaml
service "nginxtest" deleted
[root@k8s-master wp]# kubectl create -f service001.yaml
service/nginxtest created
# The Service's port 80 is mapped to port 30462 on the nodes
[root@k8s-master wp]# kubectl get svc
NAME         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.96.0.1      <none>        443/TCP        11d
nginxtest    NodePort    10.96.70.126   <none>        80:30462/TCP   6s
Access test from a host: http://10.0.0.66:30462
Change the randomly assigned port to a specific port (range: 30000-32767)
[root@k8s-master wp]# kubectl edit svc nginxtest
error: services "nginxtest" is invalid
service/nginxtest edited
# The first save was rejected as invalid (likely an out-of-range nodePort); a corrected edit then succeeded
Access test again:
All three nodes are reachable.
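Instead of editing the live object, the nodePort can also be pinned in the manifest itself (a sketch; 30080 is an arbitrary value inside the allowed range):

apiVersion: v1
kind: Service
metadata:
  name: nginxtest
spec:
  selector:
    app: nginxtest
  ports:
  - port: 80
    targetPort: 80
    nodePort: 30080   # must fall inside the 30000-32767 range
    protocol: TCP
  type: NodePort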
6. Using a Service to Proxy a Service Outside K8S
Use cases:
You want to reach an external middleware service in production by a fixed name rather than an IP address;
You want the Service to point at a service in another namespace or another cluster;
You are migrating workloads to Kubernetes, but part of the backend still runs outside the cluster.
External Service configuration:
[root@k8s-master wp]# vim test001ext.yaml
apiVersion: v1
kind: Service
metadata:
  name: test001service
spec:
  # No selector: the endpoints are supplied manually below
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
  type: ClusterIP            # ClusterIP is enough; access is only needed from inside the cluster
---
apiVersion: v1
kind: Endpoints
metadata:
  name: test001service       # must match the Service name so the two are associated
subsets:
- addresses:
  - ip: 10.0.0.99            # IP of an external host; install nginx on it for testing
  ports:
  - name: nginx
    port: 80
    protocol: TCP
[root@k8s-master wp]# kubectl create -f test001ext.yaml
service/test001service created
endpoints/test001service created
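A quick check could be run from inside the cluster (a sketch, assuming the external host is serving on 10.0.0.99:80 and that the test Pod image includes curl):

# Call the Service by its DNS name; the traffic is forwarded to the manual endpoint 10.0.0.99:80
kubectl exec -it cluster-test-66bb44bd88-zdfdf -- curl -s test001service.default.svc.cluster.local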
IV. Hands-On: Ingress with Nginx + WordPress Forum + MariaDB
1. Site Architecture
This deployment drops Docker entirely and runs everything on Kubernetes, using Deployments rather than bare Pods for better stability.
In the previous exercise Nginx acted as a reverse proxy; here it acts as the Ingress Controller. WordPress is scaled out to two replicas for higher availability, while MariaDB runs as a single instance to keep its data consistent.
2. Deploy MariaDB
1) Define the ConfigMap
# configmap
[root@k8s-master aa]# vim mariadb-configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: mariadb-configmap
data:
  USER: "wp"
  PASSWORD: "123"
  ROOT_PASSWORD: "123"
  DATABASE: "db"
2) Define the Deployment
# deployment
[root@k8s-master aa]# vim mariadb-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mariadb-deployment
  labels:
    app: mariadb-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mariadb-deployment
  template:
    metadata:
      labels:
        app: mariadb-deployment
    spec:
      containers:
      - name: mariadb
        image: docker.io/library/mariadb:latest
        imagePullPolicy: Never
        ports:
        - name: mariadbport
          containerPort: 3306
        envFrom:
        - prefix: "MARIADB_"
          configMapRef:
            name: mariadb-configmap
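The envFrom prefix above prepends MARIADB_ to every key in the ConfigMap, so the container sees the environment variables the official mariadb image expects (USER becomes MARIADB_USER, ROOT_PASSWORD becomes MARIADB_ROOT_PASSWORD, and so on). One quick way to confirm this, once the Deployment is running, is a sketch like:

# List the injected variables inside the running container (kubectl picks one Pod of the Deployment)
kubectl exec deploy/mariadb-deployment -- env | grep ^MARIADB_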
3) Define the Service
# service
[root@k8s-master aa]# vim mariadb-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: mariadb-service
spec:
  selector:
    app: mariadb-deployment   # select the MariaDB Deployment's Pod label
  ports:
  - port: 3306
    targetPort: 3306
    protocol: TCP
  type: NodePort
4) Start MariaDB
[root@k8s-master aa]# kubectl create -f mariadb-configmap.yaml
configmap/mariadb-configmap created
[root@k8s-master aa]# kubectl get cm
NAME                DATA   AGE
kube-root-ca.crt    1      11d
mariadb-configmap   4      13m
[root@k8s-master aa]# kubectl create -f mariadb-deployment.yaml
deployment.apps/mariadb-deployment created
[root@k8s-master aa]# kubectl get pod -owide
NAME                                  READY   STATUS    RESTARTS      AGE     IP              NODE         NOMINATED NODE   READINESS GATES
cluster-test-66bb44bd88-zdfdf         1/1     Running   9 (48m ago)   11d     172.16.85.214   k8s-node01   <none>           <none>
mariadb-deployment-5bf6d9f98c-lxlhm   1/1     Running   0             4m47s   172.16.58.221   k8s-node02   <none>           <none>
[root@k8s-master aa]# mysql -h172.16.58.221 -p123
MariaDB [(none)]>
[root@k8s-master aa]# kubectl create -f mariadb-service.yaml
service/mariadb-service created
[root@k8s-master aa]# kubectl get svc
NAME              TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
kubernetes        ClusterIP   10.96.0.1       <none>        443/TCP          11d
mariadb-service   NodePort    10.96.247.148   <none>        3306:30318/TCP   24s
Test with the test Pod (resolve the Service's DNS name)
[root@k8s-master aa]# kubectl get pod
NAME                            READY   STATUS    RESTARTS        AGE
cluster-test-66bb44bd88-zdfdf   1/1     Running   10 (105s ago)   11d
[root@k8s-master aa]# kubectl exec -it pods/cluster-test-66bb44bd88-zdfdf -- bash
(08:44 cluster-test-66bb44bd88-zdfdf:/) nslookup mariadb-service
Server:         10.96.0.10
Address:        10.96.0.10#53

Name:   mariadb-service.default.svc.cluster.local
Address: 10.96.247.148
3. Deploy WordPress
1) Define the ConfigMap
[root@k8s-master aa]# vim wordpress-configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: wordpress-configmap
data:
  NAME: "db"
  USER: "wp"
  PASSWORD: "123"
  HOST: "mariadb-service"   # use the MariaDB Service name here; no need to hard-code an IP
2) Define the Deployment
[root@k8s-master aa]# vim wordpress-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wordpress-deployment
  labels:
    app: wordpress-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: wordpress-deployment
  template:
    metadata:
      labels:
        app: wordpress-deployment
    spec:
      containers:
      - name: wp
        image: docker.io/library/wordpress:latest
        imagePullPolicy: Never
        ports:
        - name: wordpressprot
          containerPort: 80
        envFrom:
        - prefix: "WORDPRESS_DB_"
          configMapRef:
            name: wordpress-configmap
3) Define the Service
[root@k8s-master aa]# vim wordpress-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: wordpress-service
spec:
  selector:
    app: wordpress-deployment
  ports:
  - name: http
    port: 80
    targetPort: 80
    nodePort: 32000
    protocol: TCP
  type: NodePort
4) Start WordPress
[root@k8s-master aa]# kubectl create -f wordpress-configmap.yaml
configmap/wordpress-configmap created
[root@k8s-master aa]# kubectl get cm
NAME                  DATA   AGE
kube-root-ca.crt      1      11d
mariadb-configmap     4      84m
wordpress-configmap   4      1s
[root@k8s-master aa]# kubectl create -f wordpress-deployment.yaml
deployment.apps/wordpress-deployment created
[root@k8s-master aa]# kubectl get deployments.apps
NAME                   READY   UP-TO-DATE   AVAILABLE   AGE
cluster-test           1/1     1            1           11d
mariadb-deployment     1/1     1            1           74m
wordpress-deployment   2/2     2            2           2s
[root@k8s-master aa]# kubectl create -f wordpress-service.yaml
service/wordpress-service created
[root@k8s-master aa]# kubectl get svc
NAME                TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
kubernetes          ClusterIP   10.96.0.1       <none>        443/TCP          12d
mariadb-service     NodePort    10.96.247.148   <none>        3306:30318/TCP   62m
wordpress-service   NodePort    10.96.67.149    <none>        80:32000/TCP     2s
The site can now be accessed for testing; at this point there is no Ingress yet.
4. Access Test
Visit 10.0.0.66:32000
V. Ingress
1. Ingress Concepts
Ingress is a Kubernetes resource that routes external traffic to services inside the cluster. It acts as a gateway, directing traffic to different services according to rules, and thereby provides load balancing and routing management.
Ingress lets you expose multiple services to the outside world and control traffic flexibly. It is normally used together with an ingress controller (such as the NGINX Ingress Controller or HAProxy), which performs the actual traffic management and request forwarding.
A Service is layer-4 load balancing (IP + port) driven by kube-proxy, whereas an Ingress is an API object for layer-7 load balancing (hostname, URI, request headers, certificates, and so on).
2. Ingress Controller
A Service by itself has no serving capability; it is just a set of iptables rules, and the component that actually configures and applies those rules is kube-proxy on each node. Without kube-proxy, even a perfectly defined Service does nothing.
Likewise, an Ingress is only a collection of HTTP routing rules, essentially a static description. To actually enforce those rules in the cluster you need something else: the Ingress Controller. It plays the same role for Ingress that kube-proxy plays for Services, reading and applying Ingress rules and handling and scheduling the traffic.
Nginx is one of the most stable and highest-performing proxy servers, which is why it has become the most widely used Ingress Controller in Kubernetes.
3. Ingress Class
As Ingress saw heavy use in practice, many users found that the Ingress + Ingress Controller pairing causes problems, for example:
A team needs to bring in a different Ingress Controller for some reason, but without a class mechanism there is no clean way to run several controllers side by side.
There are too many Ingress rules; handing them all to a single Ingress Controller overloads it.
Multiple Ingress objects have no good way of being logically grouped, so managing and maintaining them is costly.
Different tenants in the cluster have very different, even conflicting, Ingress requirements that cannot be served by the same Ingress Controller.
Kubernetes therefore introduced the Ingress Class concept: it sits between Ingress and Ingress Controller as a coordinator between the traffic rules and the controller, removing the tight coupling between the two.
Kubernetes users can then manage Ingress Classes, using them to define logical business groupings and reduce the complexity of Ingress rules.
For example, Class A could handle blog traffic, Class B short-video traffic, and Class C shopping traffic; a sketch follows.
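A minimal sketch of how that might look (the class name blog, the hostname, and the backend Service are illustrative; the controller string must match whatever the installed controller announces, k8s.io/ingress-nginx being the value used by ingress-nginx):

apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: blog
spec:
  controller: k8s.io/ingress-nginx   # the controller implementation this class delegates to
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: blog-ingress
spec:
  ingressClassName: blog             # bind this Ingress to the "blog" class
  rules:
  - host: blog.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: blog-service
            port:
              number: 80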
4. Deploy the Nginx Ingress Controller
1) Install the Ingress Controller
[root@k8s-master aa]# kubectl create -f ingress.yaml
[root@k8s-master aa]# kubectl get pod -n ingress-nginx
NAME                                        READY   STATUS      RESTARTS   AGE
ingress-nginx-admission-create-mp7xv        0/1     Completed   0          21m
ingress-nginx-admission-patch-922p4         0/1     Completed   0          21m
ingress-nginx-controller-674f66cf96-6wljn   1/1     Running     0          21m
[root@k8s-master aa]# kubectl describe pod -n ingress-nginx ingress-nginx-controller-674f66cf96-6wljn
Events:
  Type     Reason       Age                From                      Message
  ----     ------       ----               ----                      -------
  Normal   Scheduled    25m                default-scheduler         Successfully assigned ingress-nginx/ingress-nginx-controller-674f66cf96-6wljn to k8s-node01
  Warning  FailedMount  23m (x8 over 25m)  kubelet                   MountVolume.SetUp failed for volume "webhook-cert" : secret "ingress-nginx-admission" not found
  Normal   Pulling      22m                kubelet                   Pulling image "registry.cn-beijing.aliyuncs.com/dotbalo/ingress-nginx-controller:v1.7.1"
  Normal   Pulled       20m                kubelet                   Successfully pulled image "registry.cn-beijing.aliyuncs.com/dotbalo/ingress-nginx-controller:v1.7.1" in 2m27.43s (2m27.43s including waiting)
  Normal   Created      20m                kubelet                   Created container controller
  Normal   Started      20m                kubelet                   Started container controller
  Normal   RELOAD       20m                nginx-ingress-controller  NGINX reload triggered due to a change in configuration
[root@k8s-master aa]# kubectl get svc -A | grep ingress-nginx
NAMESPACE       NAME                                 TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
ingress-nginx   ingress-nginx-controller             NodePort    10.96.144.127   <none>        80:30886/TCP,443:31360/TCP   27m
ingress-nginx   ingress-nginx-controller-admission   ClusterIP   10.96.126.23    <none>        443/TCP                      27m
# Note the ingress-nginx-controller port mapping: 80 => 30886
2) Deploy the Ingress
[root@k8s-master aa]# vim ingress-001.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: wp-ingress                         # Ingress name
  namespace: default
spec:
  ingressClassName: nginx                  # which controller (ingress class) handles this Ingress
  rules:                                   # routing rules
  - host: wp-web.com                       # hostname
    http:
      paths:
      - path: /
        pathType: ImplementationSpecific   # path match type
        backend:
          service:
            name: wordpress-service        # Service name; must point at the WordPress Service
            port:
              number: 80                   # Service port
[root@k8s-master aa]# kubectl create -f ingress-001.yaml
ingress.networking.k8s.io/wp-ingress created
[root@k8s-master ~]# kubectl get ingress
NAME         CLASS   HOSTS        ADDRESS     PORTS   AGE
wp-ingress   nginx   wp-web.com   10.0.0.77   80      9h
[root@k8s-master aa]# kubectl get pod
NAME                                   READY   STATUS    RESTARTS       AGE
cluster-test-66bb44bd88-zdfdf          1/1     Running   15 (37m ago)   11d
mariadb-deployment-5bf6d9f98c-lxlhm    1/1     Running   1 (97m ago)    18h
wordpress-deployment-d6cd85444-74848   1/1     Running   1 (97m ago)    16h
wordpress-deployment-d6cd85444-hnxxh   1/1     Running   1 (97m ago)    16h
[root@k8s-master aa]# kubectl exec -it cluster-test-66bb44bd88-zdfdf -- bash
(02:39 cluster-test-66bb44bd88-zdfdf:/) nslookup wordpress-service
Server:         10.96.0.10
Address:        10.96.0.10#53

Name:   wordpress-service.default.svc.cluster.local
Address: 10.96.67.149
[root@k8s-master ~]# kubectl get pod -n ingress-nginx
NAME                                        READY   STATUS      RESTARTS     AGE
ingress-nginx-admission-create-mp7xv        0/1     Completed   0            9h
ingress-nginx-admission-patch-922p4         0/1     Completed   0            9h
ingress-nginx-controller-674f66cf96-6wljn   1/1     Running     1 (8h ago)   9h
# Next, update the local hostname resolution (hosts file)
[root@k8s-master aa]# kubectl get service -n ingress-nginx NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE ingress-nginx-controller NodePort 10.96.144.127 <none> 80:30886/TCP,443:31360/TCP 57m ingress-nginx-controller-admission ClusterIP 10.96.126.23 <none> 443/TCP 57m
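To test from the master host, one option (a sketch; 10.0.0.77 is the Ingress address shown above and 30886 is the controller's NodePort for port 80) is to point wp-web.com at a cluster node and go through the controller's NodePort:

# Point the test hostname at a cluster node
echo "10.0.0.77 wp-web.com" >> /etc/hosts
# The request enters via the ingress-nginx NodePort; the Host header selects the wp-ingress rule
curl http://wp-web.com:30886/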
Open issue: the database contents are lost after a restart.
Solution: persistent storage.
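A minimal sketch of what that could look like (assuming a default StorageClass can provision the volume; the names are illustrative): create a PersistentVolumeClaim and mount it at MariaDB's data directory.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mariadb-pvc
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 5Gi
---
# Then, in the MariaDB Deployment's Pod template, mount the claim:
#   spec:
#     containers:
#     - name: mariadb
#       volumeMounts:
#       - name: data
#         mountPath: /var/lib/mysql     # MariaDB data directory
#     volumes:
#     - name: data
#       persistentVolumeClaim:
#         claimName: mariadb-pvc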