Kubernetes Service Explained (Part 2)
Service Types
The Service resource manifest file:
---
kind: Service                  # resource type
apiVersion: v1                 # API version
metadata:                      # metadata
  name: service                # resource name
  namespace: dev               # namespace
spec:                          # spec
  selector:                    # label selector: which Pods this Service proxies
    app: nginx
  type: ClusterIP              # Service type: how the Service is accessed
  clusterIP: None              # virtual service IP; None creates a headless Service
  sessionAffinity: ClientIP    # session affinity: ClientIP or None
  ports:                       # port info
  - protocol: TCP
    port: 3017                 # Service port
    targetPort: 5003           # Pod port
ClusterIP:
- The default type. Kubernetes assigns the Service a virtual IP address (the ClusterIP), which is reachable only from inside the cluster.
- Suitable for service discovery and load balancing inside the cluster when the service does not need to be exposed externally.

NodePort:
- Opens a static port (the NodePort) on every node of the cluster; external clients can reach the Service through any node's IP plus that port (a manifest sketch appears after this list).
- Suitable when the service must be reachable from outside the cluster but, unlike LoadBalancer, without requiring cloud-provider support.

LoadBalancer:
- Similar to NodePort, but uses a cloud provider's load balancer to distribute traffic.
- Suitable for scenarios that need high availability and scalability, and recommended when external traffic is heavy.
- Note: requires cloud-provider support and may incur additional cost.

ExternalName:
- Does not allocate a ClusterIP; instead it maps the Service to an external DNS name.
- Suitable for mapping a service outside the cluster (such as a database) into Kubernetes, so that in-cluster workloads can call it as if it were an in-cluster service.
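For reference, minimal manifests for the NodePort and ExternalName types might look like the sketches below. The names, ports, and the external domain are made up for illustration and are not part of the walkthrough that follows:
---
apiVersion: v1
kind: Service
metadata:
  name: svc-nodeport             # hypothetical name, for illustration only
  namespace: dev
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
  - protocol: TCP
    port: 80                     # Service port inside the cluster
    targetPort: 80               # Pod port
    nodePort: 30080              # static port opened on every node (default range 30000-32767)
---
apiVersion: v1
kind: Service
metadata:
  name: svc-externalname         # hypothetical name, for illustration only
  namespace: dev
spec:
  type: ExternalName
  externalName: db.example.com   # external DNS name the Service resolves to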
Using a Service
Environment preparation
First, install ipset and ipvsadm and load the IPVS kernel modules (kube-proxy's ipvs mode depends on them):
yum -y install ipset ipvsadm
cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack
reboot
[root@k8s-master ~]# cat /etc/sysconfig/modules/ipvs.modules
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack
[root@k8s-master ~]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
[root@k8s-master ~]# echo '{"registry-mirrors": ["https://docker.1ms.run"]}' | sudo tee /etc/docker/daemon.json > /dev/null
[root@k8s-master ~]# systemctl daemon-reload
[root@k8s-master ~]# systemctl restart docker
[root@k8s-master ~]#
[root@k8s-master ~]#
[root@k8s-master ~]# kubectl edit configmap kube-proxy -n kube-system
# change mode to "ipvs"
configmap/kube-proxy edited
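For reference, the edit changes the mode field inside the config.conf key of the kube-proxy ConfigMap; a minimal excerpt (everything else left as generated) looks roughly like this:
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
# ... other generated fields left unchanged ...
mode: "ipvs"    # an empty string ("") means the default iptables mode; set it to "ipvs" here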
[root@k8s-master ~]# kubectl get pod -n kube-system
NAME READY STATUS RESTARTS AGE
calico-kube-controllers-697d846cf4-79hpj 0/1 Error 0 27d
calico-node-58ss2 0/1 Running 1 27d
calico-node-gc547 0/1 Completed 0 27d
calico-node-hdhxf 0/1 Running 1 27d
coredns-6f6b8cc4f6-5nbb6 0/1 Completed 0 27d
coredns-6f6b8cc4f6-q9rhc 0/1 Completed 0 27d
etcd-k8s-master 1/1 Running 1 27d
kube-apiserver-k8s-master 1/1 Running 1 27d
kube-controller-manager-k8s-master 1/1 Running 1 27d
kube-proxy-7hp6l 1/1 Running 1 27d
kube-proxy-ddhnb 1/1 Running 1 27d
kube-proxy-dwcgd 1/1 Running 1 27d
kube-scheduler-k8s-master 1/1 Running 1 27d
[root@k8s-master ~]# kubectl delete pod -l k8s-app=kube-proxy -n kube-system
pod "kube-proxy-7hp6l" deleted
pod "kube-proxy-ddhnb" deleted
pod "kube-proxy-dwcgd" deleted
[root@k8s-master ~]# kubectl get pod -n kube-system -w
NAME READY STATUS RESTARTS AGE
calico-kube-controllers-697d846cf4-79hpj 1/1 Running 1 27d
calico-node-58ss2 1/1 Running 1 27d
calico-node-gc547 1/1 Running 1 27d
calico-node-hdhxf 1/1 Running 1 27d
coredns-6f6b8cc4f6-5nbb6 1/1 Running 1 27d
coredns-6f6b8cc4f6-q9rhc 1/1 Running 1 27d
etcd-k8s-master 1/1 Running 1 27d
kube-apiserver-k8s-master 1/1 Running 1 27d
kube-controller-manager-k8s-master 1/1 Running 1 27d
kube-proxy-2dlzz 1/1 Running 0 3s
kube-proxy-8pplm 1/1 Running 0 4s
kube-proxy-9sbf5 1/1 Running 0 5s
kube-scheduler-k8s-master 1/1 Running 1 27d
^C[root@k8s-master ~]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 10.96.0.1:443 rr
-> 192.168.58.231:6443 Masq 1 1 0
TCP 10.96.0.10:53 rr
-> 10.244.36.69:53 Masq 1 0 0
-> 10.244.36.70:53 Masq 1 0 0
TCP 10.96.0.10:9153 rr
-> 10.244.36.69:9153 Masq 1 0 0
-> 10.244.36.70:9153 Masq 1 0 0
UDP 10.96.0.10:53 rr
-> 10.244.36.69:53 Masq 1 0 0
-> 10.244.36.70:53 Masq 1 0 0
Create three Pods
[root@k8s-master ~]# cat deployment.yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pc-deployment
  namespace: test
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx-pod
  template:
    metadata:
      labels:
        app: nginx-pod
    spec:
      containers:
      - name: nginx
        image: nginx:1.17.1
        ports:
        - containerPort: 80
[root@k8s-master ~]# kubectl get pods -n test -w
NAME READY STATUS RESTARTS AGE
pc-deployment-5ffc5bf56c-4w9ps 1/1 Running 0 4m7s
pc-deployment-5ffc5bf56c-d765t 1/1 Running 1 4m7s
pc-deployment-5ffc5bf56c-h969g 1/1 Running 1 4m7s
^C[root@k8s-master ~]# kubectl get pod -n test --show-labels
NAME READY STATUS RESTARTS AGE LABELS
pc-deployment-7b7c9c7cfd-4m68g 1/1 Running 0 101s app=nginx-pod,pod-template-hash=7b7c9c7cfd
pc-deployment-7b7c9c7cfd-ctdtf 1/1 Running 0 101s app=nginx-pod,pod-template-hash=7b7c9c7cfd
pc-deployment-7b7c9c7cfd-gjn74 1/1 Running 0 102s app=nginx-pod,pod-template-hash=7b7c9c7cfd
# To make the later tests easier to follow, overwrite index.html in each of the three nginx Pods so that each returns its own Pod IP plus a distinct tag (web-01/02/03)
[root@k8s-master ~]# kubectl exec -it pc-deployment-7b7c9c7cfd-4m68g -n test /bin/bash
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
root@pc-deployment-7b7c9c7cfd-4m68g:/# echo "`hostname -I` web-01" > /usr/share/nginx/html/index.html
root@pc-deployment-7b7c9c7cfd-4m68g:/# exit
exit
root@pc-deployment-7b7c9c7cfd-ctdtf:/# cat /usr/share/nginx/html/index.html
10.244.169.132 web-01
root@pc-deployment-7b7c9c7cfd-ctdtf:/# echo "`hostname -I` web-02" > /usr/share/nginx/html/index.html
root@pc-deployment-7b7c9c7cfd-ctdtf:/# exit
exit
[root@k8s-master ~]# curl 10.244.169.132
10.244.169.132 web-02
[root@k8s-master ~]# kubectl exec -it pc-deployment-7b7c9c7cfd-gjn74 -n test /bin/bash
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
root@pc-deployment-7b7c9c7cfd-gjn74:/# echo "`hostname -I` web-03" > /usr/share/nginx/html/index.html
root@pc-deployment-7b7c9c7cfd-gjn74:/# exit
exit
command terminated with exit code 127
[root@k8s-master ~]# curl 10.244.36.80
10.244.36.80 web-03
ClusterIP-type Service
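The service-clusterip.yaml edited below is not shown in the capture; a minimal manifest consistent with the output that follows (type ClusterIP, selector app=nginx-pod, port 80 forwarding to targetPort 80) would look roughly like this:
---
apiVersion: v1
kind: Service
metadata:
  name: svc-clusterip
  namespace: test
spec:
  type: ClusterIP
  selector:
    app: nginx-pod        # must match the labels on the Deployment's Pods
  ports:
  - protocol: TCP
    port: 80              # Service port
    targetPort: 80        # Pod port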
[root@k8s-master ~]# vim service-clusterip.yaml
[root@k8s-master ~]# kubectl apply -f service-clusterip.yaml
service/svc-clusterip created
[root@k8s-master ~]# kubectl get svc,pod -n test
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/svc-clusterip ClusterIP 10.96.190.198 <none> 80/TCP 9s
NAME READY STATUS RESTARTS AGE
pod/pc-deployment-7b7c9c7cfd-4m68g 1/1 Running 0 26m
pod/pc-deployment-7b7c9c7cfd-ctdtf 1/1 Running 0 26m
pod/pc-deployment-7b7c9c7cfd-gjn74 1/1 Running 0 26m
[root@k8s-master ~]# kubectl get svc,pod -n test -o wide
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
service/svc-clusterip ClusterIP 10.96.190.198 <none> 80/TCP 49s app=nginx-pod
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod/pc-deployment-7b7c9c7cfd-4m68g 1/1 Running 0 27m 10.244.36.81 k8s-node1 <none> <none>
pod/pc-deployment-7b7c9c7cfd-ctdtf 1/1 Running 0 27m 10.244.169.132 k8s-node2 <none> <none>
pod/pc-deployment-7b7c9c7cfd-gjn74 1/1 Running 0 27m 10.244.36.80 k8s-node1 <none> <none>
[root@k8s-master ~]# kubectl describe svc svc-clusterip -n test
Name: svc-clusterip
Namespace: test
Labels: <none>
Annotations: <none>
Selector: app=nginx-pod
Type: ClusterIP
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.96.190.198
IPs: 10.96.190.198
Port: <unset> 80/TCP
TargetPort: 80/TCP
Endpoints: 10.244.169.132:80,10.244.36.80:80,10.244.36.81:80
Session Affinity: None
Events: <none>
[root@k8s-master ~]# for ((i=1;i<=6;i++)) do curl 10.96.190.198; done
10.244.36.81 web-01
10.244.36.80 web-03
10.244.169.132 web-02
10.244.36.81 web-01
10.244.36.80 web-03
10.244.169.132 web-02
[root@k8s-master ~]# ipvsadm -Ln | grep -A3 10.96.190.198:80
TCP 10.96.190.198:80 rr
-> 10.244.36.80:80 Masq 1 0 2
-> 10.244.36.81:80 Masq 1 0 2
-> 10.244.169.132:80 Masq 1 0 2
[root@k8s-master ~]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 10.96.0.1:443 rr
-> 192.168.58.231:6443 Masq 1 4 0
TCP 10.96.0.10:53 rr
-> 10.244.36.77:53 Masq 1 0 0
-> 10.244.36.78:53 Masq 1 0 0
TCP 10.96.0.10:9153 rr
-> 10.244.36.77:9153 Masq 1 0 0
-> 10.244.36.78:9153 Masq 1 0 0
TCP 10.96.190.198:80 rr
-> 10.244.36.80:80 Masq 1 0 2
-> 10.244.36.81:80 Masq 1 0 2
-> 10.244.169.132:80 Masq 1 0 2
UDP 10.96.0.10:53 rr
-> 10.244.36.77:53 Masq 1 0 0
-> 10.244.36.78:53 Masq 1 0 0
Endpoints of a Service
- Endpoints is a Kubernetes resource object, stored in etcd, that records the access addresses of all the Pods backing a Service; it is generated from the selector in the Service manifest.
- A Service fronts a group of Pods, and those Pods are exposed through Endpoints, the set of endpoints that actually serve the traffic. In other words, the link between a Service and its Pods is maintained through Endpoints. (A sketch of a generated Endpoints object follows.)
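For illustration, the Endpoints object the controller generates for svc-clusterip would look roughly like this. The addresses are taken from the Pod IPs shown below; this is a sketch, not captured output:
apiVersion: v1
kind: Endpoints
metadata:
  name: svc-clusterip     # same name as the Service it belongs to
  namespace: test
subsets:
- addresses:              # one entry per ready Pod matched by the Service selector
  - ip: 10.244.36.80
  - ip: 10.244.36.81
  - ip: 10.244.169.132
  ports:
  - port: 80
    protocol: TCP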
[root@k8s-master ~]# kubectl get pod -n test -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pc-deployment-7b7c9c7cfd-4m68g 1/1 Running 0 87m 10.244.36.81 k8s-node1 <none> <none>
pc-deployment-7b7c9c7cfd-ctdtf 1/1 Running 0 87m 10.244.169.132 k8s-node2 <none> <none>
pc-deployment-7b7c9c7cfd-gjn74 1/1 Running 0 87m 10.244.36.80 k8s-node1 <none> <none>
[root@k8s-master ~]# kubectl get endpoints -n test
NAME ENDPOINTS AGE
svc-clusterip 10.244.169.132:80,10.244.36.80:80,10.244.36.81:80 62m
[root@k8s-master ~]# for ((i=1;i<=6;i++)) do curl 10.96.190.198; done
10.244.36.81 web-01
10.244.36.80 web-03
10.244.169.132 web-02
10.244.36.81 web-01
10.244.36.80 web-03
10.244.169.132 web-02
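Next, to demonstrate session affinity, the Service is deleted and recreated with sessionAffinity: ClientIP. The edited manifest is not shown in the capture; the change amounts to adding a single field, roughly:
---
apiVersion: v1
kind: Service
metadata:
  name: svc-clusterip
  namespace: test
spec:
  type: ClusterIP
  sessionAffinity: ClientIP   # route requests from the same client IP to the same Pod
  selector:
    app: nginx-pod
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80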
[root@k8s-master ~]# kubectl delete -f service-clusterip.yaml
service "svc-clusterip" deleted
[root@k8s-master ~]# vim service-clusterip.yaml
[root@k8s-master ~]# kubectl apply -f service-clusterip.yaml
service/svc-clusterip created
[root@k8s-master ~]# kubectl get svc svc-clusterip -n test
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
svc-clusterip ClusterIP 10.96.237.115 <none> 80/TCP 20s
[root@k8s-master ~]# kubectl get svc svc-clusterip -n test -o wide
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
svc-clusterip ClusterIP 10.96.237.115 <none> 80/TCP 26s app=nginx-pod
[root@k8s-master ~]# kubectl describe svc svc-clusterip -n test
Name: svc-clusterip
Namespace: test
Labels: <none>
Annotations: <none>
Selector: app=nginx-pod
Type: ClusterIP
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.96.237.115
IPs: 10.96.237.115
Port: <unset> 80/TCP
TargetPort: 80/TCP
Endpoints: 10.244.169.132:80,10.244.36.80:80,10.244.36.81:80
Session Affinity: ClientIP
Events: <none>
[root@k8s-master ~]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 10.96.0.1:443 rr
-> 192.168.58.231:6443 Masq 1 4 0
TCP 10.96.0.10:53 rr
-> 10.244.36.77:53 Masq 1 0 0
-> 10.244.36.78:53 Masq 1 0 0
TCP 10.96.0.10:9153 rr
-> 10.244.36.77:9153 Masq 1 0 0
-> 10.244.36.78:9153 Masq 1 0 0
TCP 10.96.237.115:80 rr persistent 10800
-> 10.244.36.80:80 Masq 1 0 0
-> 10.244.36.81:80 Masq 1 0 0
-> 10.244.169.132:80 Masq 1 0 0
UDP 10.96.0.10:53 rr
-> 10.244.36.77:53 Masq 1 0 0
-> 10.244.36.78:53 Masq 1 0 0
[root@k8s-master ~]# for ((i=1;i<=6;i++)) do curl 10.96.237.115; done
10.244.36.81 web-01
10.244.36.81 web-01
10.244.36.81 web-01
10.244.36.81 web-01
10.244.36.81 web-01
10.244.36.81 web-01
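With sessionAffinity: ClientIP in place, the IPVS virtual server is created with "persistent 10800", and every request from the same client now lands on the same backend (web-01 above). The 10800 is the affinity timeout in seconds (3 hours, the Kubernetes default); it can be tuned through sessionAffinityConfig, for example:
spec:
  sessionAffinity: ClientIP
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10800   # the default; lower it to let the affinity expire sooner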