Building Prometheus Monitoring for a Kubernetes Environment
In traditional infrastructures, popular monitoring tools include Zabbix and Nagios. These tools are not very friendly to cloud platforms such as Kubernetes, especially once a cluster runs thousands of containers. This chapter covers the next-generation cloud-native monitoring platform: Prometheus.
I. Introduction to Prometheus on Kubernetes
1. Environment overview
node-exporter + Prometheus + Grafana is a very popular Kubernetes monitoring stack. The components serve the following roles:
- node-exporter: a node-level metrics exporter that collects CPU, memory, disk, network, and other node metrics and exposes them on a Metrics endpoint.
- Prometheus: a time-series database and monitoring/alerting tool. It scrapes the Metrics endpoints exposed by cAdvisor and node-exporter, stores the time-series data, and provides the PromQL query language for analysis and alerting.
- Grafana: a charting and dashboard tool that queries data from Prometheus and visualizes the running state and metrics of the Kubernetes cluster.
2. Monitoring workflow
(1) Run cAdvisor and node-exporter on every node of the Kubernetes cluster to collect container-level and node-level metrics.
(2) Deploy Prometheus and configure it to scrape the Metrics endpoints of cAdvisor and node-exporter, storing the container and node time-series data.
(3) Build monitoring dashboards in Grafana: select Prometheus as the data source, write PromQL queries, and display metrics such as cluster CPU usage, memory usage, and network traffic.
(4) Based on the collected metrics, define alerting rules in Prometheus so that alerts are sent when a metric crosses its threshold. Together, this stack provides comprehensive monitoring of the containers and nodes in a Kubernetes cluster: metrics and dashboards give a direct view of cluster state, alerting is automated, and the whole setup is well suited to keeping microservice workloads on K8S running reliably.
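The scrape step (2) can be sketched as a minimal standalone prometheus.yml. Note this is only an illustration: the job names and target addresses below are assumptions, not values from this deployment, and kube-prometheus generates its scrape configuration automatically rather than from a hand-written file like this.

```yaml
global:
  scrape_interval: 30s            # how often targets are scraped

scrape_configs:
  # node-exporter serves node metrics on port 9100
  - job_name: 'node-exporter'
    static_configs:
      - targets: ['192.168.10.101:9100']    # hypothetical node address

  # cAdvisor metrics are served by the kubelet over HTTPS
  - job_name: 'cadvisor'
    scheme: https
    metrics_path: /metrics/cadvisor
    static_configs:
      - targets: ['192.168.10.101:10250']   # hypothetical kubelet address
    tls_config:
      insecure_skip_verify: true            # kubelet auth omitted in this sketch
```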
The concrete deployment looks like this:
- node-exporter: runs on every node as a DaemonSet, collecting node Metrics.
- Prometheus: deployed via the Prometheus Operator and run as a Deployment, used for scraping Metrics and alerting.
- Grafana: deployed via the Grafana Operator, used for dashboard display.
3. Kubernetes monitoring metrics
Metrics for K8S itself:
- CPU utilization: node, Pod, and container CPU utilization, used to monitor CPU resource usage.
- Memory utilization: node, Pod, and container memory utilization, used to monitor memory resource usage.
- Network traffic: node, Pod, and container network traffic, used to monitor packet sizes and bandwidth utilization.
- Disk usage: node disk usage, used to monitor node disk-space consumption.
- Pod status: the number of Pods in Running, Waiting, Succeeded, Failed, and other states, used to monitor Pod health.
- Node status: the number of nodes in Ready, NotReady, and Unreachable states, used to monitor node health.
- Container restart count: the restart count of a single container or of all containers in a Pod, used to monitor container stability.
- API service metrics: Kubernetes API Server request latency, request QPS, error-code counts, and so on, used to monitor API Server performance.
- Cluster component metrics: runtime metrics of etcd, kubelet, kube-proxy, and other components, used to monitor component health.
These are the key indicators of a Kubernetes cluster's running state. They can be collected and stored with tools such as Prometheus and then visualized with dashboards designed in Grafana. When a metric leaves its normal range, threshold-based alerts can also be configured to keep the Kubernetes cluster and its services running reliably.
For example:
- CPU utilization above 80%
- Memory utilization above 90%
- Sudden spikes in network traffic or disk usage
- More than 10% of Pods/nodes in NotReady state
- API Server request latency above 200ms
- An etcd node down, and so on.
The thresholds for these alerting rules should be tuned to the cluster size and service load.
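As a sketch, the first two example thresholds above could be expressed as a PrometheusRule resource for the Operator to pick up. The rule names, severity labels, and 5-minute windows are illustrative assumptions, not rules shipped with this deployment:

```yaml
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: example-resource-alerts        # hypothetical name
  namespace: monitoring
  labels:
    prometheus: k8s
    role: alert-rules
spec:
  groups:
    - name: resource-usage
      rules:
        # Node CPU utilization above 80% for 5 minutes
        - alert: NodeCPUUsageHigh
          expr: 100 - (avg by (instance) (rate(node_cpu_seconds_total{mode="idle"}[5m])) * 100) > 80
          for: 5m
          labels:
            severity: warning
        # Node memory utilization above 90% for 5 minutes
        - alert: NodeMemoryUsageHigh
          expr: (1 - node_memory_MemAvailable_bytes / node_memory_MemTotal_bytes) * 100 > 90
          for: 5m
          labels:
            severity: critical
```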
II. Installing Prometheus
1. Clone the project branch from GitHub
[root@k8s-master ~]# git clone -b release-0.10 https://github.com/prometheus-operator/kube-prometheus.git
2. Install the Prometheus Operator
The Prometheus Operator is an open-source project from CoreOS that provides a Kubernetes-native way to run and manage Prometheus. It can automatically create, configure, and manage Prometheus instances and integrate them with the service-discovery mechanisms in Kubernetes, enabling automatic monitoring of the cluster.
The difference between Prometheus and the Prometheus Operator:
Prometheus is an open-source monitoring system that records metrics and provides a query interface and an alerting mechanism. The Prometheus Operator is a solution for running and managing Prometheus on Kubernetes. Compared with deploying Prometheus by hand, the Operator automatically creates, configures, and manages Prometheus instances and integrates them with Kubernetes service discovery, greatly reducing the manual work involved.
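For illustration, the declarative style the Operator enables looks roughly like this: apply a `Prometheus` custom resource, and the Operator creates and manages the underlying Prometheus StatefulSet itself. The values below are assumptions for the sketch, not the kube-prometheus defaults:

```yaml
apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  name: example                        # hypothetical instance name
  namespace: monitoring
spec:
  replicas: 2                          # the Operator keeps two Prometheus pods running
  serviceAccountName: prometheus-k8s
  serviceMonitorSelector: {}           # pick up every ServiceMonitor
  resources:
    requests:
      memory: 400Mi
```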
Install it as follows (the ku used in the transcripts throughout this document is a shell alias for kubectl):
[root@master ~]# cd kube-prometheus/
[root@master kube-prometheus]# ku apply --server-side -f manifests/setup
customresourcedefinition.apiextensions.k8s.io/alertmanagerconfigs.monitoring.coreos.com serverside-applied
customresourcedefinition.apiextensions.k8s.io/alertmanagers.monitoring.coreos.com serverside-applied
customresourcedefinition.apiextensions.k8s.io/podmonitors.monitoring.coreos.com serverside-applied
customresourcedefinition.apiextensions.k8s.io/probes.monitoring.coreos.com serverside-applied
customresourcedefinition.apiextensions.k8s.io/prometheuses.monitoring.coreos.com serverside-applied
customresourcedefinition.apiextensions.k8s.io/prometheusrules.monitoring.coreos.com serverside-applied
customresourcedefinition.apiextensions.k8s.io/servicemonitors.monitoring.coreos.com serverside-applied
customresourcedefinition.apiextensions.k8s.io/thanosrulers.monitoring.coreos.com serverside-applied
namespace/monitoring serverside-applied
Note:
--server-side
The main goal of this feature is to move the apply logic from kubectl into kube-apiserver, which fixes many of the ownership-conflict problems encountered today. Declarative configuration can then be performed directly through the API, without depending on the specific kubectl apply command.
To remove the Prometheus Operator, run:
kubectl delete --ignore-not-found=true -f manifests/setup
Note:
The role of prometheus-operator is to create the Prometheus-related resources and to watch and manage the resource objects it creates.
3. Once the Operator container is running, install the Prometheus stack
[root@master kube-prometheus]# ku apply --server-side -f manifests/
Note:
To remove the Prometheus stack:
kubectl delete --ignore-not-found=true -f manifests/ -f manifests/setup
Note:
kube-prometheus-stack is an all-in-one bundle that also provides the alerting component alertmanager, grafana, and other sub-components.
4. Check the status of the Prometheus containers
[root@master kube-prometheus]# ku get pod -n monitoring
NAME READY STATUS RESTARTS AGE
alertmanager-main-0 2/2 Running 0 4m4s
alertmanager-main-1 1/2 ImagePullBackOff 0 4m4s
alertmanager-main-2 2/2 Running 0 4m4s
blackbox-exporter-6b79c4588b-q8jxx 3/3 Running 0 4m12s
grafana-7fd69887fb-j9d2n 0/1 ImagePullBackOff 0 4m11s
kube-state-metrics-55f67795cd-rf6s7 2/3 ImagePullBackOff 0 4m11s
node-exporter-f9dtj 2/2 Running 0 4m11s
node-exporter-hcxgp 2/2 Running 0 4m11s
node-exporter-jshsm 2/2 Running 0 4m11s
prometheus-adapter-5565cc8d76-6stvc 1/1 Running 0 4m10s
prometheus-adapter-5565cc8d76-sp6vj 1/1 Running 0 4m10s
prometheus-k8s-0 1/2 ImagePullBackOff 0 4m4s
prometheus-k8s-1 0/2 PodInitializing 0 4m4s
prometheus-operator-6dc9f66cb7-bhhrh 2/2 Running 0 4m10s
5. View the ServiceMonitors
A ServiceMonitor defines how to monitor a dynamic set of services, using label selection to decide which Services are monitored. This lets a team set a convention for how metrics are exposed; new services that follow the convention are then discovered automatically, with no reconfiguration.
For Prometheus to monitor any application inside Kubernetes, an Endpoints object must exist. An Endpoints object is essentially a list of IP addresses, and is usually populated automatically by a Service object: the Service matches Pods via its label selector and adds them to the Endpoints object. A Service can expose one or more ports, backed by Endpoints lists, and these endpoints normally point at Pods.
The ServiceMonitor object introduced by the Prometheus Operator discovers these Endpoints objects and configures Prometheus to monitor the Pods; the endpoints section of the ServiceMonitor spec configures which ports of those Endpoints are scraped for metrics.
The Prometheus Operator uses ServiceMonitors to manage scrape configuration.
The ServiceMonitors created by the stack can be listed as follows:
[root@master kube-prometheus]# ku get servicemonitors -A
NAMESPACE NAME AGE
monitoring alertmanager-main 4m49s
monitoring blackbox-exporter 4m49s
monitoring coredns 4m48s
monitoring grafana 4m48s
monitoring kube-apiserver 4m48s
monitoring kube-controller-manager 4m48s
monitoring kube-scheduler 4m48s
monitoring kube-state-metrics 4m48s
monitoring kubelet 4m48s
monitoring node-exporter 4m48s
monitoring prometheus-adapter 4m47s
monitoring prometheus-k8s 4m47s
monitoring prometheus-operator 4m47s
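The Service → Endpoints → ServiceMonitor relationship described above can be sketched with a minimal ServiceMonitor: it selects Services by label and names the Service port to scrape. The application name and labels here are hypothetical:

```yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: example-app            # hypothetical application
  namespace: monitoring
spec:
  jobLabel: app
  endpoints:
    - port: metrics            # must match a named port on the Service
      interval: 30s
  selector:
    matchLabels:
      app: example-app         # Services carrying this label are scraped
  namespaceSelector:
    matchNames:
      - default
```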
6. Change the type of the grafana Service to NodePort
Note: the default type is ClusterIP
[root@master kube-prometheus]# ku edit svc grafana -n monitoring
# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: "2024-09-18T00:51:14Z"
  labels:
    app.kubernetes.io/component: grafana
    app.kubernetes.io/name: grafana
    app.kubernetes.io/part-of: kube-prometheus
    app.kubernetes.io/version: 8.3.3
  name: grafana
  namespace: monitoring
  resourceVersion: "9661"
  uid: 4375052f-17cd-4622-8e3f-fcc2ebb1e061
spec:
  clusterIP: 10.96.56.246
  clusterIPs:
  - 10.96.56.246
  internalTrafficPolicy: Cluster
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack
  ports:
  - name: http
    nodePort: 32082
    port: 3000
    protocol: TCP
    targetPort: http
  selector:
    app.kubernetes.io/component: grafana
    app.kubernetes.io/name: grafana
    app.kubernetes.io/part-of: kube-prometheus
  sessionAffinity: None
  type: NodePort
status:
  loadBalancer: {}
[root@master kube-prometheus]# ku get svc grafana -n monitoring
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
grafana NodePort 10.96.56.246 <none> 3000:32082/TCP 15m
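Interactive editing is one way to make this change; the same result can also be achieved non-interactively with a strategic merge patch. This is an equivalent sketch, not an additional required step:

```shell
# switch the grafana Service to NodePort without opening an editor
kubectl patch svc grafana -n monitoring -p '{"spec":{"type":"NodePort"}}'
```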
7. Access grafana
http://192.168.10.101:32082
Note:
The default credentials are admin/admin. The first login prompts you to change the password; if you do not want to change it, click Skip.
8. Change the type of the Prometheus Service
Change the type to NodePort; the default is ClusterIP.
[root@master kube-prometheus]# ku edit svc prometheus-k8s -n monitoring
# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: "2024-09-18T00:51:15Z"
  labels:
    app.kubernetes.io/component: prometheus
    app.kubernetes.io/instance: k8s
    app.kubernetes.io/name: prometheus
    app.kubernetes.io/part-of: kube-prometheus
    app.kubernetes.io/version: 2.32.1
  name: prometheus-k8s
  namespace: monitoring
  resourceVersion: "9770"
  uid: 3f6d5cfa-2f10-4426-aad2-76523ed68b48
spec:
  clusterIP: 10.104.223.81
  clusterIPs:
  - 10.104.223.81
  internalTrafficPolicy: Cluster
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack
  ports:
  - name: web
    nodePort: 32370
    port: 9090
    protocol: TCP
    targetPort: web
  - name: reloader-web
    port: 8080
    protocol: TCP
    targetPort: reloader-web
  selector:
    app.kubernetes.io/component: prometheus
    app.kubernetes.io/instance: k8s
    app.kubernetes.io/name: prometheus
    app.kubernetes.io/part-of: kube-prometheus
  sessionAffinity: ClientIP
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10800
  type: NodePort
status:
  loadBalancer: {}
[root@master kube-prometheus]# ku get svc prometheus-k8s -n monitoring
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
prometheus-k8s NodePort 10.104.223.81 <none> 9090:32370/TCP,8080:32140/TCP 19m
9. Access Prometheus
http://192.168.10.101:32370
View the monitoring targets.
Note:
An alerting rule in Prometheus has three states:
- inactive: not yet triggered
- pending: triggered, but the duration set by "for" has not yet elapsed
- firing: triggered, and the set duration has elapsed
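These states can also be inspected from Prometheus itself: the built-in ALERTS series carries an alertstate label. A query sketch (the alertname used here is hypothetical):

```promql
# all alerts currently firing
ALERTS{alertstate="firing"}

# is a specific rule pending?
ALERTS{alertstate="pending", alertname="NodeCPUUsageHigh"}
```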
III. Configuring Grafana Dashboards
1. Add a data source
Note: in this walkthrough Grafana already has a Prometheus data source, so steps (1), (2), and (3) can be skipped.
(1) Add a data source
Click the Dashboards button (the four-squares icon) and choose "Add your first data source".
(2) Choose Prometheus as the data source
Hover over Prometheus and click the "Select" button on the far right.
(3) Configure the data source
In the HTTP section, set URL to "http://prometheus-k8s:9090"; prometheus-k8s here is the Service name inside the K8S cluster, and an IP address can be used instead.
Then click the "Save & Test" button at the bottom of the page to save and confirm the test passes.
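Equivalently, a data source can be defined declaratively with Grafana's provisioning file format rather than through the UI. A sketch (the file path is a Grafana convention, not a step taken in this walkthrough):

```yaml
# e.g. /etc/grafana/provisioning/datasources/prometheus.yaml
apiVersion: 1
datasources:
  - name: Prometheus
    type: prometheus
    access: proxy
    url: http://prometheus-k8s.monitoring.svc:9090
    isDefault: true
```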
2. Import a monitoring template by its ID
Click the + button under the search box on the left of the home page, choose Import, and enter monitoring template ID 13105. Click Load to load it, then click Import to import it.
After these steps, the Node metrics are displayed on the Dashboard page.
Other templates:
Richer templates can be found on the official site:
https://grafana.com/grafana/dashboards/
Commonly used templates:
- Kubernetes cluster: 7249
- Docker Registry: 9621
- Docker and system monitoring: 893
- K8s for Prometheus Dashboard 20211010 (Chinese): 13105
- Kubernetes Pods: 4686
- Linux Stats with Node Exporter: 14731
IV. Adding a cloud-native monitoring target
The preceding sections covered the essentials of Prometheus and the monitoring workflow for both cloud-native and non-cloud-native applications. However, the Prometheus installation did not include monitoring of the ETCD cluster. ETCD is the database of Kubernetes and holds its most critical data; the state and performance of ETCD directly affect the state of the Kubernetes cluster, so monitoring the ETCD cluster is very important. In this section we deploy ETCD monitoring to learn how a cloud-native application is monitored.
ETCD exposes a Metrics endpoint from which its internal runtime data can be read directly, via the Metrics port 2379, similar to the kubelet seen earlier. The difference is that ETCD's external endpoint must be accessed over HTTPS, so requests need to carry the matching certificates.
1. Create the ETCD Service and Endpoints manifest
[root@master kube-prometheus]# vim etcd-svc.yaml
apiVersion: v1
kind: Endpoints
metadata:
  labels:
    app: etcd-prom
  name: etcd-prom
  namespace: kube-system
subsets:
- addresses:
  - ip: 192.168.10.101
#  - ip: 192.168.10.102
#  - ip: 192.168.10.103
  ports:
  - name: https-metrics
    port: 2379     # etcd port
    protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: etcd-prom
  name: etcd-prom
  namespace: kube-system
spec:
  ports:
  - name: https-metrics
    port: 2379
    protocol: TCP
    targetPort: 2379
  type: ClusterIP
Note:
In a kubeadm-installed cluster, etcd runs only on the master node.
2. Apply the ETCD Service manifest
[root@master kube-prometheus]# ku create -f etcd-svc.yaml
endpoints/etcd-prom created
service/etcd-prom created
3. Check the created ETCD Service
[root@master kube-prometheus]# ku get svc -n kube-system etcd-prom
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
etcd-prom ClusterIP 10.109.121.206 <none> 2379/TCP 2m18s
4. Access test with the certificates
[root@master kube-prometheus]# curl --cacert /etc/kubernetes/pki/etcd/ca.crt --cert /etc/kubernetes/pki/etcd/healthcheck-client.crt --key /etc/kubernetes/pki/etcd/healthcheck-client.key https://192.168.10.101:2379/metrics
Note:
The ServiceMonitor must be configured with certificates to request ETCD's Metrics endpoint.
5. Create a Secret containing the ETCD certificates
ku -n monitoring create secret generic etcd-certs --from-file=/etc/kubernetes/pki/etcd/ca.crt --from-file=/etc/kubernetes/pki/etcd/healthcheck-client.crt --from-file=/etc/kubernetes/pki/etcd/healthcheck-client.key
Note:
Secret is a Kubernetes resource type used to protect sensitive information such as passwords and tokens; storing such information in a Secret is the safer practice.
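For illustration, the `ku create secret generic` command above is equivalent to applying a manifest like the following; the payloads shown are placeholders, not real certificate data:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: etcd-certs
  namespace: monitoring
type: Opaque
data:
  # each value is the base64-encoded file content, e.g. base64 -w0 ca.crt
  ca.crt: <base64 of /etc/kubernetes/pki/etcd/ca.crt>
  healthcheck-client.crt: <base64 of /etc/kubernetes/pki/etcd/healthcheck-client.crt>
  healthcheck-client.key: <base64 of /etc/kubernetes/pki/etcd/healthcheck-client.key>
```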
6. Edit the prometheus resource to mount the Secret created in the previous step
[root@master kube-prometheus]# ku edit prometheus k8s -n monitoring
# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  creationTimestamp: "2024-09-18T00:51:14Z"
  generation: 1
  labels:
    app.kubernetes.io/component: prometheus
    app.kubernetes.io/instance: k8s
    app.kubernetes.io/name: prometheus
    app.kubernetes.io/part-of: kube-prometheus
    app.kubernetes.io/version: 2.32.1
  name: k8s
  namespace: monitoring
  resourceVersion: "9754"
  uid: 8b7f3f06-44d8-4604-83b5-acccc1a25661
spec:
  alerting:
    alertmanagers:
    - apiVersion: v2
      name: alertmanager-main
      namespace: monitoring
      port: web
  enableFeatures: []
  externalLabels: {}
  image: quay.io/prometheus/prometheus:v2.32.1
  nodeSelector:
    kubernetes.io/os: linux
  podMetadata:
    labels:
      app.kubernetes.io/component: prometheus
      app.kubernetes.io/instance: k8s
      app.kubernetes.io/name: prometheus
      app.kubernetes.io/part-of: kube-prometheus
      app.kubernetes.io/version: 2.32.1
  podMonitorNamespaceSelector: {}
  podMonitorSelector: {}
  probeNamespaceSelector: {}
  probeSelector: {}
  replicas: 2
  resources:
    requests:
      memory: 400Mi
  ruleNamespaceSelector: {}
  ruleSelector: {}
  securityContext:
    fsGroup: 2000
    runAsNonRoot: true
    runAsUser: 1000
  serviceAccountName: prometheus-k8s
  serviceMonitorNamespaceSelector: {}
  serviceMonitorSelector: {}
  version: 2.32.1
  secrets:
  - etcd-certs
Note:
kubectl edit prometheus k8s -n monitoring
prometheus: the resource type
k8s: the resource name
7. Check the certificates inside the Prometheus pod
[root@master kube-prometheus]# ku -n monitoring create secret generic etcd-certs --from-file=/etc/kubernetes/pki/etcd/ca.crt --from-file=/etc/kubernetes/pki/etcd/healthcheck-client.crt --from-file=/etc/kubernetes/pki/etcd/healthcheck-client.key
secret/etcd-certs created
[root@master kube-prometheus]# ku -n monitoring exec -it prometheus-k8s-0 -c prometheus -- sh
/prometheus $ cd /etc/prometheus/secrets/etcd-certs/
/etc/prometheus/secrets/etcd-certs $ ls
ca.crt healthcheck-client.crt healthcheck-client.key
Note:
It takes a short while before the three certificate files appear.
8. Expose the etcd Pod as a Service
[root@master kube-prometheus]# ku get pod -A | grep etcd
kube-system etcd-master 1/1 Running 2 (83m ago) 29d
[root@master kube-prometheus]# ku expose pod etcd-master -n kube-system --name=etcd-k8s --port=2379 --target-port 2379
service/etcd-k8s exposed
9. Add a name to the Service port
Add a name declaration to the port of the Service named etcd-k8s.
[root@master kube-prometheus]# ku edit svc etcd-k8s -n kube-system
# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: "2024-09-18T01:59:17Z"
  labels:
    component: etcd
    tier: control-plane
  name: etcd-k8s
  namespace: kube-system
  resourceVersion: "16197"
  uid: e1967be2-3b2f-4a63-84df-0dca834aa15d
spec:
  clusterIP: 10.108.207.84
  clusterIPs:
  - 10.108.207.84
  internalTrafficPolicy: Cluster
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack
  ports:
  - port: 2379
    protocol: TCP
    targetPort: 2379
    name: api
  selector:
    component: etcd
    tier: control-plane
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}
10. Create Etcd-ServiceMonitor.yaml
[root@master kube-prometheus]# vim Etcd-ServiceMonitor.yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: etcd-k8s
  namespace: monitoring
  labels:
    k8s-app: etcd-k8s
spec:
  jobLabel: k8s-app
  endpoints:
  - port: api
    interval: 30s
    scheme: https
    tlsConfig:
      caFile: /etc/prometheus/secrets/etcd-certs/ca.crt
      certFile: /etc/prometheus/secrets/etcd-certs/healthcheck-client.crt
      keyFile: /etc/prometheus/secrets/etcd-certs/healthcheck-client.key
      insecureSkipVerify: true
  selector:
    matchLabels:
      component: etcd
  namespaceSelector:
    matchNames:
    - kube-system
11. Apply Etcd-ServiceMonitor.yaml
[root@master kube-prometheus]# ku apply -f Etcd-ServiceMonitor.yaml
servicemonitor.monitoring.coreos.com/etcd-k8s created
12. View the monitoring targets
An ETCD target has been added.
13. Import an ETCD monitoring template in grafana
Template ID: 9618
V. Adding a non-cloud-native monitoring target
The previous section monitored ETCD, a cloud-native application: software built along cloud-native lines ships with a Metrics endpoint that a monitoring platform can scrape directly. Non-cloud-native applications (such as MySQL, Redis, and Kafka) expose no Metrics endpoint, so a matching Exporter is used to collect their data and expose it on a Metrics endpoint. This walkthrough uses MySQL as the test case to demonstrate monitoring a non-cloud-native application with an Exporter.
1. Install a mysql instance in Kubernetes
[root@master kube-prometheus]# ku create deploy mysql --image=mysql:5.7.23
deployment.apps/mysql created
2. Set the mysql password
[root@master kube-prometheus]# ku set env deploy/mysql MYSQL_ROOT_PASSWORD=pwd123
deployment.apps/mysql env updated
Note:
The password must be set first, otherwise the mysql pod cannot reach a normal state.
3. Check the pod
[root@master kube-prometheus]# ku get pod
NAME READY STATUS RESTARTS AGE
mysql-dcf858795-l46x4 1/1 Running 0 24s
4. Create a Service to expose the mysql port
[root@master kube-prometheus]# ku expose deployment mysql --type NodePort --port=3306
service/mysql exposed
[root@master kube-prometheus]# ku get svc -l app=mysql
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
mysql NodePort 10.99.132.36 <none> 3306:32148/TCP 29s
5. Access test
[root@master kube-prometheus]# yum -y install mysql
[root@master kube-prometheus]# mysql -u root -ppwd123 -h 192.168.10.101 -P 32148
6. Grant privileges
MySQL [(none)]> grant all on *.* to exporter@'%' identified by 'exporter';
Query OK, 0 rows affected, 1 warning (0.03 sec)
7. Deploy mysql exporter to collect mysql monitoring data
[root@master kube-prometheus]# vim mysql-exporter.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql-exporter
  namespace: monitoring
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: mysql-exporter
  template:
    metadata:
      labels:
        k8s-app: mysql-exporter
    spec:
      containers:
      - name: mysql-exporter
        image: registry.cn-beijing.aliyuncs.com/dotbalo/mysqld-exporter
        env:
        - name: DATA_SOURCE_NAME
          value: "exporter:exporter@(mysql.default:3306)/"
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 9104
---
apiVersion: v1
kind: Service
metadata:
  name: mysql-exporter
  namespace: monitoring
  labels:
    k8s-app: mysql-exporter
spec:
  type: ClusterIP
  selector:
    k8s-app: mysql-exporter
  ports:
  - name: api
    port: 9104
    protocol: TCP
[root@master kube-prometheus]# ku create -f mysql-exporter.yaml
deployment.apps/mysql-exporter created
service/mysql-exporter created
[root@master kube-prometheus]# ku get -f mysql-exporter.yaml
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/mysql-exporter 1/1 1 1 68s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/mysql-exporter ClusterIP 10.106.50.50 <none> 9104/TCP 68s
8. Test fetching the metrics data
[root@master kube-prometheus]# curl 10.106.50.50:9104/metrics | tail -10
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 130k 0 130k 0 0 2191k 0 --:--:-- --:--:-- --:--:-- 2215k
# TYPE process_virtual_memory_max_bytes gauge
process_virtual_memory_max_bytes 1.8446744073709552e+19
# HELP promhttp_metric_handler_requests_in_flight Current number of scrapes being served.
# TYPE promhttp_metric_handler_requests_in_flight gauge
promhttp_metric_handler_requests_in_flight 1
# HELP promhttp_metric_handler_requests_total Total number of scrapes by HTTP status code.
# TYPE promhttp_metric_handler_requests_total counter
promhttp_metric_handler_requests_total{code="200"} 0
promhttp_metric_handler_requests_total{code="500"} 0
promhttp_metric_handler_requests_total{code="503"} 0
9. Configure the ServiceMonitor
[root@master kube-prometheus]# vim mysql-sm.yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: mysql-exporter
  namespace: monitoring
  labels:
    k8s-app: mysql-exporter
spec:
  jobLabel: k8s-app
  endpoints:
  - port: api
    interval: 30s
    scheme: http
  selector:
    matchLabels:
      k8s-app: mysql-exporter
  namespaceSelector:
    matchNames:
    - monitoring
10. Create this ServiceMonitor
[root@master kube-prometheus]# ku create -f mysql-sm.yaml
servicemonitor.monitoring.coreos.com/mysql-exporter created
11. Check that mysql appears among the monitoring targets in prometheus
12. Import a mysql monitoring template in grafana
Template ID: 6239
VI. Adding external hosts
https://github.com/prometheus-community
Download and install the Windows version of the exporter.
After installation it automatically opens port 9182 for prometheus.
1. Install node_exporter on the monitored Linux host
(1) Install node_exporter
[root@localhost ~]# hostnamectl set-hostname agent01
[root@localhost ~]# bash
[root@agent01 ~]# sed -i 's/SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
[root@agent01 ~]# setenforce 0
[root@agent01 ~]# systemctl disable firewalld
[root@agent01 ~]# systemctl stop firewalld
[root@agent01 ~]# tar zxvf node_exporter-1.7.0.linux-amd64.tar.gz
[root@agent01 ~]# mv node_exporter-1.7.0.linux-amd64 /usr/local/node_exporter
(2) Register it as a system service
[root@agent01 ~]# vim /usr/lib/systemd/system/node_exporter.service
[Unit]
Description=node_exporter
After=network.target
[Service]
ExecStart=/usr/local/node_exporter/node_exporter
Restart=on-failure
[Install]
WantedBy=multi-user.target
(3) Start it
[root@agent01 ~]# systemctl daemon-reload
[root@agent01 ~]# systemctl enable node_exporter
[root@agent01 ~]# systemctl start node_exporter
(4) Check the port
[root@agent01 ~]# netstat -anpt | grep 9100
tcp6 0 0 :::9100 :::* LISTEN
6352/node_exporter
2. Install windows_exporter on the monitored Windows host
This step is simple and is not demonstrated here: install windows_exporter-0.20.0-amd64.msi directly on the Windows system; no settings are required.
3. Create a static Prometheus configuration that points at the external hosts
[root@master kube-prometheus]# vim prometheus-additional.yaml
- job_name: 'WindowsServerMonitor'
  static_configs:
    - targets:
      - "192.168.10.50:9182"
      labels:
        server_type: "windows"
  relabel_configs:
    - source_labels: [__address__]
      target_label: instance
Note:
The exporter process on a Linux host listens on TCP port 9100; the exporter process on a Windows host listens on TCP port 9182.
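By the same pattern, a Linux host could be added to prometheus-additional.yaml as a second job. This fragment is an illustrative extension; the target address is an assumption, since the IP of the agent01 host above is not given:

```yaml
- job_name: 'LinuxServerMonitor'
  static_configs:
    - targets:
      - "192.168.10.60:9100"     # hypothetical Linux host address
      labels:
        server_type: "linux"
```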
4. Create a secret from the file above, to serve as prometheus's static configuration
[root@master kube-prometheus]# ku create secret generic additional-configs --from-file=prometheus-additional.yaml -n monitoring
secret/additional-configs created
5. Check that it was created successfully
[root@k8s-master ~]# kubectl get secret additional-configs -n monitoring
NAME TYPE DATA AGE
additional-configs Opaque 1 16m
6. Edit the prometheus configuration
[root@master kube-prometheus]# ku edit prometheus -n monitoring
# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  creationTimestamp: "2024-09-18T00:51:14Z"
  generation: 2
  labels:
    app.kubernetes.io/component: prometheus
    app.kubernetes.io/instance: k8s
    app.kubernetes.io/name: prometheus
    app.kubernetes.io/part-of: kube-prometheus
    app.kubernetes.io/version: 2.32.1
  name: k8s
  namespace: monitoring
  resourceVersion: "15352"
  uid: 8b7f3f06-44d8-4604-83b5-acccc1a25661
spec:
  alerting:
    alertmanagers:
    - apiVersion: v2
      name: alertmanager-main
      namespace: monitoring
      port: web
  enableFeatures: []
  externalLabels: {}
  image: quay.io/prometheus/prometheus:v2.32.1
  nodeSelector:
    kubernetes.io/os: linux
  podMetadata:
    labels:
      app.kubernetes.io/component: prometheus
      app.kubernetes.io/instance: k8s
      app.kubernetes.io/name: prometheus
      app.kubernetes.io/part-of: kube-prometheus
      app.kubernetes.io/version: 2.32.1
  podMonitorNamespaceSelector: {}
  podMonitorSelector: {}
  probeNamespaceSelector: {}
  probeSelector: {}
  replicas: 2
  resources:
    requests:
      memory: 400Mi
  ruleNamespaceSelector: {}
  ruleSelector: {}
  secrets:
  - etcd-certs
  securityContext:
    fsGroup: 2000
    runAsNonRoot: true
    runAsUser: 1000
  serviceAccountName: prometheus-k8s
  serviceMonitorNamespaceSelector: {}
  serviceMonitorSelector: {}
  version: 2.32.1
  additionalScrapeConfigs:
    key: prometheus-additional.yaml
    name: additional-configs
    optional: true
7. Verify that it was loaded automatically
[root@master kube-prometheus]# ku get secret -n monitoring
NAME TYPE DATA AGE
additional-configs Opaque 1 4m48s
alertmanager-main Opaque 1 125m
alertmanager-main-generated Opaque 1 125m
alertmanager-main-tls-assets-0 Opaque 0 125m
alertmanager-main-token-hh944 kubernetes.io/service-account-token 3 125m
blackbox-exporter-token-x5k8h kubernetes.io/service-account-token 3 125m
default-token-7tcvm kubernetes.io/service-account-token 3 127m
etcd-certs Opaque 3 63m
grafana-config Opaque 1 125m
grafana-datasources Opaque 1 125m
grafana-token-dt9pd kubernetes.io/service-account-token 3 125m
kube-state-metrics-token-sfw9h kubernetes.io/service-account-token 3 125m
node-exporter-token-rj525 kubernetes.io/service-account-token 3 125m
prometheus-adapter-token-l9v2p kubernetes.io/service-account-token 3 125m
prometheus-k8s Opaque 1 125m
prometheus-k8s-tls-assets-0 Opaque 0 125m
prometheus-k8s-token-lc5m4 kubernetes.io/service-account-token 3 125m
prometheus-k8s-web-config Opaque 1 125m
prometheus-operator-token-9gcrn kubernetes.io/service-account-token 3 125m
8. Inspect the created secret
[root@master kube-prometheus]# ku get secret additional-configs -n monitoring -oyaml
apiVersion: v1
data:
  prometheus-additional.yaml: LSBqb2JfbmFtZTogJ1dpbmRvd3NTZXJ2ZXJNb25pdG9yJwogIHN0YXRpY19jb25maWdzOgogICAgLSB0YXJnZXRzOgogICAgICAtICIxOTIuMTY4LjEwLjUwOjkxODIiCiAgICAgIGxhYmVsczoKICAgICAgICBzZXJ2ZXJfdHlwZTogIndpbmRvd3MiCiAgcmVsYWJlbF9jb25maWdzOgogICAgLSBzb3VyY2VfbGFiZWxzOiBbX19hZGRyZXNzX19dCiAgICAgIHRhcmdldF9sYWJlbDogaW5zdGFuY2UK
kind: Secret
metadata:
  creationTimestamp: "2024-09-18T02:52:24Z"
  name: additional-configs
  namespace: monitoring
  resourceVersion: "20576"
  uid: 7c53888b-b887-48a8-93f4-783a192b6f03
type: Opaque
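The data value shown above is simply the base64 encoding of prometheus-additional.yaml, which can be verified on any machine with base64 installed, no cluster access needed. A sketch that decodes a prefix of the blob (any 4-character-aligned prefix of a base64 string decodes cleanly on its own):

```shell
# the first 16 characters of the secret's data field
printf 'LSBqb2JfbmFtZTog' | base64 -d
# → "- job_name: "
```

Against the cluster, the full value could be extracted with kubectl (e.g. via -o jsonpath) and piped through base64 -d to recover the whole file.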
9. View the prometheus monitoring targets
10. Import a monitoring template in grafana
Template ID: 12566
This template targets Windows servers.
Other matching templates can be chosen as well:
- 12633: a template for monitoring Linux nodes
- 14694: a template for monitoring Windows nodes