Monitoring MySQL Slow Logs with an ELK Architecture
Contents
1. Architecture Overview
2. Installation and Deployment
3. Filebeat Configuration
4. Logstash Configuration
1. Architecture Overview
This article uses Filebeat to collect MySQL slow-log entries and ship them to Redis, which acts as a buffer. Logstash then pulls the entries from Redis, parses them, and sends them to Elasticsearch for storage, where they can be visualized in Kibana.
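Before anything can be collected, MySQL itself has to write the slow log. A minimal my.cnf sketch follows, assuming the log path used in the Filebeat configuration below; the threshold value is illustrative and should be tuned for your workload:

[mysqld]
slow_query_log      = ON
slow_query_log_file = /usr/local/mysql-8.2.0/mysql_slow.log
long_query_time     = 1   # seconds; statements slower than this are logged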
2. Installation and Deployment
Installation and deployment of the individual ELK components is covered in the section Audit Logs > ELK Log Collection and is not repeated here.
3. Filebeat Configuration
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /usr/local/mysql-8.2.0/mysql_slow.log
  scan_frequency: 10s
  # A slow-log entry spans several lines; every line that does NOT start
  # with "# Time:" is appended after the preceding matching line, so each
  # entry becomes a single event.
  multiline.pattern: '^\s*# Time:'
  multiline.negate: true
  multiline.match: after

output.redis:
  enabled: true
  hosts: ["192.168.122.227:6379","192.168.122.237:6379","192.168.122.238:6379"]
  key: "uap-mysql-slow-log"
  datatype: list
  password: "Redis@123456"
  db: 0
  codec.json:
    pretty: false
  loadbalance: true

logging.level: info
logging.to_files: true
logging.files:
  path: /opt/module/filebeat-8.11.0
  name: filebeat.log
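For reference, a raw slow-log entry looks like the following (an illustrative entry; the exact header layout can vary slightly between MySQL versions). Every line after # Time: belongs to the same entry, which is why the multiline settings above anchor on that prefix:

# Time: 2024-01-15T10:23:45.123456+08:00
# User@Host: app_user[app_user] @ localhost []  Id:    42
# Query_time: 2.357000  Lock_time: 0.001000 Rows_sent: 10  Rows_examined: 100000
use tm_aseanbank_dev;
SET timestamp=1705285425;
SELECT * FROM orders WHERE status = 'PENDING';

Once Filebeat is running, you can confirm that events are reaching the buffer with redis-cli -h 192.168.122.227 -a 'Redis@123456' LLEN uap-mysql-slow-log; the list length should grow as slow queries are logged.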
4. Logstash Configuration
# Sample Logstash configuration for a
# Filebeat -> Redis -> Logstash -> Elasticsearch pipeline.
# Pull log events from Redis.
input {
  redis {
    batch_count => 1                  # number of events returned per request; only applies in list mode
    data_type => "list"               # how the redis input plugin operates
    key => "uap-mysql-slow-log"       # Redis key to read; must match the Filebeat output.redis key
    host => "192.168.122.227"         # Redis address
    port => 6379                      # Redis port
    password => "Redis@123456"        # password, if authentication is enabled
    db => 0                           # Redis database number
    threads => 1                      # number of input threads
    tags => ["uap-mysql-slow-log-159"]
  }
  redis {
    batch_count => 1
    data_type => "list"
    key => "uap-mysql-slow-log"
    host => "192.168.122.237"
    port => 6379
    password => "Redis@123456"
    db => 0
    threads => 1
    tags => ["uap-mysql-slow-log-159"]
  }
  redis {
    batch_count => 1
    data_type => "list"
    key => "uap-mysql-slow-log"
    host => "192.168.122.238"
    port => 6379
    password => "Redis@123456"
    db => 0
    threads => 1
    tags => ["uap-mysql-slow-log-159"]
  }
}
filter {
  if "uap-mysql-slow-log-159" in [tags] {
    # Strip the "# " prefixes and flatten the multiline entry into one
    # line so that a single grok pattern can match it.
    mutate {
      gsub => [
        "message", "# ", "",
        "message", "\n", " ",
        "message", "\s*@\s*", "@"
      ]
    }
    grok {
      match => { "message" => "Time: %{TIMESTAMP_ISO8601:log_time} User@Host: %{DATA:user_host} Id:\s+%{NUMBER:id:int} Query_time:\s+%{NUMBER:query_time:float}\s+Lock_time:\s+%{NUMBER:lock_time:float}\s+Rows_sent:\s+%{NUMBER:rows_sent:int}\s+Rows_examined:\s+%{NUMBER:rows_examined:int} use\s+%{DATA:database};\s+SET\s+timestamp=%{NUMBER:timestamp}; %{GREEDYDATA:sql}" }
    }
    # If the statement carries a leading /* ... */ comment, split it off
    # into [comment] and keep the rest as [slow_sql]; otherwise fall back
    # to the whole statement.
    if [sql] {
      grok {
        match => { "sql" => "\/\* %{GREEDYDATA:comment} \*\/ %{GREEDYDATA:slow_sql}" }
      }
    }
    if ![slow_sql] {
      mutate {
        add_field => { "slow_sql" => "%{sql}" }
      }
    }
    # Use log_time as the event's @timestamp.
    date {
      match => [ "log_time", "ISO8601" ]
      target => "@timestamp"
      timezone => "+08:00"
    }
    mutate {
      remove_field => ["timestamp","input","ecs","log","@version","agent","comment","event","log_time","sql"]
    }
  }
}
output {
  if "uap-mysql-slow-log-159" in [tags] {
    # Route events to a per-environment index based on the database name.
    if "tm_aseanbank_tst" in [database] {
      elasticsearch {
        hosts => ["https://192.168.122.118:9200","https://192.168.122.119:9200","https://192.168.122.120:9200"]
        index => "ipu-cbs-mysql-slow-log-test"
        user => "elastic"
        password => "elastic"
        ssl_certificate_verification => true
        truststore => "/opt/module/logstash-8.11.0/config/certs/http.p12"
        truststore_password => "123456"
      }
    } else if "tm_aseanbank_dev" in [database] {
      elasticsearch {
        hosts => ["https://192.168.122.118:9200","https://192.168.122.119:9200","https://192.168.122.120:9200"]
        index => "ipu-cbs-mysql-slow-log-dev"
        user => "elastic"
        password => "elastic"
        ssl_certificate_verification => true
        truststore => "/opt/module/logstash-8.11.0/config/certs/http.p12"
        truststore_password => "123456"
      }
    }
  }
}
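To verify the pipeline end to end, start Logstash against the file above and list the target indices; the config file name mysql-slow-log.conf is an assumption, and -k skips certificate verification for the cluster's self-signed certificate:

# Start Logstash with the pipeline above (file name is illustrative)
/opt/module/logstash-8.11.0/bin/logstash -f /opt/module/logstash-8.11.0/config/mysql-slow-log.conf

# List the slow-log indices and their document counts
curl -k -u elastic:elastic "https://192.168.122.118:9200/_cat/indices/ipu-cbs-mysql-slow-log-*?v"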
Note: the configuration above writes to two indices, ipu-cbs-mysql-slow-log-dev and ipu-cbs-mysql-slow-log-test. If the Elasticsearch cluster is not configured to auto-create missing indices, these two indices must be created by hand. No particular fields are required: any missing fields are added to the mapping automatically as documents are stored.
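If manual creation is needed, a bare PUT per index is enough, since the mapping grows dynamically as documents arrive; for example:

curl -k -u elastic:elastic -X PUT "https://192.168.122.118:9200/ipu-cbs-mysql-slow-log-dev"
curl -k -u elastic:elastic -X PUT "https://192.168.122.118:9200/ipu-cbs-mysql-slow-log-test"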