Quick ELK setup with Docker
1. Create the elk directories
Create the elasticsearch data directory:
mkdir -p /usr/local/share/elk/elasticsearch/data/
Create the logstash pipeline directory:
mkdir -p /usr/local/share/elk/logstash/pipeline/
Create the kibana config directory:
mkdir -p /usr/local/share/elk/kibana/conf/
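The official Elasticsearch image runs as a non-root user (uid 1000), so if Elasticsearch later exits with a permission error on its data path, making the bind-mounted data directory writable by that user usually resolves it (a sketch, using the directory created above):
# allow the elasticsearch container user (uid 1000) to write to the bind-mounted data dir
chown -R 1000:1000 /usr/local/share/elk/elasticsearch/data/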
2. Create docker-compose.yml (if Docker and docker-compose are not installed on your Linux host yet, install them first)
Go to the elk directory created above:
cd /usr/local/share/elk/
Create docker-compose.yml:
vi docker-compose.yml, paste the following content into the file, then save it.
#docker-compose.yml
version: '3.3'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:8.5.3
    restart: always
    container_name: elasticsearch
    ports:
      - "9200:9200"
      - "9300:9300"
    environment:
      - discovery.type=single-node
      - xpack.security.enabled=false
    volumes:
      - /usr/local/share/elk/elasticsearch/data:/usr/share/elasticsearch/data
    networks:
      - elk-net
  logstash:
    image: docker.elastic.co/logstash/logstash:8.5.3
    restart: always
    container_name: logstash
    ports:
      - "5044:5044/tcp"
      - "9600:9600/tcp"
    volumes:
      - /usr/local/share/elk/logstash/pipeline:/usr/share/logstash/pipeline
    depends_on:
      - elasticsearch
    networks:
      - elk-net
  kibana:
    image: docker.elastic.co/kibana/kibana:8.5.3
    restart: always
    container_name: kibana
    ports:
      - "5601:5601"
    #volumes:
    #  - /usr/local/share/elk/kibana/conf:/usr/share/kibana/config
    environment:
      - ELASTICSEARCH_HOSTS=http://elasticsearch:9200
    depends_on:
      - elasticsearch
    networks:
      - elk-net

networks:
  elk-net:
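YAML is indentation-sensitive and pasting into vi can mangle it, so before moving on you can have docker-compose validate the file and print the merged configuration:
cd /usr/local/share/elk/ && docker-compose config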
3. Create logstash.conf
Go to the logstash pipeline directory created in step 1:
cd /usr/local/share/elk/logstash/pipeline
Create logstash.conf:
vi logstash.conf
Paste the following content into the file and save it.
#logstash.conf
input {
  beats {
    port => 5044
  }
}

filter {
  if "wclflow" in [tags] {
    # Extract the leading timestamp and keep the rest of the line
    grok {
      match => { "message" => "%{TIMESTAMP_ISO8601:timestamp} %{GREEDYDATA:remaining_message}" }
      add_field => { "[@metadata][timestamp]" => "%{timestamp}" }
    }
    # Drop the raw message so the next grok can rebuild it without the timestamp
    mutate {
      remove_field => [ "message" ]
    }
    # Split the remaining part into thread, log level and message body
    grok {
      match => { "remaining_message" => "%{DATA:thread} %{LOGLEVEL:level} %{GREEDYDATA:message}" }
    }
    # Use the extracted timestamp as the event's @timestamp
    date {
      match => [ "timestamp", "yyyy-MM-dd HH:mm:ss.SSS" ]
      target => "@timestamp"
    }
  }
}

output {
  elasticsearch {
    hosts => ["http://elasticsearch:9200"]
    # one index per day
    index => "wclflow-logs-%{+YYYY.MM.dd}"
  }
}
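The filter above only processes events carrying the wclflow tag, so whatever ships the logs has to add that tag. A minimal Filebeat sketch, assuming Filebeat 8.x and a hypothetical log path /var/log/wclflow/app.log (replace both with your own setup):
#filebeat.yml
filebeat.inputs:
  - type: filestream
    paths:
      - /var/log/wclflow/app.log   # hypothetical path, point this at your application's log file
    tags: ["wclflow"]              # must match the tag checked in logstash.conf

output.logstash:
  hosts: ["192.168.7.46:5044"]     # the host running the logstash container (example IP used in this article)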
4. Start ELK
Go to the /usr/local/share/elk directory and start the stack:
cd /usr/local/share/elk && docker-compose up -d
After the command returns, wait ten seconds or so and check the containers; if all three show as Up, the stack started successfully.
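To confirm Elasticsearch itself is responding (security is disabled in this compose file, so no credentials are needed), query it directly; it should return a short JSON document with the cluster name and version:
docker ps
curl http://localhost:9200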
5. Switch Kibana to Chinese
Copy the configuration file out of the kibana container, edit it, then copy it back in.
Go to the kibana config directory created in step 1:
cd /usr/local/share/elk/kibana/conf/
Copy the config file out of the container into the current directory:
docker cp kibana:/usr/share/kibana/config/kibana.yml .
Edit kibana.yml and add a new line at the very end of the file: i18n.locale: zh-CN
Save the file.
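If you prefer not to open an editor, appending the line from the shell does the same thing (run from /usr/local/share/elk/kibana/conf/):
echo "i18n.locale: zh-CN" >> kibana.yml
Note that a file copied into a container with docker cp is lost if the container is ever recreated; uncommenting the kibana volumes lines in docker-compose.yml, which point at the conf directory created in step 1, is the persistent alternative.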
Once saved, copy kibana.yml back into the container and restart the kibana service.
Copy it back into the container:
docker cp ./kibana.yml kibana:/usr/share/kibana/config/
Restart the kibana service so it picks up the new config:
docker-compose restart kibana
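To watch Kibana come back up after the restart (Ctrl-C to stop following):
docker logs -f kibana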
6. Access from a browser
Run docker ps to check the container status; if all three containers are running, the ELK stack is up.
Open http://192.168.7.46:5601/ in a browser (use your own server's IP; the port is 5601). On first access Kibana asks you to configure Elastic; choose manual configuration and enter the Elasticsearch address, adjusting the IP and port to your server, in my case 192.168.7.46:9200.
If the Chinese locale does not take effect, try docker restart kibana (in this Docker setup there is no kibana systemd service, so systemctl restart kibana does not apply). After the restart, reload the page and the UI should appear in Chinese.
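To double-check that the setting actually landed inside the container before blaming the browser cache, this should print the line you added (assuming the container name kibana from the compose file):
docker exec kibana grep i18n.locale /usr/share/kibana/config/kibana.yml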