Installing and Using Kibana Locally
1 Introduction to Kibana
1.1 Kibana is a data visualization tool that is normally used together with Elasticsearch:
- Elasticsearch is a real-time, distributed search and analytics engine.
- Logstash provides data collection, transformation, enrichment, and output capabilities.
- Kibana is a data visualization tool that gives Elasticsearch a powerful visual interface.
- Filebeat is a lightweight log shipper that collects and forwards log data from a variety of sources.
- The IK analyzer is an Elasticsearch analysis plugin that provides tokenization for Chinese text.
2 Downloading and Configuring Elasticsearch
2.1 Download and extract Elasticsearch
https://www.elastic.co/cn/downloads/elasticsearch
2.2 Configure Elasticsearch: open the extracted directory and find elasticsearch.yml (in the config directory).
Set the following:
network.host: 127.0.0.1
http.port: 9200
discovery.seed_hosts: ["127.0.0.1"]
http.host: 0.0.0.0
Full configuration file:
# ======================== Elasticsearch Configuration =========================
#
# NOTE: Elasticsearch comes with reasonable defaults for most settings.
# Before you set out to tweak and tune the configuration, make sure you
# understand what are you trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# Please consult the documentation for further information on configuration options:
# https://www.elastic.co/guide/en/elasticsearch/reference/index.html
#
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
# cluster.name: rbdc-elk-stack
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
# node.name: LAPTOP-G67LH5N6
#
# Add custom attributes to the node:
#
#node.attr.rack: r1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
# path.data: /data/elasticsearch
#
# Path to log files:
#
# path.logs: /var/log/elasticsearch
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
#bootstrap.memory_lock: true
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# By default Elasticsearch is only accessible on localhost. Set a different
# address here to expose this node on the network:
#
network.host: 127.0.0.1
#
# By default Elasticsearch listens for HTTP traffic on the first free port it
# finds starting at 9200. Set a specific HTTP port here:
#
http.port: 9200
#
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when this node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
discovery.seed_hosts: ["127.0.0.1"]
#
# Bootstrap the cluster using an initial set of master-eligible nodes:
#
#cluster.initial_master_nodes: ["node-1", "node-2"]
#
# For more information, consult the discovery and cluster formation module documentation.
#
# ---------------------------------- Various -----------------------------------
#
# Allow wildcard deletion of indices:
#
#action.destructive_requires_name: false
#----------------------- BEGIN SECURITY AUTO CONFIGURATION -----------------------
#
# The following settings, TLS certificates, and keys have been automatically
# generated to configure Elasticsearch security features on 21-08-2024 06:51:43
#
# --------------------------------------------------------------------------------
# Enable security features
xpack.security.enabled: true
xpack.security.enrollment.enabled: true
# Enable encryption for HTTP API client connections, such as Kibana, Logstash, and Agents
xpack.security.http.ssl:
  enabled: false
  keystore.path: certs/http.p12
# Enable encryption and mutual authentication between cluster nodes
xpack.security.transport.ssl:
  enabled: false
  verification_mode: certificate
  keystore.path: certs/transport.p12
  truststore.path: certs/transport.p12
# Create a new cluster with the current node only
# Additional nodes can still join the cluster later
cluster.initial_master_nodes: ["LAPTOP-G67LH5N6"]
# Allow HTTP API connections from anywhere
# Connections are encrypted and require user authentication
http.host: 0.0.0.0
# Allow other nodes to join the cluster from anywhere
# Connections are encrypted and mutually authenticated
#transport.host: 0.0.0.0
#----------------------- END SECURITY AUTO CONFIGURATION -------------------------
2.3 Set up Elasticsearch users and passwords
Elasticsearch refuses to run as the root superuser, and Kibana connects to Elasticsearch with the built-in kibana_system user rather than the elastic superuser.
If you forget that user's password, reset it with:
bin/elasticsearch-reset-password -u kibana_system
If you enrolled Kibana with a token, you can generate a new enrollment token with:
bin/elasticsearch-create-enrollment-token --scope kibana
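The same tool also resets the elastic superuser's password if you lose the one printed at first startup (a hedged example; the newly generated password is printed to the console):
bin/elasticsearch-reset-password -u elastic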
2.4 Start Elasticsearch
bin/elasticsearch
# Start in the background
bin/elasticsearch -d
The first startup prints the elastic superuser's password and an enrollment token; it is best to save them.
Successful startup log:
2.5 Visit http://localhost:9200/ ; after entering the password, a page like the one below means the startup succeeded.
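You can also verify the node from the command line with curl (a quick check; substitute the elastic password printed at first startup for <password>):
curl -u elastic:<password> http://localhost:9200
A JSON response containing the cluster name and version number confirms Elasticsearch is reachable.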
3 Downloading and Configuring Kibana
3.1 Download and extract Kibana
https://www.elastic.co/cn/downloads/kibana
3.2 Configure Kibana: open the extracted directory and find kibana.yml (in the config directory).
Set the following:
server.port: 5601
server.host: "0.0.0.0"
server.maxPayload: 1048576
elasticsearch.hosts: ["http://127.0.0.1:9200"]
elasticsearch.username: "kibana_system"
elasticsearch.password: "vsrW2gnx+gcSLiGT9f8e"
i18n.locale: "zh-CN"
Full configuration file:
# For more configuration options see the configuration guide for Kibana in
# https://www.elastic.co/guide/index.html
i18n.locale: "zh-CN"
# =================== System: Kibana Server ===================
# Kibana is served by a back end server. This setting specifies the port to use.
server.port: 5601
# Specifies the address to which the Kibana server will bind. IP addresses and host names are both valid values.
# The default is 'localhost', which usually means remote machines will not be able to connect.
# To allow connections from remote users, set this parameter to a non-loopback address.
server.host: "0.0.0.0"
# Enables you to specify a path to mount Kibana at if you are running behind a proxy.
# Use the `server.rewriteBasePath` setting to tell Kibana if it should remove the basePath
# from requests it receives, and to prevent a deprecation warning at startup.
# This setting cannot end in a slash.
# server.basePath: "/kibana"
# Specifies whether Kibana should rewrite requests that are prefixed with
# `server.basePath` or require that they are rewritten by your reverse proxy.
# Defaults to `false`.
#server.rewriteBasePath: false
# Specifies the public URL at which Kibana is available for end users. If
# `server.basePath` is configured this URL should end with the same basePath.
#server.publicBaseUrl: ""
# The maximum payload size in bytes for incoming server requests.
server.maxPayload: 1048576
# The Kibana server's name. This is used for display purposes.
# server.name: "LAPTOP-G67LH5N6"
# Internationalization - Chinese
#i18n.locale: "zh-CN"
# =================== System: Kibana Server (Optional) ===================
# Enables SSL and paths to the PEM-format SSL certificate and SSL key files, respectively.
# These settings enable SSL for outgoing requests from the Kibana server to the browser.
# server.ssl.enabled: false
#server.ssl.certificate: /path/to/your/server.crt
#server.ssl.key: /path/to/your/server.key
# =================== System: Elasticsearch ===================
# The URLs of the Elasticsearch instances to use for all your queries.
elasticsearch.hosts: ["http://127.0.0.1:9200"]
# If your Elasticsearch is protected with basic authentication, these settings provide
# the username and password that the Kibana server uses to perform maintenance on the Kibana
# index at startup. Your Kibana users still need to authenticate with Elasticsearch, which
# is proxied through the Kibana server.
elasticsearch.username: "kibana_system"
elasticsearch.password: "vsrW2gnx+gcSLiGT9f8e"
# Kibana can also authenticate to Elasticsearch via "service account tokens".
# Service account tokens are Bearer style tokens that replace the traditional username/password based configuration.
# Use this token instead of a username/password.
#elasticsearch.serviceAccountToken: "eyJ2ZXIiOiI4LjE0LjAiLCJhZHIiOlsiMTAuMTEwLjEwLjU6OTIwMCJdLCJmZ3IiOiI2ZTRhZDc1ZmYxNWZkOWZkYjYxYzExOTZjZjY1YzY0YTNlMjc2MTE2MjNmYTc5MjJmNjYxMzBhNjMzZjY2M2IyIiwia2V5Ijoib183MGZaRUJzRFVFSXhTaW93RTg6VFFDMzNzM1VRRXVhRXN4cXF4Vy1sZyJ9"
# Time in milliseconds to wait for Elasticsearch to respond to pings. Defaults to the value of
# the elasticsearch.requestTimeout setting.
elasticsearch.pingTimeout: 100000
# Time in milliseconds to wait for responses from the back end or Elasticsearch. This value
# must be a positive integer.
elasticsearch.requestTimeout: 100000
# The maximum number of sockets that can be used for communications with elasticsearch.
# Defaults to `Infinity`.
#elasticsearch.maxSockets: 1024
# Specifies whether Kibana should use compression for communications with elasticsearch
# Defaults to `false`.
#elasticsearch.compression: false
# List of Kibana client-side headers to send to Elasticsearch. To send *no* client-side
# headers, set this value to [] (an empty list).
#elasticsearch.requestHeadersWhitelist: [ authorization ]
# Header names and values that are sent to Elasticsearch. Any custom headers cannot be overwritten
# by client-side headers, regardless of the elasticsearch.requestHeadersWhitelist configuration.
#elasticsearch.customHeaders: {}
# Time in milliseconds for Elasticsearch to wait for responses from shards. Set to 0 to disable.
elasticsearch.shardTimeout: 100000
# =================== System: Elasticsearch (Optional) ===================
# These files are used to verify the identity of Kibana to Elasticsearch and are required when
# xpack.security.http.ssl.client_authentication in Elasticsearch is set to required.
#elasticsearch.ssl.certificate: /path/to/your/client.crt
#elasticsearch.ssl.key: /path/to/your/client.key
# Enables you to specify a path to the PEM file for the certificate
# authority for your Elasticsearch instance.
# elasticsearch.ssl.certificateAuthorities: ["/opt/module/elasticsearch-8.1.0/config/certs/elasticsearch-ca.pem"]
# To disregard the validity of SSL certificates, change this setting's value to 'none'.
# elasticsearch.ssl.verificationMode: none
# =================== System: Logging ===================
# Set the value of this setting to off to suppress all logging output, or to debug to log everything. Defaults to 'info'
#logging.root.level: debug
# Enables you to specify a file where Kibana stores log output.
#logging.appenders.default:
# type: file
# fileName: /var/logs/kibana.log
# layout:
# type: json
# Example with size based log rotation
#logging.appenders.default:
# type: rolling-file
# fileName: /var/logs/kibana.log
# policy:
# type: size-limit
# size: 256mb
# strategy:
# type: numeric
# max: 10
# layout:
# type: json
# Logs queries sent to Elasticsearch.
#logging.loggers:
# - name: elasticsearch.query
# level: debug
# Logs http responses.
#logging.loggers:
# - name: http.server.response
# level: debug
# Logs system usage information.
#logging.loggers:
# - name: metrics.ops
# level: debug
# Enables debug logging on the browser (dev console)
#logging.browser.root:
# level: debug
# =================== System: Other ===================
# The path where Kibana stores persistent data not saved in Elasticsearch. Defaults to data
#path.data: data
# Specifies the path where Kibana creates the process ID file.
#pid.file: /run/kibana/kibana.pid
# Set the interval in milliseconds to sample system and process performance
# metrics. Minimum is 100ms. Defaults to 5000ms.
#ops.interval: 5000
# Specifies locale to be used for all localizable strings, dates and number formats.
# Supported languages are the following: English (default) "en", Chinese "zh-CN", Japanese "ja-JP", French "fr-FR".
#i18n.locale: "en"
# =================== Frequently used (Optional)===================
# =================== Saved Objects: Migrations ===================
# Saved object migrations run at startup. If you run into migration-related issues, you might need to adjust these settings.
# The number of documents migrated at a time.
# If Kibana can't start up or upgrade due to an Elasticsearch `circuit_breaking_exception`,
# use a smaller batchSize value to reduce the memory pressure. Defaults to 1000 objects per batch.
#migrations.batchSize: 1000
# The maximum payload size for indexing batches of upgraded saved objects.
# To avoid migrations failing due to a 413 Request Entity Too Large response from Elasticsearch.
# This value should be lower than or equal to your Elasticsearch cluster’s `http.max_content_length`
# configuration option. Default: 100mb
#migrations.maxBatchSizeBytes: 100mb
# The number of times to retry temporary migration failures. Increase the setting
# if migrations fail frequently with a message such as `Unable to complete the [...] step after
# 15 attempts, terminating`. Defaults to 15
#migrations.retryAttempts: 15
# =================== Search Autocomplete ===================
# Time in milliseconds to wait for autocomplete suggestions from Elasticsearch.
# This value must be a whole number greater than zero. Defaults to 1000ms
unifiedSearch.autocomplete.valueSuggestions.timeout: 100000
# Maximum number of documents loaded by each shard to generate autocomplete suggestions.
# This value must be a whole number greater than zero. Defaults to 100_000
#unifiedSearch.autocomplete.valueSuggestions.terminateAfter: 100000
3.3 Start Kibana
.\bin\kibana.bat
Unlike Elasticsearch, Kibana has no -d flag for running in the background; use your platform's own mechanism instead (for example nohup bin/kibana & on Linux).
Startup log:
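Once the server reports it is listening, you can also hit the status API from the command line (an optional check; if it asks for credentials, pass them with -u as with Elasticsearch):
curl http://localhost:5601/api/status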
3.4 Visit http://localhost:5601/
You will be prompted to log in.
If the account is wrong, Kibana reports that you do not have permission to access the page.
Switch to the superuser account instead; in my case that is the elastic account.
Click "Explore on my own" to view the data.
3.5 You first need an index in the cluster; you can create one from the Dev Tools console.
Example:
# Welcome to the Dev Tools Console!
#
# You can use Console to explore the Elasticsearch API. See the Elasticsearch API reference to learn more:
# https://www.elastic.co/guide/en/elasticsearch/reference/current/rest-apis.html
#
# Here are a few examples to get you started.
# Create an index
PUT /tang
# Add a document to the tang index
POST /tang/_doc
{
"id": "park_rocky-mountain",
"title": "Rocky Mountain",
"description": "Bisected north to south by the Continental Divide, this portion of the Rockies has ecosystems varying from over 150 riparian lakes to montane and subalpine forests to treeless alpine tundra."
}
# Perform a search in the tang index
GET /tang/_search?q="rocky mountain"
# Delete a document from the tang index
DELETE /tang/_doc/LqRAp5EBDLwLf65W04j
Open the Discover panel and you will see the data you just added.
3.6 Alternatively, manage indices from the Management panel.
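If you prefer the console, the standard _cat API lists the indices in the cluster (a quick check you can run from Dev Tools):
GET _cat/indices?v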
4 Deploying and Using Logstash
4.1 Download and extract Logstash
https://www.elastic.co/cn/downloads/logstash
4.2 Open the extracted directory.
4.3 Run a hello world: use standard input and output as the input and output, with no filter:
./bin/logstash -e 'input { stdin {} } output { stdout {} }'
Or with formatted output:
./bin/logstash -e 'input{stdin{}} output{stdout{codec=>rubydebug}}'
After it starts, the command window waits for input; type any text:
PS D:\Downloads\logstash-8.15.0> ./bin/logstash -e 'input { stdin {} } output { stdout {} }'
"Using bundled JDK: D:\Downloads\logstash-8.15.0\jdk\bin\java.exe"
Sending Logstash logs to D:/Downloads/logstash-8.15.0/logs which is now configured via log4j2.properties
[2024-08-31T14:15:54,752][INFO ][logstash.runner ] Log4j configuration path used is: D:\Downloads\logstash-8.15.0\config\log4j2.properties
[2024-08-31T14:15:54,756][WARN ][logstash.runner ] The use of JAVA_HOME has been deprecated. Logstash 8.0 and later ignores JAVA_HOME and uses the bundled JDK. Running Logstash with the bundled JDK is recommended. The bundled JDK has been verified to work with each specific version of Logstash, and generally provides best performance and reliability. If you have compelling reasons for using your own JDK (organizational-specific compliance requirements, for example), you can configure LS_JAVA_HOME to use that version instead.
[2024-08-31T14:15:54,756][INFO ][logstash.runner ] Starting Logstash {"logstash.version"=>"8.15.0", "jruby.version"=>"jruby 9.4.8.0 (3.1.4) 2024-07-02 4d41e55a67 OpenJDK 64-Bit Server VM 21.0.4+7-LTS on 21.0.4+7-LTS +indy +jit [x86_64-mswin32]"}
[2024-08-31T14:15:54,759][INFO ][logstash.runner ] JVM bootstrap flags: [-Xms1g, -Xmx1g, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djruby.compile.invokedynamic=true, -XX:+HeapDumpOnOutOfMemoryError, -Djava.security.egd=file:/dev/urandom, -Dlog4j2.isThreadContextMapInheritable=true, -Dlogstash.jackson.stream-read-constraints.max-string-length=200000000, -Dlogstash.jackson.stream-read-constraints.max-number-length=10000, -Djruby.regexp.interruptible=true, -Djdk.io.File.enableADS=true, --add-exports=jdk.compiler/com.sun.tools.javac.api=ALL-UNNAMED, --add-exports=jdk.compiler/com.sun.tools.javac.file=ALL-UNNAMED, --add-exports=jdk.compiler/com.sun.tools.javac.parser=ALL-UNNAMED, --add-exports=jdk.compiler/com.sun.tools.javac.tree=ALL-UNNAMED, --add-exports=jdk.compiler/com.sun.tools.javac.util=ALL-UNNAMED, --add-opens=java.base/java.security=ALL-UNNAMED, --add-opens=java.base/java.io=ALL-UNNAMED, --add-opens=java.base/java.nio.channels=ALL-UNNAMED, --add-opens=java.base/sun.nio.ch=ALL-UNNAMED, --add-opens=java.management/sun.management=ALL-UNNAMED, -Dio.netty.allocator.maxOrder=11]
[2024-08-31T14:15:54,763][INFO ][logstash.runner ] Jackson default value override `logstash.jackson.stream-read-constraints.max-string-length` configured to `200000000`
[2024-08-31T14:15:54,763][INFO ][logstash.runner ] Jackson default value override `logstash.jackson.stream-read-constraints.max-number-length` configured to `10000`
[2024-08-31T14:15:54,794][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
[2024-08-31T14:15:56,716][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600, :ssl_enabled=>false}
[2024-08-31T14:15:56,783][INFO ][org.reflections.Reflections] Reflections took 99 ms to scan 1 urls, producing 138 keys and 481 values
[2024-08-31T14:15:57,152][INFO ][logstash.javapipeline ] Pipeline `main` is configured with `pipeline.ecs_compatibility: v8` setting. All plugins in this pipeline will default to `ecs_compatibility => v8` unless explicitly configured otherwise.
[2024-08-31T14:15:57,173][INFO ][logstash.javapipeline ][main] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>12, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>1500, "pipeline.sources"=>["config string"], :thread=>"#<Thread:0x39137235 D:/Downloads/logstash-8.15.0/logstash-core/lib/logstash/java_pipeline.rb:134 run>"}
[2024-08-31T14:15:57,693][INFO ][logstash.javapipeline ][main] Pipeline Java execution initialization time {"seconds"=>0.52}
[2024-08-31T14:15:57,769][INFO ][logstash.javapipeline ][main] Pipeline started {"pipeline.id"=>"main"}
The stdin plugin is now waiting for input:
[2024-08-31T14:15:57,778][INFO ][logstash.agent ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
You will then see a result like this:
hello world
{
"@version" => "1",
"message" => "hello world\r",
"@timestamp" => 2024-08-31T06:16:19.422258900Z,
"event" => {
"original" => "hello world\r"
},
"host" => {
"hostname" => "LAPTOP-G67LH5N6"
}
}
4.4 Logstash calls each record in the data stream an event. The event-processing pipeline has three main stages: inputs -> filters -> outputs (a minimal end-to-end sketch follows this list):
- inputs: required; generate events. Common inputs: file, syslog, redis, beats (e.g. Filebeat).
- filters: optional; modify and transform events. Common filters: grok, mutate, drop, clone, geoip.
- outputs: required; ship events elsewhere. Common outputs: elasticsearch, file, graphite, statsd.
- Inputs and outputs also support codecs (coder/decoder). Before version 1.3.0, Logstash only accepted plain-text input and relied on filters to process it; now different data types can be decoded at the input stage, so the full data flow is: input | decode | filter | encode | output. Codecs let Logstash interoperate more easily with products that have their own data formats, such as graphite, fluent, netflow, and collectd, as well as anything using general-purpose formats like msgpack, json, or edn.
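To make the three stages concrete, here is a minimal runnable pipeline sketch (the added env field and the rubydebug codec are arbitrary choices for illustration):
input  { stdin { } }
filter { mutate { add_field => { "env" => "dev" } } }
output { stdout { codec => rubydebug } }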
4.5 Common input plugins
4.5.0 Standard input (stdin)
- The simplest and most basic plugin.
input { stdin { add_field => {"key" => "value"} codec => "plain" tags => ["add"] type => "std" } }
4.5.1 File input plugin
- The file input watches files for changes and wraps each new entry as an event for processing or forwarding.
input { file { path => ["/var/log/*.log", "/var/log/message"] type => "system" start_position => "beginning" } }
4.5.2 Beats input plugin
- The beats input starts a listener that receives events sent by Filebeat or other Beats.
input { beats { port => 5044 } }
4.5.3 TCP input plugin
- The TCP plugin has two modes, "client" and "server", for sending network data and listening for it, respectively.
input { tcp { port => 41414 } }
4.5.4 Redis input plugin
- Reads data buffered in Redis.
input { redis { host => "127.0.0.1" port => 6379 data_type => "list" key => "logstash-list" } }
4.5.5 Syslog input plugin
- Listens for operating-system syslog messages.
input { syslog { } }
4.6 Common filter plugins
4.6.1 grok pattern matching
- grok is the best tool in Logstash for parsing unstructured data into structured, queryable fields. It is well suited to syslog, Apache, MySQL, and other web logs.
input { file { path => "/var/log/http.log" } } filter { grok { match => {"message" => "%{IP:client} %{WORD:method} %{URIPATHPARAM:request} %{NUMBER:bytes} %{NUMBER:duration}"} } }
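With the pattern above, a line such as 55.3.244.1 GET /index.html 15824 0.043 (the sample line from the official grok documentation) is parsed into these fields:
client: 55.3.244.1
method: GET
request: /index.html
bytes: 15824
duration: 0.043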
4.6.2 date filter plugin
- The date plugin converts time fields, for example turning "Apr 17 09:32:01" (MMM dd HH:mm:ss) into "MM-dd HH:mm:ss". Logstash normally stamps each event automatically, but that timestamp reflects processing time (mostly the time the input received the data), which can differ from the time recorded in the log (mainly because of buffering). This plugin lets you replace the default timestamp with the time the log entry actually occurred.
filter { grok { match => ["message", "%{HTTPDATE:logdate}"] } date { match => ["logdate", "dd/MMM/yyyy:HH:mm:ss Z"] } }
4.6.3 mutate filter plugin
- mutate is another important Logstash plugin. It offers rich handling of basic data types and can rename, remove, replace, and modify fields in an event.
filter { mutate { convert => ["request_time", "float"] } }
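The rename/replace/remove operations mentioned above use the same plugin; a small sketch with hypothetical field names:
filter { mutate { rename => { "src_ip" => "client_ip" } replace => { "message" => "%{message} (processed)" } remove_field => [ "tmp_field" ] } }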
4.6.4 json filter plugin
- The json filter decodes JSON-formatted strings, typically when only part of a log line is JSON, e.g. a message field holding a string like {"uid":3081609001,"type":"signal"}. The source option names the field to decode:
filter { json { source => "message" target => "jsoncontent" } }
4.6.5 elasticsearch filter plugin
- Queries Elasticsearch for earlier events and copies fields from the results onto the current event. A typical usage looks like:
filter { elasticsearch { hosts => ["127.0.0.1:9200"] query => "type:start AND operation:%{[opid]}" fields => { "@timestamp" => "started" } } }
4.7 Common output plugins
4.7.1 Elasticsearch output plugin
- Writes events to Elasticsearch. This is the officially recommended output and an essential part of the ELK stack.
output { elasticsearch { hosts => ["127.0.0.1:9200"] index => "filebeat-%{type}-%{+yyyy.MM.dd}" template_overwrite => true } }
4.7.2 Redis output plugin
- Writes events into Redis as a buffer. Logstash filters tend to be resource-hungry and complex filtering is slow, so if events arrive faster than they can be filtered, Redis can sit in between as a buffer.
output { redis { host => "127.0.0.1" port => 6379 data_type => "list" key => "logstash-list" } }
4.7.3 File output plugin
- Writes events to a file.
output { file { path => ... codec => line { format => "custom format: %{message}"} } }
4.7.4 TCP output plugin
- Writes events over a TCP socket, one JSON document per line. It can either accept connections from clients or connect out to a server.
output { tcp { host => "127.0.0.1" port => 80 } }
4.8 Common codec plugins
4.8.1 json codec plugin
- If the source already produces well-formed JSON, the events can be decoded directly with codec => json and the filter/grok configuration can be skipped. A typical setup defines an nginx log_format that writes JSON, for example:
log_format json '{"@timestamp":"$time_iso8601",' '"@version":"1",' '"host":"$server_addr",' '"client":"$remote_addr",' '"size":$body_bytes_sent,' '"responsetime":$request_time,' '"domain":"$host",' '"url":"$uri",' '"status":"$status"}';
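On the Logstash side, the matching input then only needs to declare the codec (a sketch with a hypothetical log path):
input { file { path => "/var/log/nginx/access.log_json" codec => "json" } }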
4.9 Simple example: Logstash output to Elasticsearch
4.9.1 Create (or edit) a file named "logstash-sample.conf" in the Logstash config directory.
4.9.2 Put the following in it:
input {
  #beats {
  #  port => 5044
  #}
  stdin { }
}
output {
  elasticsearch {
    hosts => ["http://127.0.0.1:9200"]
    index => "logstash-%{+YYYY.MM.dd}"
    user => "elastic"
    password => "123456"
  }
}
4.9.3 Start Logstash with that configuration:
./bin/logstash -f config/logstash-sample.conf
After it starts, Logstash prompts for input; type anything.
Then refresh Kibana and you will see the data you just typed has been added.
Expanding the document shows the ID and related fields:
{ "_index": "tang", "_id": "L6Rfp5EBDLwLf65WKIj5", "_version": 1, "_score": 1, "fields": { "@timestamp": [ "2024-08-31T07:38:37.042Z" ], "event.original": [ "hello world\r" ], "event.original.keyword": [ "hello world\r" ], "message.keyword": [ "hello world\r" ], "@version": [ "1" ], "host.hostname.keyword": [ "LAPTOP-G67LH5N6" ], "@version.keyword": [ "1" ], "host.hostname": [ "LAPTOP-G67LH5N6" ], "message": [ "hello world\r" ] } }
5 Downloading and Configuring Filebeat
5.1 Download Filebeat
https://www.elastic.co/cn/downloads/beats/filebeat
5.2 Configure Filebeat: open the extracted directory and find filebeat.yml.
Configure filebeat.yml to ship logs to Elasticsearch:
filebeat.inputs:
  - type: log
    enabled: true
    encoding: utf-8
    paths:
      - D:\Downloads\logs\*.log
    fields:
      level: info
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false
setup.template.settings:
  index.number_of_shards: 1
setup.template.name: "tang"
setup.template.pattern: "tang-*"
setup.kibana:
  host: "localhost:5601"
  username: "dong"
  password: "Aa123456.."
output.elasticsearch:
  hosts: ["localhost:9200"]
  protocol: "http"
  ssl.verification_mode: "none"
  username: "elastic"
  password: "y43xtubPe6-O85aoCK9Y"
  index: "tang-%{+yyyy.MM.dd}"
  indices:
    - index: "tang-%{+yyyy.MM.dd}"
processors:
  - add_host_metadata:
      when.not.contains.tags: forwarded
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - add_kubernetes_metadata: ~
5.3 Go to the installation directory and run the start command:
./filebeat -e -c filebeat.yml
Running successfully:
5.4 Ship a log
In the log directory configured for Filebeat, add a log file matching the *.log pattern configured above (e.g. test.log) with the content below; once it is saved, the log entry appears in Kibana.
{"log.level":"error","message":"{"msg":null,"code":200,"success":true,"data":[{"storeSkuId":"32544","storeSkuName":"11111","salePrice":"2","advertisingWords":null,"img":null,"unitCode":"KG","unitCodeName":"KG","intro":null,"stock":null,"levelOneId":null,"levelOneName":null,"levelTwoId":null,"levelTwoName":null,"levelThreeId":null,"levelThreeName":null,"specification":null,"standardSkuCode":"P03554807","standardSkuId":958,"activityList":[],"activityPrice":null,"authCode":null,"goodsType":2,"skuType":1,"barcode":"1105026100002","allowDecimal":true,"isCustomizableBarcode":true,"showQuantity":50.000,"showSalePrice":null}]}"
}
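To confirm the entry was indexed, you can query the index from Dev Tools (a quick check against the tang-* index pattern configured in filebeat.yml):
GET /tang-*/_search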