
Integrating Spring Boot with syslog + Logstash to Collect Logs into ES

1. Background

Logstash is a real-time data collection engine that can ingest data of many types and analyze, filter, and normalize it: you filter out the records that match your criteria and feed them into a visualization front end. It supports full or incremental transfer from a variety of data sources, normalization into a standard format, and formatted output, and it is commonly used for log processing. Its workflow has three stages:

  1. input, the data ingestion stage, which accepts sources such as Oracle, MySQL, PostgreSQL, and files;
  2. filter, the normalization stage, where data is filtered and formatted (e.g. reformatting timestamps and strings);
  3. output, the output stage, which writes to receivers such as Elasticsearch, MongoDB, and Kafka.

Architecture: the Spring Boot application emits syslog messages, the system's rsyslog service forwards them, and Logstash listens on the rsyslog port, filters the data, and sends it to ES for storage.

2. Integrating syslog into Spring Boot

Maven dependencies:

<dependency>
    <groupId>org.slf4j</groupId>
    <artifactId>slf4j-api</artifactId>
    <version>1.7.7</version>
</dependency>
<dependency>
    <groupId>ch.qos.logback</groupId>
    <artifactId>logback-core</artifactId>
    <version>1.1.7</version>
</dependency>
<dependency>
    <groupId>ch.qos.logback</groupId>
    <artifactId>logback-classic</artifactId>
    <version>1.1.7</version>
</dependency>

logback.xml configuration

After configuring the appenders, they must be referenced from the root tag (via appender-ref) to take effect;

<?xml version="1.0" encoding="UTF-8"?>
<configuration debug="false">
  <!-- Console output -->
  <appender name="consoleLogAppender" class="ch.qos.logback.core.ConsoleAppender">
    <filter class="ch.qos.logback.classic.filter.ThresholdFilter">
      <level>INFO</level>
    </filter>
    <encoder>
      <pattern>%d{yyyy-MM-dd HH:mm:ss} [%thread] %-5level %logger{20} - %cyan(%.-3072msg %n)</pattern>
    </encoder>
  </appender>

  <appender name="infoFileAppender" class="ch.qos.logback.core.rolling.RollingFileAppender">
    <File>./logs/service.log</File>
    <filter class="ch.qos.logback.classic.filter.LevelFilter">
      <level>INFO</level>
      <onMatch>ACCEPT</onMatch>
      <onMismatch>DENY</onMismatch>
    </filter>
    <encoder>
      <pattern>%d{yyyy-MM-dd HH:mm:ss} [%thread] %-5level %logger{20} - %cyan(%.-3072msg %n)</pattern>
    </encoder>
    <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
      <fileNamePattern>./logs/service-log-%d{yyyy-MM-dd}.log</fileNamePattern>
      <maxHistory>15</maxHistory>
      <totalSizeCap>5GB</totalSizeCap>
    </rollingPolicy>
  </appender>
  <appender name="errorFileAppender" class="ch.qos.logback.core.rolling.RollingFileAppender">
    <File>./logs/service-error.log</File>
    <filter class="ch.qos.logback.classic.filter.LevelFilter">
      <level>ERROR</level>
      <onMatch>ACCEPT</onMatch>
      <onMismatch>DENY</onMismatch>
    </filter>
    <encoder>
      <pattern>%d{yyyy-MM-dd HH:mm:ss} [%thread] %-5level %logger{20} - %cyan(%.-3072msg %n)</pattern>
    </encoder>
    <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
      <fileNamePattern>./logs/service-error.log.%d{yyyy-MM-dd}.log</fileNamePattern>
      <maxHistory>15</maxHistory>
      <totalSizeCap>5GB</totalSizeCap>
    </rollingPolicy>
  </appender>

  <appender name="msgAppender" class="ch.qos.logback.core.rolling.RollingFileAppender">
    <File>./logs/service-msg.log</File>
    <filter class="ch.qos.logback.classic.filter.LevelFilter">
      <level>INFO</level>
      <onMatch>ACCEPT</onMatch>
      <onMismatch>DENY</onMismatch>
    </filter>
    <encoder>
      <pattern>%d{yyyy-MM-dd HH:mm:ss} [%thread] %-5level %logger{20} - %cyan(%.-3072msg %n)</pattern>
    </encoder>
    <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
      <fileNamePattern>./logs/service-msg-%d{yyyy-MM-dd}.log</fileNamePattern>
      <maxHistory>5</maxHistory>
      <totalSizeCap>5GB</totalSizeCap>
    </rollingPolicy>
  </appender>

  <appender name="taskAppender" class="ch.qos.logback.core.rolling.RollingFileAppender">
    <File>./logs/service-task.log</File>
    <filter class="ch.qos.logback.classic.filter.LevelFilter">
      <level>INFO</level>
      <onMatch>ACCEPT</onMatch>
      <onMismatch>DENY</onMismatch>
    </filter>
    <encoder>
      <pattern>%d{yyyy-MM-dd HH:mm:ss} [%thread] %-5level %logger{20} - %cyan(%.-3072msg %n)</pattern>
    </encoder>
    <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
      <fileNamePattern>./logs/service-task-%d{yyyy-MM-dd}.log</fileNamePattern>
      <maxHistory>5</maxHistory>
      <totalSizeCap>5GB</totalSizeCap>
    </rollingPolicy>
  </appender>

  <appender name="mybatisplus" class="ch.qos.logback.core.rolling.RollingFileAppender">
    <File>./logs/service-mybatisplus.log</File>
    <filter class="ch.qos.logback.classic.filter.LevelFilter">
      <level>DEBUG</level>
      <onMatch>ACCEPT</onMatch>
      <onMismatch>DENY</onMismatch>
    </filter>
    <encoder>
      <pattern>%d{yyyy-MM-dd HH:mm:ss} [%thread] %-5level %logger{20} - %cyan(%.-3072msg %n)</pattern>
    </encoder>
    <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
      <fileNamePattern>./logs/service-mybatisplus-%d{yyyy-MM-dd}.log</fileNamePattern>
      <maxHistory>5</maxHistory>
      <totalSizeCap>5GB</totalSizeCap>
    </rollingPolicy>
  </appender>
  
  <!-- Define a SyslogAppender -->
  <appender name="SYSLOG" class="ch.qos.logback.classic.net.SyslogAppender">
    <syslogHost>localhost</syslogHost>
    <port>12525</port>
    <facility>LOCAL0</facility> <!-- Syslog facility: every message this service sends to the syslog server is tagged as coming from LOCAL0 -->
    <filter class="ch.qos.logback.classic.filter.LevelFilter">
      <level>WARN</level>
      <onMatch>ACCEPT</onMatch>
      <onMismatch>DENY</onMismatch>
    </filter>
    <suffixPattern>
      [%d{yyyy-MM-dd HH:mm:ss.SSS}] - [%p] - [%X{app:-${app}}] - [%thread] - [%logger{36}.%M] - %msg%n
    </suffixPattern>
  </appender>

  <logger name="msgLogger" level="info"  additivity="false">
    <appender-ref ref="msgAppender" />
  </logger>

  <logger name="taskLogger" level="info"  additivity="false">
    <appender-ref ref="taskAppender" />
  </logger>

<!--  <logger name="com.zbnsec.opera.project.simulator.framework.task" level="DEBUG">
    <appender-ref ref="mybatisplus" />
  </logger>-->

  <root level="INFO" additivity="false">
    <appender-ref ref="consoleLogAppender"/>
    <appender-ref ref="infoFileAppender"/>
    <appender-ref ref="errorFileAppender"/>
    <appender-ref ref="SYSLOG"/>
  </root>
</configuration>

The SyslogAppender parameters:
syslogHost: the hostname/IP address of the syslog server
port: the syslog server's listening port; the default is 514/udp
facility: identifies the source of the message
suffixPattern: the layout of each log line
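The facility and the level filter together determine the numeric priority (PRI) that prefixes every syslog datagram: PRI = facility code * 8 + severity code. LOCAL0 is facility 16 and logback maps WARN to syslog severity WARNING (4), so the appender above emits messages tagged <132>. A minimal sketch of the arithmetic (the class and lookup tables are illustrative, not part of logback's API):

```java
import java.util.Map;

/** Illustrates how a syslog PRI value is derived: PRI = facility * 8 + severity. */
public class SyslogPri {
    // RFC 3164 facility codes (subset); LOCAL0..LOCAL7 are 16..23
    static final Map<String, Integer> FACILITY = Map.of(
            "USER", 1, "DAEMON", 3, "LOCAL0", 16, "LOCAL1", 17);
    // RFC 3164 severity codes; logback maps WARN -> WARNING (4), INFO -> 6
    static final Map<String, Integer> SEVERITY = Map.of(
            "EMERG", 0, "ERROR", 3, "WARNING", 4, "INFO", 6, "DEBUG", 7);

    static int pri(String facility, String severity) {
        return FACILITY.get(facility) * 8 + SEVERITY.get(severity);
    }

    public static void main(String[] args) {
        // With facility LOCAL0 and the WARN filter above, each datagram starts with <132>
        System.out.println("<" + pri("LOCAL0", "WARNING") + ">");  // prints <132>
    }
}
```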

3. Receiving the Spring Boot Application's Logs with rsyslog

1. Install the rsyslog service on the server

apt install rsyslog        # install
systemctl start rsyslog    # start the service
systemctl status rsyslog   # check the service status
systemctl enable rsyslog   # start rsyslog automatically at boot

2. Configure rsyslog.conf

The rsyslog configuration file lives at /etc/rsyslog.conf

global(workDirectory="/var/lib/rsyslog")
module(load="builtin:omfile" Template="RSYSLOG_TraditionalFileFormat")
include(file="/etc/rsyslog.d/*.conf" mode="optional")

*.* @@localhost:12515

*.info;mail.none;authpriv.none;cron.none                /var/log/messages
authpriv.*                                              /var/log/secure
mail.*                                                  -/var/log/maillog
cron.*                                                  /var/log/cron
*.emerg                                                 :omusrmsg:*
uucp,news.crit                                          /var/log/spooler
local7.*                                                /var/log/boot.log

The forwarding line above (*.* @@host:port) sends every syslog message to the given address; note that @@ denotes TCP, while a single @ denotes UDP, and the target port must match the one Logstash listens on.
If you also need the system logs, the following configuration is required; running tail -500f /var/log/messages will then show the system log being continuously written:

module(load="imuxsock"  SysSock.Use="off") 
module(load="imjournal"  StateFile="imjournal.state") 
module(load="imklog") 
module(load="immark") 
$imjournalRatelimitInterval 0

If the Spring Boot logs should also be stored in the messages file, the following configuration is needed.
Note: if rsyslog listens on port 12525 here while Logstash is also started listening on 12525, the port is already occupied and Logstash will not receive the Spring Boot log data;

# Listen on a UDP port
module(load="imudp")
input(type="imudp" port="12525")

# Listen on a TCP port
module(load="imtcp")
input(type="imtcp" port="12525")

After changing the configuration, run systemctl restart rsyslog to restart the service.
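To smoke-test the listener without starting the Spring Boot application, you can send one hand-built syslog datagram to the port from plain Java. This is a hypothetical helper, not part of the project; the host and port mirror the configuration above:

```java
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.nio.charset.StandardCharsets;

/** Sends one minimal RFC 3164-style syslog message over UDP to test the listener. */
public class SyslogSmokeTest {
    static byte[] buildMessage(String tag, String text) {
        // <132> = facility LOCAL0 (16) * 8 + severity WARNING (4)
        String payload = "<132>" + tag + ": " + text;
        return payload.getBytes(StandardCharsets.UTF_8);
    }

    public static void main(String[] args) throws Exception {
        byte[] data = buildMessage("smoke-test", "hello from UDP");
        try (DatagramSocket socket = new DatagramSocket()) {
            // UDP is connectionless, so send() succeeds even with no listener
            socket.send(new DatagramPacket(data, data.length,
                    InetAddress.getByName("localhost"), 12525));
        }
        System.out.println("sent " + data.length + " bytes");
    }
}
```

If rsyslog (or Logstash) is listening on 12525/udp, the message should show up in /var/log/messages or in the pipeline output.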

4. Integrating Logstash

1. Pull the Logstash image

The Logstash version should match the ES version; otherwise problems may arise.

docker pull docker.elastic.co/logstash/logstash:7.4.0

2. Configure Logstash

Apart from the settings below, everything uses the default configuration shipped in the Logstash container; you can start a throwaway container and copy the defaults (the config and pipeline directories) out of it.
logstash.yml:

config.reload.automatic: true
config.reload.interval: 3s
http.host: "0.0.0.0"
path.logs: /usr/share/logstash/logs/

log4j2.properties (this is Logstash's own logging configuration, not the pipeline definition):

status = error
name = LogstashPropertiesConfig
appender.console.type = Console
appender.console.name = plain_console
appender.console.layout.type = PatternLayout
appender.console.layout.pattern = [%d{ISO8601}][%-5p][%-25c] %m%n
appender.json_console.type = Console
appender.json_console.name = json_console
appender.json_console.layout.type = JSONLayout
appender.json_console.layout.compact = true
appender.json_console.layout.eventEol = true
# Define Rolling File Appender
appender.rolling.type = RollingFile
appender.rolling.name = rolling
appender.rolling.fileName = ${sys:ls.logs}/logstash-plain.log
appender.rolling.filePattern = ${sys:ls.logs}/logstash-%d{yyyy-MM-dd}-%i.log.gz
appender.rolling.layout.type = PatternLayout
appender.rolling.layout.pattern = [%d{ISO8601}][%-5p][%-25c] %m%n
appender.rolling.policies.type = Policies
appender.rolling.policies.size.type = SizeBasedTriggeringPolicy
appender.rolling.policies.size.size = 100MB
appender.rolling.strategy.type = DefaultRolloverStrategy
appender.rolling.strategy.max = 20
rootLogger.level = ${sys:ls.log.level}
rootLogger.appenderRef.console.ref = ${sys:ls.log.format}_console
rootLogger.appenderRef.rolling.ref = rolling

pipelines.yml: for each pipeline configured in the pipeline directory, add a matching entry here

- pipeline.id: system-syslog
  path.config: "/usr/share/logstash/pipeline/fscr-syslog.conf"

fscr-syslog.conf:

input {
  syslog {
    port => 12525
    type => "system-syslog"
  }
}
filter {
  if [type] == "system-syslog" {
    mutate {
      # Remove ANSI escape sequences
      gsub => [
        "message", "\e\[\d+(;\d+)*m", ""
      ]
    }
    if [message] =~ /^\[/ {
      dissect {
        mapping => {
          "message" => "[%{timestamp}] - [%{loglevel}] - [%{app}] - [%{thread_info}] - [%{source_class}] - %{log_message}"
        }
      }
    }
    mutate {
      # Convert "WARN" to "WARNING"
      gsub => [
        "loglevel", "^WARN$", "WARNING"
      ]
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
      add_field => [ "syslog_hostname", "%{logsource}" ]
      add_field => [ "syslog_severity", "%{loglevel}" ]
      add_field => [ "syslog_program", "%{app}" ]
      add_field => [ "syslog_message", "%{message}" ]
      add_field => [ "syslog_timestamp", "%{timestamp}" ]
      remove_field => ["severity_label", "facility_label", "facility", "priority"]
    }
    date {
      # "timestamp" is the field produced by the dissect filter above
      match => ["timestamp", "yyyy-MM-dd HH:mm:ss.SSS"]
      timezone => "Asia/Shanghai"
      target => "@timestamp"
    }
  }
}

output {
  if [loglevel] == "WARNING" or [loglevel] == "ERROR" {
    elasticsearch {
      hosts => ["http://esHost:9200"]
      index => "logstash-%{+YYYY.MM.dd}"
      template_name => "logstash"   # use a template that already exists in ES
      template_overwrite => false
    }
  }
  if [loglevel] == "WARNING" or [loglevel] == "ERROR" {
    stdout {
      codec => rubydebug
    }
  }
}
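The dissect mapping above cuts each message at the literal "] - [" separators. As a sanity check, the same split can be mimicked in plain Java against a made-up sample line (the field names mirror the mapping; this is only a sketch, Logstash itself does the real parsing):

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

/** Mimics the dissect mapping used in fscr-syslog.conf to check a sample line. */
public class DissectCheck {
    static final Pattern LINE = Pattern.compile(
            "\\[(?<timestamp>[^\\]]+)\\] - \\[(?<loglevel>[^\\]]+)\\] - \\[(?<app>[^\\]]+)\\]"
            + " - \\[(?<thread>[^\\]]+)\\] - \\[(?<source>[^\\]]+)\\] - (?<msg>.*)");

    static Map<String, String> parse(String line) {
        Matcher m = LINE.matcher(line);
        if (!m.matches()) return Map.of();       // no match: dissect would tag a failure
        Map<String, String> fields = new LinkedHashMap<>();
        fields.put("timestamp", m.group("timestamp"));
        fields.put("loglevel", m.group("loglevel"));
        fields.put("app", m.group("app"));
        fields.put("thread_info", m.group("thread"));
        fields.put("source_class", m.group("source"));
        fields.put("log_message", m.group("msg"));
        return fields;
    }

    public static void main(String[] args) {
        // A made-up line in the shape produced by the suffixPattern above
        String sample = "[2024-11-06 10:00:00.123] - [WARN] - [demo-service]"
                + " - [main] - [com.example.DemoService.run] - something happened";
        System.out.println(parse(sample));
    }
}
```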

The logstash.json index template:

{
  "name": "logstash",
  "order": 0,
  "version": 60001,
  "index_patterns": [
    "logstash-*"
  ],
  "settings": {
    "index": {
      "number_of_shards": "1",
      "refresh_interval": "5s"
    }
  },
  "mappings": {
    "dynamic_templates": [
      {
        "message_field": {
          "path_match": "message",
          "mapping": {
            "norms": false,
            "type": "text"
          },
          "match_mapping_type": "string"
        }
      },
      {
        "string_fields": {
          "mapping": {
            "norms": false,
            "type": "text",
            "fields": {
              "keyword": {
                "ignore_above": 256,
                "type": "keyword"
              }
            }
          },
          "match_mapping_type": "string",
          "match": "*"
        }
      }
    ],
    "properties": {
      "@timestamp": {
        "type": "date"
      },
      "geoip": {
        "dynamic": true,
        "properties": {
          "ip": {
            "type": "ip"
          },
          "latitude": {
            "type": "half_float"
          },
          "location": {
            "type": "geo_point"
          },
          "longitude": {
            "type": "half_float"
          }
        }
      },
      "@version": {
        "type": "keyword"
      }
    }
  },
  "aliases": {}
}
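Since the output section references this template with template_overwrite => false, it must already exist in ES before the pipeline starts. One way to install it is a single PUT to the legacy template endpoint (_template/logstash). The sketch below uses Java 11's HttpClient; esHost is a placeholder, and the top-level "name" key of the file above may need to be removed from the body before uploading:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.file.Files;
import java.nio.file.Path;

/** Uploads logstash.json as a legacy index template named "logstash". */
public class PutTemplate {
    static HttpRequest buildRequest(String esHost, String body) {
        // Legacy template API: PUT /_template/<name>
        return HttpRequest.newBuilder()
                .uri(URI.create("http://" + esHost + ":9200/_template/logstash"))
                .header("Content-Type", "application/json")
                .PUT(HttpRequest.BodyPublishers.ofString(body))
                .build();
    }

    public static void main(String[] args) throws Exception {
        String body = Files.readString(Path.of("logstash.json"));
        HttpResponse<String> resp = HttpClient.newHttpClient()
                .send(buildRequest("esHost", body), HttpResponse.BodyHandlers.ofString());
        System.out.println(resp.statusCode() + " " + resp.body());
    }
}
```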

Start the container:

docker run --name logstash -itd --net=host \
	-v /opt/fscr/middleware/logstash/logstash/config:/usr/share/logstash/config \
	-v /opt/fscr/middleware/logstash/logstash/pipeline:/usr/share/logstash/pipeline \
	-p 5044:5044 -p 9600:9600 \
	logstash:8.8.0

If the container starts with no error logs and you can see log output being printed, it started normally. (Note: with --net=host the container shares the host's network stack, so the -p port mappings in the command above are ignored.)

