How Does Filebeat Collect logback Logs?

Collecting logback logs with Filebeat

1. Use logstash-logback-encoder to write each log event as a JSON document that Elasticsearch can ingest
2. Filter and reshape the fields with Filebeat processors


  • Add the dependency; logstash-logback-encoder serializes each log event as a single-line JSON document (the Logstash/Elasticsearch-friendly format)
<dependency>
    <groupId>net.logstash.logback</groupId>
    <artifactId>logstash-logback-encoder</artifactId>
    <version>6.6</version>
</dependency>

  • Create logback-spring.xml under resources and configure it:
<?xml version="1.0" encoding="UTF-8"?>
<configuration scan="true" scanPeriod="60 seconds" debug="false">

    <!-- Parameter constants -->
    <!-- Levels: TRACE < DEBUG < INFO < WARN < ERROR -->
    <!-- Usage: logger.trace("msg"), logger.debug("msg"), ... -->
    <property name="log.level" value="debug"/>
    <property name="log.maxHistory" value="2"/>
    <!-- Change this path as needed -->
    <property name="log.filePath" value="./logs/demo"/>
    <property name="log.pattern" value="%d{yyyy-MM-dd HH:mm:ss.SSS} [%thread] %-5level %logger{50} - %msg%n"/>

    <appender name="stash" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <file>${log.filePath}/elastic.log</file>
        <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
            <fileNamePattern>${log.filePath}/elastic.%d{yyyy-MM-dd}.log.gz</fileNamePattern>
            <!-- keep at most ${log.maxHistory} days of archives -->
            <maxHistory>${log.maxHistory}</maxHistory>
            <totalSizeCap>20GB</totalSizeCap>
        </rollingPolicy>
        <encoder class="net.logstash.logback.encoder.LogstashEncoder"/>
    </appender>

    <!-- <root> accepts only a level attribute; name and additivity belong on <logger> -->
    <root level="${log.level}">
        <appender-ref ref="stash"/>
    </root>

</configuration>

  • After formatting, the log file looks like this (tail ./logs/demo/elastic.log):
{"@timestamp":"2021-09-10T16:07:00.000+08:00","@version":"1","message":"debug.task: 59a52175-190c-46c0-af49-55fa293d6f12","logger_name":"com.example.elasticlog.task.LogTask","thread_name":"scheduling-1","level":"DEBUG","level_value":10000}
{"@timestamp":"2021-09-10T16:07:00.000+08:00","@version":"1","message":"info.task: 10cc0356-2831-4c1a-ab3b-af072c298e71","logger_name":"com.example.elasticlog.task.LogTask","thread_name":"scheduling-1","level":"INFO","level_value":20000}
{"@timestamp":"2021-09-10T16:07:00.000+08:00","@version":"1","message":"warn.task: 1ea1457f-9d3a-4782-ab80-bb21e5efa617","logger_name":"com.example.elasticlog.task.LogTask","thread_name":"scheduling-1","level":"WARN","level_value":30000}
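Each line of elastic.log is a standalone JSON document (NDJSON), which is exactly the shape Filebeat's decode_json_fields processor expects. A quick sanity check in Python, using two of the sample lines above (illustrative only, not part of the pipeline):

```python
import json

# Two of the sample lines from elastic.log above.
lines = [
    '{"@timestamp":"2021-09-10T16:07:00.000+08:00","@version":"1",'
    '"message":"debug.task: 59a52175-190c-46c0-af49-55fa293d6f12",'
    '"logger_name":"com.example.elasticlog.task.LogTask",'
    '"thread_name":"scheduling-1","level":"DEBUG","level_value":10000}',
    '{"@timestamp":"2021-09-10T16:07:00.000+08:00","@version":"1",'
    '"message":"info.task: 10cc0356-2831-4c1a-ab3b-af072c298e71",'
    '"logger_name":"com.example.elasticlog.task.LogTask",'
    '"thread_name":"scheduling-1","level":"INFO","level_value":20000}',
]

# Each line parses independently; level_value increases with severity.
entries = [json.loads(line) for line in lines]
for e in entries:
    print(e["level"], e["level_value"])  # DEBUG 10000, then INFO 20000
```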

Append the following to the Filebeat configuration to ship the logs to Elasticsearch or Kafka:
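For context, a minimal input/output skeleton around the processors might look like this (a sketch for Filebeat 6.x; the path and hosts are placeholders for your environment, and only one output may be enabled at a time):

```yaml
filebeat.inputs:
  - type: log
    enabled: true
    paths:
      - ./logs/demo/elastic.log   # the file written by the logback appender above

# Enable exactly one output.
output.elasticsearch:
  hosts: ["localhost:9200"]

# output.kafka:
#   hosts: ["localhost:9092"]
#   topic: "app-logs"
```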

#================================ Processors =====================================
# Docs: https://www.elastic.co/guide/en/beats/filebeat/6.8/filtering-and-enhancing-data.html#filtering-and-enhancing-data
# include_fields:      keep only the listed fields
# decode_json_fields:  decode JSON strings into structured fields
# drop_fields:         remove fields
# rename:              rename fields

processors:
  - include_fields:
      fields: ["message"]
  - decode_json_fields:
      # only "message" holds a JSON string; logger_name, level, etc. live inside it
      fields: ["message"]
      process_array: false
      max_depth: 1
      target: "message"
      overwrite_keys: true
  - drop_fields:
      fields: ["message.@timestamp", "message.@version"]
  # - rename:
  #     fields:
  #       - from: "a.g"
  #         to: "e.d"
  #     ignore_missing: false
  #     fail_on_error: true
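The effect of the three active processors can be sketched in Python (a rough simulation, not Filebeat code) to show what the final event looks like for one sample line:

```python
import json

# A sample line from elastic.log, as read by Filebeat.
raw_line = ('{"@timestamp":"2021-09-10T16:07:00.000+08:00","@version":"1",'
            '"message":"info.task: 10cc0356-2831-4c1a-ab3b-af072c298e71",'
            '"logger_name":"com.example.elasticlog.task.LogTask",'
            '"thread_name":"scheduling-1","level":"INFO","level_value":20000}')

# include_fields: the event keeps only "message" (the raw line).
event = {"message": raw_line}

# decode_json_fields: parse the JSON string and place it under target "message".
event["message"] = json.loads(event["message"])

# drop_fields: remove the duplicated timestamp/version keys.
for key in ("@timestamp", "@version"):
    event["message"].pop(key, None)

print(event["message"]["level"])   # INFO
print(sorted(event["message"]))    # remaining structured fields
```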
