Environment

A fully distributed Hadoop cluster is used:

```
192.168.2.241 hadoop01
```
Filebeat installation

Download page: https://www.elastic.co/cn/downloads/beats/filebeat

Run the following on all nodes as the root user:

```shell
wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-8.2.0-linux-x86_64.tar.gz
mkdir -p /opt/bigdata/filebeat
tar -zxf filebeat-8.2.0-linux-x86_64.tar.gz -C /opt/bigdata/filebeat
cd /opt/bigdata/filebeat/
ln -s filebeat-8.2.0-linux-x86_64 current
```
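To confirm the unpack and symlink worked, Filebeat ships a `version` subcommand (an optional check, assuming the paths above):

```shell
cd /opt/bigdata/filebeat/current
./filebeat version    # should print something like: filebeat version 8.2.0 ...
```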
Since Kafka is already installed, send the output to Kafka.

Edit /opt/bigdata/filebeat/current/filebeat.yml as needed:
```yaml
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/secure          # collect login logs
  fields:
    log_topic: omessages

filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false

setup.template.settings:
  index.number_of_shards: 1

name: "hadoop01"               # change per host

setup.kibana:

output.kafka:
  enabled: true
  hosts: ["hadoop01:9092", "hadoop02:9092", "hadoop03:9092"]
  version: "0.10"
  topic: 'my_test'
  codec.format.string: '%{[message]}'  # output the raw message; remove this line to output processed JSON
  partition.round_robin:
    reachable_only: true
  worker: 2
  required_acks: 1
  compression: gzip
  max_message_bytes: 10000000

logging.level: debug

processors:
  - add_host_metadata:
      when.not.contains.tags: forwarded
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - add_kubernetes_metadata: ~
  - drop_fields:               # fields to remove from events
      fields: ["input", "host", "agent.type", "agent.ephemeral_id", "agent.id", "agent.version", "ecs"]
```
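Before starting the service, the file and the Kafka connection can be sanity-checked with Filebeat's built-in `test` subcommands (run from the install directory; exact output varies by version):

```shell
cd /opt/bigdata/filebeat/current
./filebeat test config -c filebeat.yml    # validates filebeat.yml syntax
./filebeat test output -c filebeat.yml    # attempts a connection to the Kafka brokers
```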
Testing

Start Filebeat as the root user:

```shell
cd /opt/bigdata/filebeat/current
nohup ./filebeat -e -c filebeat.yml &
```
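If the process dies right away, the debug log (captured in nohup.out here) usually shows why. A quick sketch of checking it:

```shell
ps -ef | grep '[f]ilebeat'   # confirm the process is running ([f] avoids matching grep itself)
tail -n 50 nohup.out         # last lines of the debug log
```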
After changing the configuration, restart Filebeat, then open a new SSH connection to the host to generate fresh entries in /var/log/secure.
Verify in Kafka:

```shell
$ cd /opt/bigdata/kafka/current/bin
```
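To actually see the forwarded /var/log/secure lines, one option (a sketch, assuming the `my_test` topic and the hadoop01 broker from the config above) is the console consumer that ships with Kafka:

```shell
cd /opt/bigdata/kafka/current/bin
./kafka-console-consumer.sh \
  --bootstrap-server hadoop01:9092 \
  --topic my_test \
  --from-beginning
```

Each new SSH login should appear as a raw log line, since `codec.format.string: '%{[message]}'` disables the JSON wrapping.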