I. Introduction
To do operations work well, you have to watch not only each server's state but also the server and application logs. With a handful of servers that is manageable, but checking dozens or hundreds of machines by hand would wear anyone out. This is where the ELK log-analysis stack comes in.

ELK is short for Elasticsearch, Logstash, and Kibana. These three are the core suite, though not the whole ecosystem.

Elasticsearch is a real-time full-text search and analytics engine providing three major capabilities: collecting, analyzing, and storing data. It is a scalable distributed system that exposes REST and Java APIs for efficient search, built on top of the Apache Lucene search library.

Logstash is a tool for collecting, parsing, and filtering logs. It supports almost any type of log, including system logs, error logs, and custom application logs. It can receive logs from many sources, including syslog, messaging systems (for example RabbitMQ), and JMX, and it can output data in many ways, including email, WebSockets, and Elasticsearch.

Kibana is a web-based graphical interface for searching, analyzing, and visualizing log data stored in Elasticsearch indices. It uses Elasticsearch's REST interface to retrieve the data, and lets users not only build custom dashboard views of their own data but also query and filter it in ad-hoc ways.
II. Elasticsearch
1. Install the JDK (Elasticsearch 5.x requires Java 8; e.g. yum install -y java-1.8.0-openjdk)
2. Configure the Elasticsearch yum repository
Import the yum signing key
rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
Configure the yum repository
tee /etc/yum.repos.d/elasticsearch.repo <<-'EOF'
[elasticsearch-5.x]
name=Elasticsearch repository for 5.x packages
baseurl=https://artifacts.elastic.co/packages/5.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md
EOF
3. Install Elasticsearch
yum install elasticsearch -y
4. Create the data and log directories
mkdir -p /opt/elk/{data,logs}
Change the owner of the directories
chown -R elasticsearch: /opt/elk/{data,logs}
5. Raise the kernel mmap limit
vim /etc/sysctl.conf
vm.max_map_count=262144
Apply it immediately, without waiting for a reboot:
sysctl -w vm.max_map_count=262144
6. Edit the default config file elasticsearch.yml (the same settings could also go in /etc/sysconfig/elasticsearch; here we only modify the default config file)
vim /etc/elasticsearch/elasticsearch.yml
cluster.name: longs          # cluster name
node.name: elk-1             # node name
path.data: /opt/elk/data     # data path
path.logs: /opt/elk/logs     # log path
bootstrap.memory_lock: true  # lock the JVM memory so it cannot be swapped out
network.host: 0.0.0.0        # listen address
http.port: 9200              # listen port
# discovery.zen.minimum_master_nodes  # required when there are multiple nodes; with three master-eligible nodes set it to (3/2)+1 = 2; not configured here
http.cors.enabled: true
http.cors.allow-origin: "*"  # new parameters so the head plugin can access ES
7. Start the service
systemctl start elasticsearch
8. Check the service status
systemctl status elasticsearch
9. If startup fails, check the log; the log file is named after the cluster name
tail -500 /opt/elk/logs/longs.log
10. Enable start on boot
systemctl enable elasticsearch
11. Test whether the service started successfully
curl -i http://127.0.0.1:9200
HTTP/1.1 200 OK
content-type: application/json; charset=UTF-8
content-length: 317

{
  "name" : "elk-1",
  "cluster_name" : "longs",
  "cluster_uuid" : "ZRnuoTKyTdKhIYO9g48ruQ",
  "version" : {
    "number" : "5.5.1",
    "build_hash" : "19c13d0",
    "build_date" : "2017-07-18T20:44:24.823Z",
    "build_snapshot" : false,
    "lucene_version" : "6.6.0"
  },
  "tagline" : "You Know, for Search"
}
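If you want to check the version from a script, that JSON body can be parsed directly. A minimal sketch (the sample body below is abridged from the response above; jq would work equally well):

```shell
# Abridged sample of the response body from GET / on Elasticsearch
body='{"name":"elk-1","cluster_name":"longs","version":{"number":"5.5.1","lucene_version":"6.6.0"},"tagline":"You Know, for Search"}'
# Extract the version number with Python's stdlib json module
echo "$body" | python3 -c 'import json,sys; print(json.load(sys.stdin)["version"]["number"])'
```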
12. Cluster status
curl http://127.0.0.1:9200/_cat/health
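_cat/health returns one whitespace-separated line per cluster, and the fourth field is the overall status (green/yellow/red). A sketch of pulling that field out in a script; the sample line below is made up to match the single-node setup above:

```shell
# Hypothetical _cat/health line: epoch timestamp cluster status node.total node.data shards pri relo init unassign ...
line='1502593924 10:12:04 longs green 1 1 5 5 0 0 0 0 0 - 100.0%'
# The cluster status is the 4th whitespace-separated field
status=$(echo "$line" | awk '{print $4}')
echo "$status"
```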
13. Install plugins
cd /usr/share/elasticsearch/bin
List the installed plugins
./elasticsearch-plugin list
Install the ICU analysis plugin, which uses the ICU library to add extended Unicode support, including better analysis of Asian languages, Unicode normalization, Unicode-aware case folding, collation support, and transliteration.
./elasticsearch-plugin install analysis-icu
Install the core Mapper Attachments Plugin, which adds a new field data type to Elasticsearch. In version 5.0 this plugin was replaced by the ingest-attachment plugin, so we could install ingest-attachment directly; to be safe, both are installed here.
./elasticsearch-plugin install mapper-attachments
./elasticsearch-plugin install ingest-attachment
Install the mapper-size plugin, which provides the _size metadata field; when enabled, it indexes the size in bytes of the original source field.
./elasticsearch-plugin install mapper-size
Install mapper-murmur3, which computes hashes of field values at index time and stores them in the index for later use with the cardinality aggregation.
./elasticsearch-plugin install mapper-murmur3
Install the elasticsearch-head plugin. Elasticsearch 5.x no longer supports site plugins, so head runs as a standalone service. Install the required environment:
yum install epel-release -y
yum install -y nodejs npm git
git clone git://github.com/mobz/elasticsearch-head.git
cd elasticsearch-head
npm install
(If npm install fails, run npm install -g grunt first)
npm run start & (grunt server also works). Then open a browser and visit the host on port 9100.
To delete data, first find the id of the document in the data browser, then enter test/longs/id and choose DELETE. Screenshot omitted here.
III. Logstash
1. Import the yum signing key
rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
2. Install Logstash
yum install -y logstash
3. Find the install path and add it to the system PATH
rpm -ql logstash
vim /etc/profile
export PATH=/usr/share/logstash/bin/:$PATH
source /etc/profile
4. Test
logstash -e 'input { stdin { } } output { stdout {} }'
After it starts, type hello,world!
The hello,world! test input and its returned value:
2017-08-12T21:14:47.617Z localhost.localdomain hello,world!
Note: -e takes the pipeline definition directly from the command line; input { stdin } is the standard-input plugin, and output { stdout } is the standard-output plugin.
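The same pipeline can also be written as a config file instead of a -e one-liner; this is the form all the later examples use (the path below is just an example name):

```conf
# Hypothetical /etc/logstash/conf.d/stdin.conf: file-based equivalent of the -e one-liner above
input  { stdin { } }
output { stdout { codec => rubydebug } }
```

Run it with logstash -f /etc/logstash/conf.d/stdin.conf.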
5. Send Logstash's output to Elasticsearch
logstash -e 'input { stdin { } } output { elasticsearch { hosts => ["127.0.0.1:9200"] } }'
6. Keep the output both on standard output and in Elasticsearch
logstash -e 'input { stdin { } } output { elasticsearch { hosts => ["127.0.0.1:9200"] } stdout { codec => rubydebug }}'
Official configuration guide: https://www.elastic.co/guide/en/logstash/current/configuration.html
7. Processing syslog messages
vim /etc/logstash/conf.d/syslog.conf
input {
  tcp {
    port => 5000
    type => syslog
  }
  udp {
    port => 5000
    type => syslog
  }
}
filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    date {
      match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}
output {
  elasticsearch { hosts => ["localhost:9200"] }
  stdout { codec => rubydebug }
}
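The grok pattern above begins with %{SYSLOGTIMESTAMP:syslog_timestamp}, which captures the classic "MMM d HH:mm:ss" prefix of a syslog line. A rough illustration with a plain POSIX regex (an approximation for intuition only, not the actual grok definition):

```shell
# A typical syslog line (made-up sample)
msg='Aug 13 10:52:49 localhost sshd[1234]: Accepted password for root'
# Approximate what SYSLOGTIMESTAMP captures: month, day, and time at the start of the line
echo "$msg" | grep -oE '^[A-Z][a-z]{2} +[0-9]{1,2} [0-9:]{8}'
```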
Run the config file
logstash -f syslog.conf
Then, from another client, telnet to the host on port 5000. For example, my host's ip is 192.168.2.30, so: telnet 192.168.2.30 5000. Type anything and log entries will appear.
8. Java logs, stored in separate indices by type
vim /etc/logstash/conf.d/file.conf
input {
  file {
    path => "/var/log/messages"
    type => "system"
    start_position => "beginning"
  }
  file {
    path => "/var/log/elasticsearch/demon.log"
    type => "es-error"
    start_position => "beginning"
  }
}
output {
  if [type] == "system" {
    elasticsearch {
      hosts => ["localhost:9200"]
      index => "system-%{+YYYY.MM.dd}"
    }
  }
  if [type] == "es-error" {
    elasticsearch {
      hosts => ["localhost:9200"]
      index => "es-error-%{+YYYY.MM.dd}"
    }
  }
}
logstash -f /etc/logstash/conf.d/file.conf
9. A grok example
Grok is currently the best way in Logstash to parse messy unstructured log data into something structured and queryable. This tool is ideal for syslog, Apache and other web-server logs, MySQL logs, and in general any log format written for humans rather than machines.
vim /etc/logstash/conf.d/grok.conf
input {
  stdin { }
}
filter {
  grok {
    match => { "message" => "%{IP:client} %{WORD:method} %{URIPATHPARAM:request} %{NUMBER:bytes} %{NUMBER:duration}" }
  }
}
output {
  stdout { codec => "rubydebug" }
}
After it starts, type: 55.3.244.1 GET /index.html 15824 0.043
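For reference, the grok pattern splits that sample line into five named fields. Since the sample is purely space-separated, a plain awk split shows the same mapping (illustration only; grok additionally validates each token against its pattern):

```shell
line='55.3.244.1 GET /index.html 15824 0.043'
# The five tokens land in the grok fields client, method, request, bytes, duration
echo "$line" | awk '{printf "client=%s method=%s request=%s bytes=%s duration=%s\n", $1, $2, $3, $4, $5}'
```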
10. A multiline-matching example
vim /etc/logstash/conf.d/mutiline.conf
input {
  stdin {
    codec => multiline {
      pattern => "^\["
      negate => "true"
      what => "previous"
    }
  }
}
output {
  stdout { codec => "rubydebug" }
}
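The multiline codec above says: any line that does NOT start with "[" (negate => "true") is appended to the previous event (what => "previous"). A toy simulation of that grouping rule in awk, run on a made-up Java-style stack trace (it illustrates the rule only, not Logstash itself; joined lines are shown with " | "):

```shell
# Lines starting with '[' begin a new event; all other lines fold into the previous one
printf '[2017-08-13 10:00:01] start\njava.lang.Exception\n\tat Foo.bar\n[2017-08-13 10:00:02] next\n' \
  | awk '/^\[/ { if (ev) print ev; ev = $0; next } { ev = ev " | " $0 } END { if (ev) print ev }'
```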
11. Analyzing nginx access logs with Logstash
Install nginx
yum install -y nginx
Edit the nginx.conf config file so the logs are saved in JSON format
vim /etc/nginx/nginx.conf
Comment out the access_log line in the http block and add the following:
log_format json '{"@timestamp":"$time_iso8601",'
                '"@version":"1",'
                '"client":"$remote_addr",'
                '"url":"$uri",'
                '"status":"$status",'
                '"domain":"$host",'
                '"host":"$server_addr",'
                '"size":"$body_bytes_sent",'
                '"responsetime":"$request_time",'
                '"referer":"$http_referer",'
                '"ua":"$http_user_agent"'
                '}';
In the server block, add an access log saved in JSON format:
access_log /var/log/nginx/access_json.log json;
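As a quick sanity check that the log_format above really emits valid JSON, take one line as nginx would write it (all values below are made up) and run it through a JSON parser:

```shell
# A made-up access line in the json log_format defined above
sample='{"@timestamp":"2017-08-13T10:52:49+08:00","@version":"1","client":"127.0.0.1","url":"/index.html","status":"200","domain":"localhost","host":"127.0.0.1","size":"612","responsetime":"0.000","referer":"-","ua":"curl/7.29.0"}'
# If this prints the status, the line parsed cleanly as JSON
echo "$sample" | python3 -c 'import json,sys; print(json.load(sys.stdin)["status"])'
```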
Check the nginx configuration
nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
If there are no errors, start the service
systemctl start nginx
Test the nginx service
curl -i http://127.0.0.1
Watch the nginx access log in real time
tail -f /var/log/nginx/access_json.log
Create Logstash's nginx.conf config file
vim /etc/logstash/conf.d/nginx.conf
input {
  file {
    path => "/var/log/nginx/access_json.log"
    codec => json
  }
}
output {
  stdout { codec => "rubydebug" }
}
12. Analyzing syslog with Logstash
Edit the rsyslog.conf configuration file
vim /etc/rsyslog.conf
Enable the following line, pointing it at the local ip
Change #*.* @@remote-host:514 to *.* @@127.0.0.1:514 (@@ forwards over TCP; a single @ would use UDP)
Restart the rsyslog service
systemctl restart rsyslog
Create the syslog.conf config file
vim /etc/logstash/conf.d/system-syslog.conf
input {
  syslog {
    type => "system-syslog"
    host => "192.168.1.121"
    port => "514"
  }
}
output {
  stdout { codec => "rubydebug" }
}
Run the config file
logstash -f system-syslog.conf
Create the complete Logstash config file
vim /etc/logstash/conf.d/all.conf
input {
  syslog {
    type => "system-syslog"
    host => "127.0.0.1"
    port => "514"
  }
  file {
    path => "/var/log/messages"
    type => "system"
    start_position => "beginning"
  }
  file {
    path => "/var/log/elasticsearch/longs.log"
    type => "es-error"
    start_position => "beginning"
    codec => multiline {
      pattern => "^\["
      negate => "true"
      what => "previous"
    }
  }
  file {
    path => "/var/log/nginx/access_json.log"
    codec => json
    type => "nginx-log"
    start_position => "beginning"
  }
}
output {
  if [type] == "system" {
    elasticsearch {
      hosts => ["localhost:9200"]
      index => "system-%{+YYYY.MM.dd}"
    }
  }
  if [type] == "es-error" {
    elasticsearch {
      hosts => ["localhost:9200"]
      index => "es-error-%{+YYYY.MM.dd}"
    }
  }
  if [type] == "nginx-log" {
    elasticsearch {
      hosts => ["localhost:9200"]
      index => "nginx-log-%{+YYYY.MM.dd}"
    }
  }
  if [type] == "system-syslog" {
    elasticsearch {
      hosts => ["localhost:9200"]
      index => "system-syslog-%{+YYYY.MM.dd}"
    }
  }
}
Run the all.conf config file with Logstash
logstash -f /etc/logstash/conf.d/all.conf &
IV. Kibana
Install Kibana
yum install -y kibana
Edit the config file
vim /etc/kibana/kibana.yml
server.port: 5601
server.host: "0.0.0.0"
elasticsearch.url: "http://localhost:9200"
Install screen so Kibana can run in the background
yum install -y screen
screen
Start the service
systemctl start kibana
Enable start on boot
systemctl enable kibana
Test whether Kibana is up
curl -i http://127.0.0.1:5601
Configure live visualizations
Create a pie chart of unique client ips visiting Nginx
Nginx page-view counts per page
Total Nginx page views
Nginx access-time fluctuation chart
Put all the charts on a dashboard
V. Redis
1. Install Redis
yum install -y redis
2. Edit the config file so Redis runs in the background
vim /etc/redis.conf
Change daemonize no to daemonize yes
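The same edit can be done non-interactively with sed; here is a sketch run against a throwaway copy of the file (against the real system you would target /etc/redis.conf):

```shell
# Work on a throwaway copy so this is safe to run anywhere
printf 'daemonize no\nport 6379\n' > /tmp/redis.conf.example
# Flip daemonize from no to yes in place
sed -i 's/^daemonize no/daemonize yes/' /tmp/redis.conf.example
grep '^daemonize' /tmp/redis.conf.example
```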
3. Start Redis
systemctl start redis
systemctl enable redis
4. Test Redis
redis-cli
127.0.0.1:6379> set a long
OK
127.0.0.1:6379> save
OK
127.0.0.1:6379>
5. Edit the Logstash file
vim /etc/logstash/conf.d/redis-out.conf
input {
  stdin {}
}
output {
  redis {
    host => "127.0.0.1"
    port => "6379"
    db => '6'
    data_type => "list"
    key => 'demo'
  }
}
Run Logstash
logstash -f /etc/logstash/conf.d/redis-out.conf
After it starts, type:
hello,redis
Open another terminal and run redis-cli
select 6
keys *
LINDEX demo -1
The value Redis returns:
"{\"@timestamp\":\"2017-08-13T10:52:49.014Z\",\"@version\":\"1\",\"host\":\"localhost.localdomain\",\"message\":\"hell.redis\"}"
6. Send all the monitored log sources into Redis
vim /etc/logstash/conf.d/test.conf
input {
  syslog {
    type => "system-syslog"
    host => "127.0.0.1"
    port => "514"
  }
  file {
    path => "/var/log/messages"
    type => "system"
    start_position => "beginning"
  }
  file {
    path => "/var/log/elasticsearch/longs.log"
    type => "es-error"
    start_position => "beginning"
    codec => multiline {
      pattern => "^\["
      negate => "true"
      what => "previous"
    }
  }
  file {
    path => "/var/log/nginx/access_json.log"
    codec => json
    type => "nginx-log"
    start_position => "beginning"
  }
}
output {
  if [type] == "system" {
    redis {
      host => "127.0.0.1"
      port => "6379"
      db => '6'
      data_type => "list"
      key => "system"
    }
  }
  if [type] == "es-error" {
    redis {
      host => "127.0.0.1"
      port => "6379"
      db => '6'
      data_type => "list"
      key => "es-error"
    }
  }
  if [type] == "nginx-log" {
    redis {
      host => "127.0.0.1"
      port => "6379"
      db => '6'
      data_type => "list"
      key => "nginx-log"
    }
  }
  if [type] == "system-syslog" {
    redis {
      host => "127.0.0.1"
      port => "6379"
      db => '6'
      data_type => "list"
      key => "system-syslog"
    }
  }
}
7. In another terminal, check:
redis-cli
127.0.0.1:6379> SELECT 6
OK
127.0.0.1:6379[6]> KEYS *
1) "system"
2) "nginx-log"
3) "demo"
8. Read the data back out of Redis and write it into Elasticsearch (this really calls for a second host; here I test on the same machine just to see the effect, which is otherwise pointless). First run test.conf in the background, then change path.data in logstash.yml; otherwise the second instance will not start, complaining that the data directory is already in use.
vim /etc/logstash/conf.d/testout.conf
input {
  redis {
    type => "system"
    host => "127.0.0.1"
    port => "6379"
    db => "6"
    data_type => "list"
    key => 'system'
  }
  redis {
    type => "es-error"
    host => "127.0.0.1"
    port => "6379"
    db => "6"
    data_type => "list"
    key => 'es-error'
  }
  redis {
    type => "nginx-log"
    host => "127.0.0.1"
    port => "6379"
    db => "6"
    data_type => "list"
    key => 'nginx-log'
  }
  redis {
    type => "system-syslog"
    host => "127.0.0.1"
    port => "6379"
    db => "6"
    data_type => "list"
    key => 'system-syslog'
  }
}
output {
  if [type] == "system" {
    elasticsearch {
      hosts => ["127.0.0.1:9200"]
      index => "system-%{+YYYY.MM.dd}"
    }
  }
  if [type] == "es-error" {
    elasticsearch {
      hosts => ["127.0.0.1:9200"]
      index => "es-error-%{+YYYY.MM.dd}"
    }
  }
  if [type] == "nginx-log" {
    elasticsearch {
      hosts => ["127.0.0.1:9200"]
      index => "nginx-log-%{+YYYY.MM.dd}"
    }
  }
  if [type] == "system-syslog" {
    elasticsearch {
      hosts => ["127.0.0.1:9200"]
      index => "system-syslog-%{+YYYY.MM.dd}"
    }
  }
}
logstash -f /etc/logstash/conf.d/testout.conf
9. Log in to Kibana: the data is there, which shows the config file works. That said, testing like this on a single machine is not meaningful and is not recommended.
Most of the content above is adapted from 安神.