Building a Centralized Log Analysis Platform with ELK (Elasticsearch + Logstash + Kibana)
Preface

Elasticsearch + Logstash + Kibana (ELK) is an open-source log management stack. To analyze ordinary website traffic we usually embed a JavaScript tracker such as Google Analytics, Baidu Tongji, or CNZZ, but when a site misbehaves or comes under attack we need to dig into the raw back-end logs, such as Nginx access logs. Single-node tools such as Nginx log rotation, GoAccess, or AWStats work well on one machine, but they fall short for distributed clusters or large data volumes. ELK lets us face those challenges with confidence.
- Logstash: collects, processes, and stores logs
- Elasticsearch: indexes, searches, and analyzes logs
- Kibana: visualizes logs
Kibana 4 ships with the following bundled plugins (they appear under `bundled_plugin_ids` in kibana.yml):

```
plugins/dashboard/index
plugins/discover/index
plugins/doc/index
plugins/kibana/index
plugins/markdown_vis/index
plugins/metric_vis/index
plugins/settings/index
plugins/table_vis/index
plugins/vis_types/index
plugins/visualize/index
```
JVM tuning

``` bash
# Edit elasticsearch.in.sh
vi /usr/share/elasticsearch/bin/elasticsearch.in.sh
```
Changelog

2015-08-31 - first draft

Read the original - http://wsgzao.github.io/post/elk/
Further reading

Installing ELK (Elasticsearch + Logstash + Kibana) on CentOS 7.x - http://www.chenshake.com/centos-install-7-x-elk-elasticsearchlogstashkibana/
Installing an nginx log analysis system (elasticsearch + logstash + redis + kibana) on CentOS 6.5 - http://blog.chinaunix.net/xmlrpc.php?r=blog/article&uid=17291169&id=4898582
logstash-forwarder and grok examples - https://www.ulyaoth.net/threads/logstash-forwarder-and-grok-examples.32413/
三斗室 - http://chenlinux.com/
elastic - https://www.elastic.co/guide
LTMP index - http://wsgzao.github.io/index/#LTMP
Component overview

JDK - http://www.oracle.com/technetwork/java/javase/downloads/index.html
Elasticsearch - https://www.elastic.co/downloads/elasticsearch
Logstash - https://www.elastic.co/downloads/logstash
Kibana - https://www.elastic.co/downloads/kibana
redis - http://redis.io/download
Set the FQDN

An FQDN is needed when creating the SSL certificate.

``` bash
# Set the hostname
cat /etc/hostname
elk

# Edit the hosts file
cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1       localhost localhost.localdomain localhost6 localhost6.localdomain6
127.0.0.1 10-10-87-19
10.10.87.19 elk.ooxx.com elk

# Apply the new hostname
hostname -F /etc/hostname

# Verify
hostname -f
elk.ooxx.com
hostname
elk
```
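As a quick offline sanity check, you can confirm that an /etc/hosts-style line maps the FQDN to the expected address. This is only a sketch over a sample line, not part of the original setup; on a real host you would read /etc/hosts itself.

``` bash
# Sketch: extract the IP that a hosts-file line assigns to elk.ooxx.com.
hosts_line='10.10.87.19 elk.ooxx.com elk'
ip=$(echo "$hosts_line" | awk '/elk\.ooxx\.com/ {print $1}')
echo "elk.ooxx.com -> $ip"
```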
Server side

Java

``` bash
cat /etc/redhat-release
CentOS release 6.5 (Final)

# Install OpenJDK
yum install java-1.7.0-openjdk

java -version
java version "1.7.0_85"
OpenJDK Runtime Environment (rhel-2.6.1.3.el6_6-x86_64 u85-b01)
OpenJDK 64-Bit Server VM (build 24.85-b03, mixed mode)
```
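Scripted installs sometimes need the major version number rather than the full banner. A small sketch, using the sample version string shown above rather than live `java -version` output:

``` bash
# Sketch: pull the minor part of a "1.x" Java version string and require >= 7.
ver='java version "1.7.0_85"'
minor=$(echo "$ver" | sed -E 's/.*"1\.([0-9]+).*/\1/')
if [ "$minor" -ge 7 ]; then
    echo "Java OK"
fi
```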
Elasticsearch
``` bash
# Download and install
wget https://download.elastic.co/elasticsearch/elasticsearch/elasticsearch-1.7.1.noarch.rpm
yum localinstall elasticsearch-1.7.1.noarch.rpm

# Start the service
service elasticsearch start
service elasticsearch status

# List Elasticsearch's configuration files
rpm -qc elasticsearch
/etc/elasticsearch/elasticsearch.yml
/etc/elasticsearch/logging.yml
/etc/init.d/elasticsearch
/etc/sysconfig/elasticsearch
/usr/lib/sysctl.d/elasticsearch.conf
/usr/lib/systemd/system/elasticsearch.service
/usr/lib/tmpfiles.d/elasticsearch.conf

# Check listening ports
netstat -nltp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address  Foreign Address  State   PID/Program name
tcp   0      0      0.0.0.0:9200   0.0.0.0:*        LISTEN  1765/java
tcp   0      0      0.0.0.0:9300   0.0.0.0:*        LISTEN  1765/java
tcp   0      0      0.0.0.0:22     0.0.0.0:*        LISTEN  1509/sshd
tcp   0      0      :::22          :::*             LISTEN  1509/sshd

# Test access
curl -X GET http://localhost:9200/
```
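The curl call above returns a small JSON banner; in scripts it is common to grep it for `"status" : 200`. A self-contained sketch, using a made-up sample response in the 1.x banner format rather than live output:

``` bash
# Sketch: check an Elasticsearch 1.x banner for "status" : 200.
resp='{ "status" : 200, "name" : "elk", "cluster_name" : "elasticsearch" }'
if echo "$resp" | grep -q '"status" : 200'; then
    es_up=yes
    echo "Elasticsearch is up"
fi
```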
Kibana
``` bash
# Download the tarball
wget https://download.elastic.co/kibana/kibana/kibana-4.1.1-linux-x64.tar.gz

# Unpack
tar zxf kibana-4.1.1-linux-x64.tar.gz -C /usr/local/
cd /usr/local/
mv kibana-4.1.1-linux-x64 kibana

# Create a kibana service
vi /etc/rc.d/init.d/kibana
```
``` bash
#!/bin/bash
### BEGIN INIT INFO
# Provides:          kibana
# Default-Start:     2 3 4 5
# Default-Stop:      0 1 6
# Short-Description: Runs kibana daemon
# Description:       Runs the kibana daemon as a non-root user
### END INIT INFO

# Process name
NAME=kibana
DESC="Kibana4"
PROG="/etc/init.d/kibana"

# Configure location of Kibana bin
KIBANA_BIN=/usr/local/kibana/bin

# PID Info
PID_FOLDER=/var/run/kibana/
PID_FILE=/var/run/kibana/$NAME.pid
LOCK_FILE=/var/lock/subsys/$NAME
PATH=/bin:/usr/bin:/sbin:/usr/sbin:$KIBANA_BIN
DAEMON=$KIBANA_BIN/$NAME

# Configure User to run daemon process
DAEMON_USER=root

# Configure logging location
KIBANA_LOG=/var/log/kibana.log

# Begin Script
RETVAL=0

if [ `id -u` -ne 0 ]; then
    echo "You need root privileges to run this script"
    exit 1
fi

# Function library
. /etc/init.d/functions

start() {
    echo -n "Starting $DESC : "
    pid=`pidofproc -p $PID_FILE kibana`
    if [ -n "$pid" ]; then
        echo "Already running."
        exit 0
    else
        # Start Daemon
        if [ ! -d "$PID_FOLDER" ]; then
            mkdir $PID_FOLDER
        fi
        daemon --user=$DAEMON_USER --pidfile=$PID_FILE $DAEMON 1>"$KIBANA_LOG" 2>&1 &
        sleep 2
        pidofproc node > $PID_FILE
        RETVAL=$?
        [[ $? -eq 0 ]] && success || failure
        echo
        [ $RETVAL = 0 ] && touch $LOCK_FILE
        return $RETVAL
    fi
}

reload() {
    echo "Reload command is not implemented for this service."
    return $RETVAL
}

stop() {
    echo -n "Stopping $DESC : "
    killproc -p $PID_FILE $DAEMON
    RETVAL=$?
    echo
    [ $RETVAL = 0 ] && rm -f $PID_FILE $LOCK_FILE
}

case "$1" in
    start) start ;;
    stop) stop ;;
    status)
        status -p $PID_FILE $DAEMON
        RETVAL=$?
        ;;
    restart)
        stop
        start
        ;;
    reload) reload ;;
    *)
        # Invalid Arguments, print the following message.
        echo "Usage: $0 {start|stop|status|restart}" >&2
        exit 2
        ;;
esac
```
``` bash
# Make the script executable
chmod +x /etc/rc.d/init.d/kibana

# Start the kibana service
service kibana start
service kibana status

# Check listening ports
netstat -nltp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address  Foreign Address  State   PID/Program name
tcp   0      0      0.0.0.0:9200   0.0.0.0:*        LISTEN  1765/java
tcp   0      0      0.0.0.0:9300   0.0.0.0:*        LISTEN  1765/java
tcp   0      0      0.0.0.0:22     0.0.0.0:*        LISTEN  1509/sshd
tcp   0      0      0.0.0.0:5601   0.0.0.0:*        LISTEN  1876/node
tcp   0      0      :::22          :::*             LISTEN  1509/sshd
```

Logstash
``` bash
# Download the rpm package
wget https://download.elastic.co/logstash/logstash/packages/centos/logstash-1.5.4-1.noarch.rpm

# Install
yum localinstall logstash-1.5.4-1.noarch.rpm

# Set up SSL; the FQDN configured earlier is elk.ooxx.com
cd /etc/pki/tls

# Generic example
openssl req -x509 -batch -nodes -newkey rsa:2048 -keyout lumberjack.key -out lumberjack.crt -subj /CN=logstash.example.com

# The certificate actually used in this setup
openssl req -subj '/CN=elk.ooxx.com/' -x509 -days 3650 -batch -nodes -newkey rsa:2048 -keyout private/logstash-forwarder.key -out certs/logstash-forwarder.crt
```
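Before wiring the certificate into logstash and the forwarders, it is worth confirming the CN. The sketch below repeats the generation in a temporary directory, so the real /etc/pki files are untouched, and prints the subject; it assumes the `openssl` binary is available.

``` bash
# Sketch: generate a throwaway self-signed cert and verify its subject CN.
tmp=$(mktemp -d)
openssl req -subj '/CN=elk.ooxx.com/' -x509 -days 3650 -batch -nodes \
  -newkey rsa:2048 -keyout "$tmp/lf.key" -out "$tmp/lf.crt" 2>/dev/null
subject=$(openssl x509 -in "$tmp/lf.crt" -noout -subject)
echo "$subject"
rm -rf "$tmp"
```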
Create a file named 01-logstash-initial.conf:

``` bash
cat > /etc/logstash/conf.d/01-logstash-initial.conf << EOF
input {
  lumberjack {
    port => 5000
    type => "logs"
    ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
    ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
  }
}

filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    syslog_pri { }
    date {
      match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}

output {
  elasticsearch { host => localhost }
  stdout { codec => rubydebug }
}
EOF
```
``` bash
# Start the logstash service
service logstash start
service logstash status

# Check port 5000
netstat -nltp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address  Foreign Address  State   PID/Program name
tcp   0      0      0.0.0.0:9200   0.0.0.0:*        LISTEN  1765/java
tcp   0      0      0.0.0.0:9300   0.0.0.0:*        LISTEN  1765/java
tcp   0      0      0.0.0.0:9301   0.0.0.0:*        LISTEN  2309/java
tcp   0      0      0.0.0.0:22     0.0.0.0:*        LISTEN  1509/sshd
tcp   0      0      0.0.0.0:5601   0.0.0.0:*        LISTEN  1876/node
tcp   0      0      0.0.0.0:5000   0.0.0.0:*        LISTEN  2309/java
tcp   0      0      :::22          :::*             LISTEN  1509/sshd

# On the client, start the forwarder (installed in the client section below)
service logstash-forwarder start
service logstash-forwarder status
```

Open Kibana in a browser and choose @timestamp as the Time-field name.

Adding more nodes works the same way as configuring a client; remember to copy the same certificate to every node:

/etc/pki/tls/certs/logstash-forwarder.crt
Client side

Logstash Forwarder

``` bash
# Log in to the client and install Logstash Forwarder
wget https://download.elastic.co/logstash-forwarder/binaries/logstash-forwarder-0.4.0-1.x86_64.rpm
yum localinstall logstash-forwarder-0.4.0-1.x86_64.rpm

# Locate the logstash-forwarder configuration file
rpm -qc logstash-forwarder
/etc/logstash-forwarder.conf

# Back up the configuration file
cp /etc/logstash-forwarder.conf /etc/logstash-forwarder.conf.save

# Edit /etc/logstash-forwarder.conf; adjust the values for your environment
cat > /etc/logstash-forwarder.conf << EOF
{
  "network": {
    "servers": [ "elk.ooxx.com:5000" ],
    "ssl ca": "/etc/pki/tls/certs/logstash-forwarder.crt",
    "timeout": 15
  },
  "files": [
    {
      "paths": [ "/var/log/messages", "/var/log/secure" ],
      "fields": { "type": "syslog" }
    }
  ]
}
EOF
```
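A malformed config will keep the forwarder from starting, so a quick JSON syntax check before deploying helps. A sketch using a temp file and Python's stdlib `json.tool` (it assumes `python3` is on the PATH; the config body mirrors the one above):

``` bash
# Sketch: validate forwarder JSON before copying it to /etc/logstash-forwarder.conf.
cfg=$(mktemp)
cat > "$cfg" << 'EOF'
{
  "network": {
    "servers": [ "elk.ooxx.com:5000" ],
    "ssl ca": "/etc/pki/tls/certs/logstash-forwarder.crt",
    "timeout": 15
  },
  "files": [
    { "paths": [ "/var/log/messages", "/var/log/secure" ],
      "fields": { "type": "syslog" } }
  ]
}
EOF
if python3 -m json.tool < "$cfg" > /dev/null 2>&1; then
    json_ok=yes
    echo "forwarder config JSON OK"
fi
rm -f "$cfg"
```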
Configure the Nginx log policy

``` bash
# Edit the client configuration
vi /etc/logstash-forwarder.conf
```

```
{
  "network": {
    "servers": [ "elk.ooxx.com:5000" ],
    "ssl ca": "/etc/pki/tls/certs/logstash-forwarder.crt",
    "timeout": 15
  },
  "files": [
    {
      "paths": [ "/var/log/messages", "/var/log/secure" ],
      "fields": { "type": "syslog" }
    },
    {
      "paths": [ "/app/local/nginx/logs/access.log" ],
      "fields": { "type": "nginx" }
    }
  ]
}
```
Add a patterns file on the server:

``` bash
mkdir /opt/logstash/patterns
vi /opt/logstash/patterns/nginx
```

```
NGUSERNAME [a-zA-Z\.\@\-\+_%]+
NGUSER %{NGUSERNAME}
NGINXACCESS %{IPORHOST:remote_addr} - - \[%{HTTPDATE:time_local}\] "%{WORD:method} %{URIPATH:path}(?:%{URIPARAM:param})? HTTP/%{NUMBER:httpversion}" %{INT:status} %{INT:body_bytes_sent} %{QS:http_referer} %{QS:http_user_agent}
```
The official online grok debugger is handy for testing patterns:

https://grokdebug.herokuapp.com/
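A similar check can be run offline. The ERE below is a hand-written simplification of the NGINXACCESS layout (not the grok pattern itself), applied to a made-up combined-format log line:

``` bash
# Sketch: confirm a sample Nginx access-log line has the shape NGINXACCESS expects.
line='1.2.3.4 - - [31/Aug/2015:10:00:00 +0800] "GET /index.html HTTP/1.1" 200 612 "-" "curl/7.29.0"'
re='^[0-9.]+ - - \[[^]]+\] "[A-Z]+ [^ ]+ HTTP/[0-9.]+" [0-9]+ [0-9]+ "[^"]*" "[^"]*"$'
if echo "$line" | grep -Eq "$re"; then
    match=yes
    echo "sample line matches"
fi
```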
``` bash
# Give logstash ownership of the patterns directory
chown -R logstash:logstash /opt/logstash/patterns

# Update the server configuration
vi /etc/logstash/conf.d/01-logstash-initial.conf
```

```
input {
  lumberjack {
    port => 5000
    type => "logs"
    ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
    ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
  }
}

filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    syslog_pri { }
    date {
      match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
  if [type] == "nginx" {
    grok {
      match => { "message" => "%{NGINXACCESS}" }
    }
  }
}

output {
  elasticsearch { host => localhost }
  stdout { codec => rubydebug }
}
```
Other notes

Changing the Kibana port
``` bash
# Edit kibana.yml
vi /usr/local/kibana/config/kibana.yml
```

```
# Kibana is served by a back end server. This controls which port to use.
# port: 5601
port: 80

# The host to bind the server to.
host: "0.0.0.0"

# The Elasticsearch instance to use for all your queries.
elasticsearch_url: "http://localhost:9200"

# preserve_elasticsearch_host true will send the hostname specified in `elasticsearch`.
# If you set it to false, then the host you use to connect to this Kibana
# instance will be sent.
elasticsearch_preserve_host: true

# Kibana uses an index in Elasticsearch to store saved searches, visualizations
# and dashboards. It will create a new index if it doesn't already exist.
kibana_index: ".kibana"

# If your Elasticsearch is protected with basic auth, this is the user credentials
# used by the Kibana server to perform maintenance on the kibana_index at startup. Your Kibana
# users will still need to authenticate with Elasticsearch (which is proxied through
# the Kibana server)
# kibana_elasticsearch_username: user
# kibana_elasticsearch_password: pass

# If your Elasticsearch requires client certificate and key
# kibana_elasticsearch_client_crt: /path/to/your/client.crt
# kibana_elasticsearch_client_key: /path/to/your/client.key

# If you need to provide a CA certificate for your Elasticsearch instance, put
# the path of the pem file here.
# ca: /path/to/your/CA.pem

# The default application to load.
default_app_id: "discover"

# Time in milliseconds to wait for elasticsearch to respond to pings, defaults to
# request_timeout setting
# ping_timeout: 1500

# Time in milliseconds to wait for responses from the back end or elasticsearch.
# This must be > 0
request_timeout: 300000

# Time in milliseconds for Elasticsearch to wait for responses from shards.
# Set to 0 to disable.
shard_timeout: 0

# Time in milliseconds to wait for Elasticsearch at Kibana startup before retrying
# startup_timeout: 5000

# Set to false to have a complete disregard for the validity of the SSL
# certificate.
verify_ssl: true

# SSL for outgoing requests from the Kibana Server (PEM formatted)
# ssl_key_file: /path/to/your/server.key
# ssl_cert_file: /path/to/your/server.crt

# Set the path to where you would like the process id file to be created.
# pid_file: /var/run/kibana.pid

# If you would like to send the log output to a file you can set the path below.
# This will also turn off the STDOUT log output.
# log_file: ./kibana.log

# Plugins that are included in the build, and no longer found in the plugins/ folder
bundled_plugin_ids:
```
The JVM heap defaults referred to in the JVM tuning section live in elasticsearch.in.sh:

``` bash
if [ "x$ES_MIN_MEM" = "x" ]; then
    ES_MIN_MEM=1g
fi
if [ "x$ES_MAX_MEM" = "x" ]; then
    ES_MAX_MEM=1g
fi
```
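A common rule of thumb (my addition, not from the original post) is to give the Elasticsearch heap about half of the machine's RAM. A sketch that computes such a value on Linux by reading /proc/meminfo:

``` bash
# Sketch: suggest an ES heap of half the total RAM, in megabytes (Linux only).
mem_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
heap_mb=$((mem_kb / 1024 / 2))
echo "suggested heap: ES_MIN_MEM=${heap_mb}m ES_MAX_MEM=${heap_mb}m"
```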