Building a Real-Time Log Collection System with ElasticSearch, Logstash, and Kibana

jopen · published 9 years ago | 40K reads · Log Processing

Introduction

  • In this system, Logstash collects and processes log file contents and stores them in the ElasticSearch search-engine database; Kibana queries ElasticSearch and presents the results on the web.
  • After the Logstash shipper process harvests log file contents, it first writes them to a redis buffer; a separate Logstash indexer process reads from redis and stores the entries in ElasticSearch. This decouples the fast producer side from the slower write side.
  • Official online documentation: https://www.elastic.co/guide/index.html

    1. Install JDK 7

    • ElasticSearch and Logstash are both Java programs, so a JDK is required.
      Note that for inter-node communication all nodes must run the same JDK version; otherwise connections may fail.

    • Download jdk-7u71-linux-x64.rpm:
      http://www.oracle.com/technetwork/java/javase/downloads/jdk7-downloads-1880260.html

    • rpm -ivh jdk-7u71-linux-x64.rpm

    • Configure the JDK
      Edit /etc/profile and add at the top:

      export JAVA_HOME=/usr/java/jdk1.7.0_71
      export JRE_HOME=$JAVA_HOME/jre
      export CLASSPATH=$JAVA_HOME/lib:$JRE_HOME/lib:$CLASSPATH
      export PATH=$JAVA_HOME/bin:$JRE_HOME/bin:$PATH
    • Verify the JDK environment
      Run source /etc/profile so the variables take effect immediately.
      Check the installed JDK version with java -version.
      Check the environment variables with echo $PATH.
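    • Since mismatched JDK versions across nodes can break inter-node communication, it can be worth scripting the version check. A minimal hedged sketch (the function and sample strings are illustrative, not from the original post):

```shell
# is_jdk7 inspects the first line of `java -version` output and succeeds
# only for a 1.7.x JDK, so a startup script can refuse to launch
# ElasticSearch/Logstash on the wrong JDK.
is_jdk7() {
  case "$1" in
    *'version "1.7.'*) return 0 ;;
    *) return 1 ;;
  esac
}

# Live usage would be: is_jdk7 "$(java -version 2>&1 | head -n 1)"
if is_jdk7 'java version "1.7.0_71"'; then
  echo "JDK 7 detected"
fi
```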

    2. Install ElasticSearch

    • Edit config/elasticsearch.yml:

    bootstrap.mlockall: true
    
    index.number_of_shards: 1
    index.number_of_replicas: 0
    
    #index.translog.flush_threshold_ops: 100000
    #index.refresh_interval: -1
    index.translog.flush_threshold_ops: 5000
    index.refresh_interval: 1  
    
    network.bind_host: 172.16.18.114     
    
    # IP address published to other nodes for inter-node communication.
    # If unset, ES picks one itself; it may choose an address the other
    # nodes cannot reach, and inter-node communication will then fail.
    network.publish_host: 172.16.18.114                                          
    
    # Security: allow all HTTP (CORS) requests
    http.cors.enabled: true
    http.cors.allow-origin: "/.*/"    
    • Edit bin/elasticsearch:
    # Let the JVM use the OS max-open-files limit
    es_parms="-Delasticsearch -Des.max-open-files=true"
    
    # Start up the service
    # Raise the OS limits on open files and locked memory
    ulimit -n 1000000
    ulimit -l unlimited
    launch_service "$pidfile" "$daemonized" "$properties"
    • Edit bin/elasticsearch.in.sh:
    ......
    
    if [ "x$ES_MIN_MEM" = "x" ]; then
        ES_MIN_MEM=256m
    fi
    if [ "x$ES_MAX_MEM" = "x" ]; then
        ES_MAX_MEM=1g
    fi
    if [ "x$ES_HEAP_SIZE" != "x" ]; then
        ES_MIN_MEM=$ES_HEAP_SIZE
        ES_MAX_MEM=$ES_HEAP_SIZE
    fi
    
    # force min heap to 2g
    ES_MIN_MEM=2g
    # force max heap to 2g
    ES_MAX_MEM=2g
    
    ......
    • Run
      ./bin/elasticsearch -d
      Log files are written under ./logs.

    • Check node status
      curl -XGET 'http://localhost:9200/_nodes?os=true&process=true&pretty=true'

      {
        "cluster_name" : "elasticsearch",
        "nodes" : {
          "7PEaZbvxToCL2O2KuMGRYQ" : {
            "name" : "Gertrude Yorkes",
            "transport_address" : "inet[/172.16.18.116:9300]",
            "host" : "casimbak",
            "ip" : "172.16.18.116",
            "version" : "1.4.4",
            "build" : "c88f77f",
            "http_address" : "inet[/172.16.18.116:9200]",
            "settings" : {
              "index": {
                  "number_of_replicas": "0",
                  "translog": {
                      "flush_threshold_ops": "5000"
                  },
                  "number_of_shards": "1",
                  "refresh_interval": "1"
              },      
              "path" : {
                "logs" : "/home/jfy/soft/elasticsearch-1.4.4/logs",
                "home" : "/home/jfy/soft/elasticsearch-1.4.4"
              },
              "cluster" : {
                "name" : "elasticsearch"
              },
              "bootstrap" : {
                "mlockall" : "true"
              },
              "client" : {
                "type" : "node"
              },
              "http" : {
                "cors" : {
                  "enabled" : "true",
                  "allow-origin" : "/.*/"
                }
              },
              "foreground" : "yes",
              "name" : "Gertrude Yorkes",
              "max-open-files" : "true"
            },
            "process" : {
              "refresh_interval_in_millis" : 1000,
              "id" : 13896,
              "max_file_descriptors" : 1000000,
              "mlockall" : true
            },
      
            ...
      
          }
        }
      }
    • This shows ElasticSearch is running and its status matches the configuration:

              "index": {
                  "number_of_replicas": "0",
                  "translog": {
                      "flush_threshold_ops": "5000"
                  },
                  "number_of_shards": "1",
                  "refresh_interval": "1"
              }, 
      
            "process" : {
              "refresh_interval_in_millis" : 1000,
              "id" : 13896,
              "max_file_descriptors" : 1000000,
              "mlockall" : true
            },
    • Install the head plugin to browse and operate ElasticSearch
      elasticsearch/bin/plugin -install mobz/elasticsearch-head
      http://172.16.18.116:9200/_plugin/head/

    • Install the marvel plugin to monitor ElasticSearch status
      elasticsearch/bin/plugin -i elasticsearch/marvel/latest
      http://172.16.18.116:9200/_plugin/marvel/

    3. Install Logstash

    • Logstash is a log collection, processing, and filtering program.

    • Logstash runs as two kinds of processes: a shipper (collector) and an indexer (processor). The shipper tails multiple log files and writes their contents to a redis queue in real time; the indexer reads from the redis queue and writes the entries into ElasticSearch for storage. Shipper processes run on the servers that produce the log files; the indexer process runs on the same server as redis and ElasticSearch.

    • Download
      wget https://download.elasticsearch.org/logstash/logstash/logstash-1.4.2.tar.gz

    • redis installation and configuration are omitted here, but do monitor the redis queue length: if entries pile up for a long time, something is wrong on the ElasticSearch side.
      Check the length of the redis list every 2 s, 100 times:
      redis-cli -r 100 -i 2 llen logstash:redis
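    • That check can be turned into a simple alarm. A hedged sketch (the 10000 threshold and the helper name are assumptions, tune them to your log volume):

```shell
# check_backlog compares the current length of the logstash:redis list
# against a threshold and prints OK or WARN accordingly. A growing WARN
# backlog usually means the indexer or ElasticSearch has stalled.
QUEUE_KEY="logstash:redis"
THRESHOLD=10000

check_backlog() {  # $1 = current list length
  if [ "$1" -gt "$THRESHOLD" ]; then
    echo "WARN: $QUEUE_KEY backlog is $1 (> $THRESHOLD)"
  else
    echo "OK: $QUEUE_KEY backlog is $1"
  fi
}

# Live usage (requires redis-cli and a running redis):
#   check_backlog "$(redis-cli llen $QUEUE_KEY)"
check_backlog 42
```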

    • Configure the Logstash shipper (collection) process
      vi ./lib/logstash/config/shipper.conf

    input {
        #file {
        #    type => "mysql_log"
        #    path => "/usr/local/mysql/data/localhost.log"
        #    codec => plain{
        #        charset => "GBK"
        #    }
        #}
        file {
            type => "hostapd_log"
            path => "/root/hostapd/hostapd.log"
            sincedb_path => "/home/jfy/soft/logstash-1.4.2/sincedb_hostapd.access"
            #start_position => "beginning"
            #http://logstash.net/docs/1.4.2/codecs/plain
            codec => plain{
                charset => "GBK"
            }
        }
        file {
            type => "hkt_log"
            path => "/usr1/app/log/bsapp.tr"
            sincedb_path => "/home/jfy/soft/logstash-1.4.2/sincedb_hkt.access"
            start_position => "beginning"
            codec => plain{
                charset => "GBK"
            }
        }
    #   stdin {
    #       type => "hostapd_log"
    #   }
    }
    
    #filter {
    #    grep {
    #        match => [ "@message", "mysql|GET|error" ]
    #    }
    #}
    
    output {
        redis {
            host => '172.16.18.116'
            data_type => 'list'
            key => 'logstash:redis'
    #        codec => plain{
    #            charset => "UTF-8"
    #        }
        }
    #    elasticsearch {
    #      #embedded => true
    #      host => "172.16.18.116"
    #    }
    }
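    • If you want to structure the raw lines before they are stored (the shipper above forwards them as plain text), a grok filter can sit between the input and output sections. This is an illustrative sketch only; the pattern below is an assumption and must be adapted to the actual hostapd/bsapp line format:

```
filter {
    grok {
        # Illustrative pattern, not taken from the original logs
        match => [ "message", "%{TIMESTAMP_ISO8601:ts} %{GREEDYDATA:msg}" ]
    }
}
```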
    • Run the shipper process
      ./bin/logstash agent -f ./lib/logstash/config/shipper.conf

    • Configure the Logstash indexer (processing) process
      vi ./lib/logstash/config/indexer.conf

      input {
        redis {
          host => '127.0.0.1'
          data_type => 'list'
          key => 'logstash:redis'
          #threads => 10
          #batch_count => 1000
        }
      }
      
      output {
        elasticsearch {
          #embedded => true
          host => localhost
          #workers => 10
        }
      }
    • Run the indexer process
      ./bin/logstash agent -f ./lib/logstash/config/indexer.conf
      The indexer reads the buffered log entries from redis and writes them to ElasticSearch for storage.
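    • Once both processes run, you can confirm documents are actually landing in ElasticSearch by counting them. A hedged sketch: the live call (commented) assumes ES on localhost:9200 and the default logstash-* daily indices; a canned sample response is parsed below so the check itself is visible:

```shell
# Live call:
#   resp=$(curl -s 'http://localhost:9200/logstash-*/_count?q=type:hostapd_log')
# Sample of the JSON the _count API returns:
resp='{"count":128,"_shards":{"total":1,"successful":1,"failed":0}}'

# Extract the count field and report whether anything was indexed yet.
count=$(printf '%s' "$resp" | sed -n 's/.*"count":\([0-9]*\).*/\1/p')
if [ "$count" -gt 0 ]; then
  echo "indexed documents: $count"
else
  echo "no documents indexed yet"
fi
```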

    4. Install Kibana

    • Kibana is the web front end for the ElasticSearch search engine: a set of JavaScript files served from a web server. It lets you build complex query filters against ElasticSearch and display the results in multiple forms (tables, charts).

    • Download
      wget https://download.elasticsearch.org/kibana/kibana/kibana-3.1.2.tar.gz
      After extracting, place the kibana directory somewhere the web server can serve it from.

    • Configuration
      Edit kibana/config.js:

    If Kibana and ElasticSearch are not on the same machine, change:
    elasticsearch: "http://192.168.91.128:9200",
    # the browser connects directly to ElasticSearch at this address

    Otherwise keep the default; do not change it.

    If you get "connection failed", edit elasticsearch/config/elasticsearch.yml and add:

    http.cors.enabled: true  
    http.cors.allow-origin: "/.*/"

    For details see:
    http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/modules-http.html
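    You can verify that the CORS settings took effect without opening Kibana. A hedged sketch: the live call (commented) assumes ES on localhost:9200, and kibana.example.com is a placeholder origin; sample headers are used below so the check itself is runnable:

```shell
# Live call: a browser-style request carrying an Origin header should be
# answered with an Access-Control-Allow-Origin header when CORS is enabled:
#   curl -s -D - -o /dev/null -H 'Origin: http://kibana.example.com' \
#        'http://localhost:9200/'
# Sample of the response headers we expect:
headers='HTTP/1.1 200 OK
Access-Control-Allow-Origin: http://kibana.example.com
Content-Type: application/json'

# grep for the CORS header, case-insensitively.
if printf '%s\n' "$headers" | grep -qi '^access-control-allow-origin:'; then
  echo "CORS header present"
fi
```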

Source: http://blog.csdn.net/jiao_fuyou/article/details/46694125
