How to Build a Container-Based Local Monitoring System



Docker is extremely popular right now; how to put it to use and integrate it into daily work is a question worth considering. This article walks through the concrete steps of a test setup that can be run on a desktop machine with a fairly large amount of memory (16 GB recommended). Its value lies mainly in experimentation and hands-on practice; whether it applies to real production scenarios is left for experts to discuss.

The test topology is shown below:

[Figure: test topology diagram]

The functions of the individual Docker modules are as follows:

- Flume: collects log data (three Flume containers are started in this article)

  - The first collects /var/log/messages from the local host and sends it directly to Elasticsearch.

  - The second collects /var/log/messages from the local host and publishes the log stream to the Kafka middleware.

  - The third consumes the log stream from Kafka and sends it to Elasticsearch.

  - There is also an unimplemented possibility: consume the log stream from Kafka and write it to HDFS for later Hadoop analysis.

- The output of docker ps shows the actual set of containers running in CentOS 7 that together carry out the tasks above.

Configure the Docker Hub registry accelerator (DaoCloud mirror)

http://dashboard.daocloud.io/mirror

For CentOS:

- sudo sed -i 's|other_args=|other_args=--registry-mirror=http://4c5cf935.m.daocloud.io |g' /etc/sysconfig/docker

- sudo sed -i "s|OPTIONS='|OPTIONS='--registry-mirror=http://4c5cf935.m.daocloud.io |g" /etc/sysconfig/docker

- sudo service docker restart
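Only one of the two sed lines will match, depending on how your Docker package names the variable. To confirm the mirror actually landed in the file (a quick check, not part of the original):

    grep -- --registry-mirror /etc/sysconfig/docker   # should print the edited line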

Install docker

- https://docs.docker.com/installation/centos

- yum -y update (make sure the kernel is >= 3.10.0-229.el7.x86_64)

- curl -sSL https://get.docker.com/ | sh

  - This script adds the docker.repo repository and installs Docker.

- yum -y install docker-selinux

- systemctl start docker.service
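To verify the installation, the usual smoke test can be run (not in the original):

    docker version           # client and daemon versions should both print
    docker run hello-world   # pulls and runs a trivial test container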

Install docker-compose

? https://docs.docker.com/compose/install

- curl -L https://github.com/docker/compose/releases/download/<version>/docker-compose-$(uname -s)-$(uname -m) > /usr/local/bin/docker-compose (the release version is garbled in the original; substitute a concrete version from the releases page)

- chmod +x /usr/local/bin/docker-compose
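A quick check that the binary works (not in the original):

    docker-compose --version   # should print the installed Compose version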

docker-compose kafka

- https://github.com/wurstmeister/kafka-docker

- Under the kafka-docker-master directory:

  - Modify KAFKA_ADVERTISED_HOST_NAME in docker-compose.yml to match your Docker host IP (note: do not use localhost or 127.0.0.1 as the host IP if you want to run multiple brokers); see the compose sketch after this list.

  - Start a cluster: docker-compose up -d

  - Add more brokers: docker-compose scale kafka=2 (no fewer than the replication factor used below)

  - Destroy a cluster: docker-compose stop

  - Monitor the logs: docker-compose logs

  - To see the containers' IPs and ports:

    - systemctl status docker.service

  - ./start-kafka-shell.sh <docker_host_ip> <zk_host:zk_port>

  - <container1># $KAFKA_HOME/bin/kafka-topics.sh --create --topic topic --partitions 4 --zookeeper $ZK --replication-factor 2 (the replication factor must match the number of Kafka brokers started above)

  - <container1># $KAFKA_HOME/bin/kafka-topics.sh --list --zookeeper $ZK

  - <container1># $KAFKA_HOME/bin/kafka-topics.sh --describe --topic topic --zookeeper $ZK

  - <container1># $KAFKA_HOME/bin/kafka-console-producer.sh --topic=topic --broker-list=$(broker-list.sh)

  - ./start-kafka-shell.sh <docker_host_ip> <zk_host:zk_port> (in a second shell)

  - <container2># $KAFKA_HOME/bin/kafka-console-consumer.sh --topic=topic --zookeeper=$ZK --from-beginning

  - Troubleshooting: http://wurstmeister.github.io/kafka-docker/
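For reference, the repository's docker-compose.yml looks roughly like the sketch below (Compose v1 syntax of that era; the host IP is a placeholder you must replace, and minor details may differ from the actual file):

    zookeeper:
      image: wurstmeister/zookeeper
      ports:
        - "2181"
    kafka:
      image: wurstmeister/kafka
      ports:
        - "9092"                  # no fixed host port, so several brokers can be scaled up
      links:
        - zookeeper:zk
      environment:
        KAFKA_ADVERTISED_HOST_NAME: 192.168.1.100    # placeholder: your Docker host IP
      volumes:
        - /var/run/docker.sock:/var/run/docker.sock  # lets the image discover its mapped port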

Configure Elasticsearch

- docker pull elasticsearch:latest

- mkdir /mnt/isilon

- mount isilon.mini:/ifs/hdfs /mnt/isilon

- docker run -d -p 9200:9200 -p 9300:9300 -v /mnt/isilon/elasticsearch:/data -v /mnt/isilon/elasticsearch/conf/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml elasticsearch

  - This mounts the directory and file above into the container.

  - The effective configuration files live under /usr/share/elasticsearch/config (a minimal sketch of elasticsearch.yml follows this list).

  - Only one Elasticsearch container can be started per physical host, since host ports 9200/9300 can only be bound once.

- systemctl status docker.service (to get the Elasticsearch container IP)

- http://<elasticsearch_ip>:9200 or http://<host_IP>:9200

- docker exec -it <elasticsearch_container_ID> /bin/bash

  - cd /usr/share/elasticsearch/plugins

  - /usr/share/elasticsearch/bin/plugin --install mobz/elasticsearch-head

  - /usr/share/elasticsearch/bin/plugin --install lukas-vlcek/bigdesk

  - http://<host_IP>:9200/_plugin/bigdesk

  - http://<host_IP>:9200/_plugin/head

- docker exec -it <es_container_ID> /bin/bash

- cp -r /usr/share/elasticsearch/lib/* /data/lib

  - Used later so the Flume containers can load the Elasticsearch client libraries.
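The mounted elasticsearch.yml is not reproduced in the original; a minimal sketch could look like this (all values are assumptions; the cluster name must match what the Flume sink uses later):

    # /mnt/isilon/elasticsearch/conf/elasticsearch.yml -- assumed minimal config
    cluster.name: elasticsearch   # Flume's ElasticSearchSink must use the same cluster name
    path.data: /data              # keep indices on the mounted Isilon volume
    network.host: 0.0.0.0         # listen on all interfaces inside the container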

Configure Kibana

- docker pull kibana

- docker run --link <elasticsearch_container_name> -d kibana

  - With this default form, port 5601 is only reachable from inside the container network.

- docker run --link <elasticsearch_container_name> -d kibana --plugins /somewhere/else

  - Additional arguments can be passed through this way.

- docker run --name kibana --link <elasticsearch_container_name> -p 5601:5601 -d kibana

  - This publishes port 5601 so it is reachable through the host IP, but Kibana may fail to resolve the hostname (e.g. localhost) it uses to reach Elasticsearch, which causes problems. The next form is recommended.

- docker run --name kibana -e ELASTICSEARCH_URL=http://<host_IP>:9200 -p 5601:5601 -d kibana

  - netstat -tupln | grep 5601

  - docker logs <kibana_container_ID>

  - http://<host_IP>:5601
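As a quick sanity check from the host (not in the original):

    curl http://<host_IP>:9200      # Elasticsearch should answer with its JSON banner
    curl -I http://<host_IP>:5601   # Kibana should answer on the published port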

Configure Flume ----- watch a log file and send to Elasticsearch

- docker pull probablyfine/flume (the latest flume-ng is version 1.6.0)

- cat /mnt/isilon/config/flume_log2es.conf

  - Refer to step 12A of the flume-ng section in the 《配置ELK》 article; a sketch of such a configuration follows this list.

- docker run -e FLUME_AGENT_NAME=log2es -v /mnt/isilon:/data -v /var/log/messages:/var/log/messages -e FLUME_CONF_FILE=/data/config/flume_log2es.conf -d probablyfine/flume

  - For debugging purposes, the host's /var/log/messages is mounted into the Flume container.

  - flume_log2es.conf watches /var/log/messages for changes; this can be changed to any log inside the container you are interested in.

- docker exec -it <flume_container_ID> /bin/bash

  - cp -r /data/elasticsearch/lib/* /opt/flume/lib

    - Copies the Elasticsearch library files saved earlier into Flume's lib directory.

- docker logs <flume_container_ID>

  - docker stop <flume_container_ID>

  - docker start <flume_container_ID>, then check the status with the logs command again.
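The 12A configuration referenced above is not reproduced in this article. A minimal sketch of such a tail-to-Elasticsearch agent might look like the following (the agent name must match FLUME_AGENT_NAME=log2es; the index name logstash matches the screenshots later, but host, ports, and capacities are assumptions):

    # /mnt/isilon/config/flume_log2es.conf -- assumed sketch, not the original 12A file
    log2es.sources = tailsrc
    log2es.channels = memch
    log2es.sinks = essink

    # tail the syslog file mounted into the container
    log2es.sources.tailsrc.type = exec
    log2es.sources.tailsrc.command = tail -F /var/log/messages
    log2es.sources.tailsrc.channels = memch

    log2es.channels.memch.type = memory
    log2es.channels.memch.capacity = 1000

    # ship events to Elasticsearch over the transport port (9300)
    log2es.sinks.essink.type = org.apache.flume.sink.elasticsearch.ElasticSearchSink
    log2es.sinks.essink.hostNames = <host_IP>:9300
    log2es.sinks.essink.indexName = logstash
    log2es.sinks.essink.clusterName = elasticsearch
    log2es.sinks.essink.channel = memch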

Configure Flume ----- watch a log file and send to Kafka

- docker pull probablyfine/flume

- yum -y install maven

  - Needed to build the flume-ng-kafka library.

- https://github.com/jinoos/flume-ng-extends , download the zip file

- unzip flume-ng-extends-source-master.zip

- cd flume-ng-extends-source-master

- mvn clean package

- mkdir -p /mnt/isilon/kafka/lib

- cp target/flume-ng-extends-source-0.8.0.jar /mnt/isilon/kafka/lib

- cat /mnt/isilon/config/flume_kafka_producer.conf

  - Refer to step 12B of the flume-ng section in the 《配置ELK》 article; a sketch follows this list.

- docker run -e FLUME_AGENT_NAME=kfk_pro -v /mnt/isilon:/data -v /var/log/messages:/var/log/messages -e FLUME_CONF_FILE=/data/config/flume_kafka_producer.conf -d probablyfine/flume

- docker exec -it <kfk_pro_container_ID> /bin/bash

  - cp -r /data/elasticsearch/lib/* /opt/flume/lib

  - cp -r /data/kafka/lib/* /opt/flume/lib

- docker stop <kfk_pro_container_ID>

- docker start <kfk_pro_container_ID>

  - docker logs <kfk_pro_container_ID>

  - Refer to "docker-compose kafka" above for the topic list and consumer commands.
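A sketch of what the 12B producer configuration could look like, here using Flume 1.6's built-in Kafka sink (topic name and broker address are assumptions; the original may rely on the flume-ng-extends classes built above instead):

    # /mnt/isilon/config/flume_kafka_producer.conf -- assumed sketch, not the original 12B file
    kfk_pro.sources = tailsrc
    kfk_pro.channels = memch
    kfk_pro.sinks = kafkasink

    kfk_pro.sources.tailsrc.type = exec
    kfk_pro.sources.tailsrc.command = tail -F /var/log/messages
    kfk_pro.sources.tailsrc.channels = memch

    kfk_pro.channels.memch.type = memory

    # publish each log line to a Kafka topic
    kfk_pro.sinks.kafkasink.type = org.apache.flume.sink.kafka.KafkaSink
    kfk_pro.sinks.kafkasink.brokerList = <docker_host_ip>:9092
    kfk_pro.sinks.kafkasink.topic = topic
    kfk_pro.sinks.kafkasink.channel = memch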

- cat /mnt/isilon/config/flume_kafka_consumer.conf

  - Refer to step 12C of the flume-ng section in the 《配置ELK》 article; a sketch follows this list.

- docker run -e FLUME_AGENT_NAME=kfk_con -v /mnt/isilon:/data -e FLUME_CONF_FILE=/data/config/flume_kafka_consumer.conf -d probablyfine/flume

- docker exec -it <kfk_con_container_ID> /bin/bash

  - cp -r /data/elasticsearch/lib/* /opt/flume/lib

  - cp -r /data/kafka/lib/* /opt/flume/lib

- docker stop <kfk_con_container_ID>

- docker start <kfk_con_container_ID>

  - docker logs <kfk_con_container_ID>

  - Refer to "docker-compose kafka" above for the topic list and consumer commands.
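Likewise, a 12C consumer configuration could look roughly like this, with Flume 1.6's built-in Kafka source feeding the Elasticsearch sink (the index name es-index matches the screenshots later; everything else is an assumption):

    # /mnt/isilon/config/flume_kafka_consumer.conf -- assumed sketch, not the original 12C file
    kfk_con.sources = kafkasrc
    kfk_con.channels = memch
    kfk_con.sinks = essink

    # consume the topic written by the producer agent
    kfk_con.sources.kafkasrc.type = org.apache.flume.source.kafka.KafkaSource
    kfk_con.sources.kafkasrc.zookeeperConnect = <docker_host_ip>:2181
    kfk_con.sources.kafkasrc.topic = topic
    kfk_con.sources.kafkasrc.groupId = flume
    kfk_con.sources.kafkasrc.channels = memch

    kfk_con.channels.memch.type = memory

    kfk_con.sinks.essink.type = org.apache.flume.sink.elasticsearch.ElasticSearchSink
    kfk_con.sinks.essink.hostNames = <host_IP>:9300
    kfk_con.sinks.essink.indexName = es-index
    kfk_con.sinks.essink.clusterName = elasticsearch
    kfk_con.sinks.essink.channel = memch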

Kibana output

[Figure: Bigdesk plugin output]

[Figure: Elasticsearch indices]

The logstash logs are generated by the Flume agent configured in step 12A.

The es-index logs are generated by the Flume agents configured in steps 12B/12C (via Kafka).

[Figure: time-series log information received through Kibana]

[Figure: overall system status]


Stop all Docker containers:

- docker ps | awk 'NR>1 {print $1}' | xargs -t -I {} docker stop {}

Start all Docker containers:

- docker ps -a | awk 'NR>1 {print $1}' | xargs -t -I {} docker start {}
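A shorter equivalent uses Docker's quiet mode (an alternative, not from the original):

    docker stop $(docker ps -q)     # stop every running container
    docker start $(docker ps -aq)   # start every container, running or stopped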
