Using Consul for Distributed Service Registration and Discovery
Consul is an open-source tool from HashiCorp for service discovery and configuration in distributed systems. Compared with other service registration and discovery solutions, such as Airbnb's SmartStack, Consul is more of a one-stop offering: it has a built-in service registration and discovery framework, a distributed-consensus protocol implementation, health checking, a Key/Value store, and multi-datacenter support, so it does not need to rely on other tools (such as ZooKeeper), and it is relatively easy to use. Consul is implemented in Go and is therefore naturally portable (Linux, Windows, and Mac OS X are supported); the distribution contains only a single executable, which makes deployment easy and lets it work seamlessly with lightweight containers such as Docker.
This article is an introductory look at Consul, with examples showing how to use it for service registration and discovery.
1. Building a Consul Cluster
To use Consul for service registration and discovery, we first need to build a Consul Cluster. In Consul's design, a Consul agent is deployed and run on every node that provides services, and the set of all nodes running a Consul agent makes up the Consul Cluster. A Consul agent runs in one of two modes: Server or Client. Server and Client here are distinctions at the Consul cluster level only and have nothing to do with the application services built on top of the cluster. Agents running in Server mode maintain the state of the Consul cluster; the official recommendation is to run at least three Server mode agents per Consul Cluster, while the number of Client nodes is unrestricted.
Within each datacenter, the Consul Cluster elects a Leader from the agents running in server mode. The election is handled by Consul's Raft implementation, and the data held across the server nodes is strongly consistent. Agents running in client mode are much simpler: they are stateless and merely forward requests to the server agents.
Let's now build an experimental Consul Cluster.
The test environment and node roles are as follows:
n1(Ubuntu 14.04 x86_64): 10.10.105.71 server mode
n2(Ubuntu 12.04 x86_64): 10.10.126.101 server mode with Consul Web UI
n3(Ubuntu 9.04 i386): 10.10.126.187 client mode
Download and install the Consul package on each of the three hosts; the package is minimal, containing only a single executable, consul. On n2, also download the Consul Web UI package, which provides a graphical view of the node and service status of the Consul cluster.
The Consul Cluster is started as follows:
On n1:
$ consul agent -server -bootstrap-expect 2 -data-dir /tmp/consul -node=n1 -bind=10.10.105.71 -dc=dc1
==> WARNING: Expect Mode enabled, expecting 2 servers
==> WARNING: It is highly recommended to set GOMAXPROCS higher than 1
==> Starting Consul agent…
==> Starting Consul agent RPC…
==> Consul agent running!
Node name: 'n1'
Datacenter: 'dc1'
Server: true (bootstrap: false)
Client Addr: 127.0.0.1 (HTTP: 8500, HTTPS: -1, DNS: 8600, RPC: 8400)
Cluster Addr: 10.10.105.71 (LAN: 8301, WAN: 8302)
Gossip encrypt: false, RPC-TLS: false, TLS-Incoming: false
Atlas:
==> Log data will now stream in as it occurs:
2015/07/03 09:18:25 [INFO] serf: EventMemberJoin: n1 10.10.105.71
2015/07/03 09:18:25 [INFO] serf: EventMemberJoin: n1.dc1 10.10.105.71
2015/07/03 09:18:25 [INFO] raft: Node at 10.10.105.71:8300 [Follower] entering Follower state
2015/07/03 09:18:25 [INFO] consul: adding server n1 (Addr: 10.10.105.71:8300) (DC: dc1)
2015/07/03 09:18:25 [INFO] consul: adding server n1.dc1 (Addr: 10.10.105.71:8300) (DC: dc1)
2015/07/03 09:18:25 [ERR] agent: failed to sync remote state: No cluster leader
2015/07/03 09:18:26 [WARN] raft: EnableSingleNode disabled, and no known peers. Aborting election.
On n2:
$ consul agent -server -bootstrap-expect 2 -data-dir /tmp/consul -node=n2 -bind=10.10.126.101 -ui-dir ./dist -dc=dc1
==> WARNING: Expect Mode enabled, expecting 2 servers
==> WARNING: It is highly recommended to set GOMAXPROCS higher than 1
==> Starting Consul agent…
==> Starting Consul agent RPC…
==> Consul agent running!
Node name: 'n2'
Datacenter: 'dc1'
Server: true (bootstrap: false)
Client Addr: 127.0.0.1 (HTTP: 8500, HTTPS: -1, DNS: 8600, RPC: 8400)
Cluster Addr: 10.10.126.101 (LAN: 8301, WAN: 8302)
Gossip encrypt: false, RPC-TLS: false, TLS-Incoming: false
Atlas:
==> Log data will now stream in as it occurs:
2015/07/03 11:30:32 [INFO] serf: EventMemberJoin: n2 10.10.126.101
2015/07/03 11:30:32 [INFO] serf: EventMemberJoin: n2.dc1 10.10.126.101
2015/07/03 11:30:32 [INFO] raft: Node at 10.10.126.101:8300 [Follower] entering Follower state
2015/07/03 11:30:32 [INFO] consul: adding server n2 (Addr: 10.10.126.101:8300) (DC: dc1)
2015/07/03 11:30:32 [INFO] consul: adding server n2.dc1 (Addr: 10.10.126.101:8300) (DC: dc1)
2015/07/03 11:30:32 [ERR] agent: failed to sync remote state: No cluster leader
2015/07/03 11:30:33 [WARN] raft: EnableSingleNode disabled, and no known peers. Aborting election.
The startup logs of the two server agents show that n1 and n2 are not yet aware of any other cluster members. Taking n1 as an example, we can check the agent's current state with consul members and consul info:
$ consul members
Node Address Status Type Build Protocol DC
n1 10.10.105.71:8301 alive server 0.5.2 2 dc1
$ consul info
… …
consul:
bootstrap = false
known_datacenters = 1
leader = false
server = true
raft:
applied_index = 0
commit_index = 0
fsm_pending = 0
last_contact = never
last_log_index = 0
last_log_term = 0
last_snapshot_index = 0
last_snapshot_term = 0
num_peers = 0
state = Follower
term = 0
… …
As we can see, the agent on n1 is currently a Follower with bootstrap = false; n2 is in the same state. The cluster has not completed its bootstrap process.
We trigger the cluster bootstrap with the consul join command, running the following on n1:
$ consul join 10.10.126.101
Successfully joined cluster by contacting 1 nodes.
The consul join subcommand joins the current node to the cluster that contains the member 10.10.126.101 (that is, n2). The effect of the command can be observed in the logs of n1 and n2:
On n1:
2015/07/03 09:29:48 [INFO] agent: (LAN) joining: [10.10.126.101]
2015/07/03 09:29:48 [INFO] serf: EventMemberJoin: n2 10.10.126.101
2015/07/03 09:29:48 [INFO] agent: (LAN) joined: 1 Err:
2015/07/03 09:29:48 [INFO] consul: adding server n2 (Addr: 10.10.126.101:8300) (DC: dc1)
2015/07/03 09:29:48 [INFO] consul: Attempting bootstrap with nodes: [10.10.126.101:8300 10.10.105.71:8300]
2015/07/03 09:29:49 [INFO] consul: New leader elected: n2
2015/07/03 09:29:50 [INFO] agent: Synced service 'consul'
On n2:
2015/07/03 11:40:53 [INFO] serf: EventMemberJoin: n1 10.10.105.71
2015/07/03 11:40:53 [INFO] consul: adding server n1 (Addr: 10.10.105.71:8300) (DC: dc1)
2015/07/03 11:40:53 [INFO] consul: Attempting bootstrap with nodes: [10.10.126.101:8300 10.10.105.71:8300]
2015/07/03 11:40:54 [WARN] raft: Heartbeat timeout reached, starting election
2015/07/03 11:40:54 [INFO] raft: Node at 10.10.126.101:8300 [Candidate] entering Candidate state
2015/07/03 11:40:54 [INFO] raft: Election won. Tally: 2
2015/07/03 11:40:54 [INFO] raft: Node at 10.10.126.101:8300 [Leader] entering Leader state
2015/07/03 11:40:54 [INFO] consul: cluster leadership acquired
2015/07/03 11:40:54 [INFO] consul: New leader elected: n2
2015/07/03 11:40:54 [INFO] raft: pipelining replication to peer 10.10.105.71:8300
2015/07/03 11:40:54 [INFO] consul: member 'n2' joined, marking health alive
2015/07/03 11:40:54 [INFO] consul: member 'n1' joined, marking health alive
2015/07/03 11:40:55 [INFO] agent: Synced service 'consul'
After the join, the two hosts know about each other and a leader election takes place; n2 is elected Leader.
On n2, we confirm the agent's state with consul info:
$ consul info
… …
consul:
bootstrap = false
known_datacenters = 1
leader = true
server = true
raft:
applied_index = 10
commit_index = 10
fsm_pending = 0
last_contact = never
last_log_index = 10
last_log_term = 1
last_snapshot_index = 0
last_snapshot_term = 0
num_peers = 1
state = Leader
term = 1
… …
$ consul members
Node Address Status Type Build Protocol DC
n2 10.10.126.101:8301 alive server 0.5.2 2 dc1
n1 10.10.105.71:8301 alive server 0.5.2 2 dc1
We can see that n2's state is now Leader, while n1 remains a Follower.
At this point, n1 and n2 are the two nodes of the Consul Cluster in datacenter dc1, both acting as Server nodes that maintain the cluster state. n2 has been elected Leader and n1 is a Follower.
What happens to the cluster if the Leader, n2, leaves? On n2, we tell the agent to leave the cluster and shut down with the consul leave command:
$ consul leave
Graceful leave complete
Agent log on n2:
2015/07/03 14:04:40 [INFO] agent.rpc: Accepted client: 127.0.0.1:35853
2015/07/03 14:04:40 [INFO] agent.rpc: Graceful leave triggered
2015/07/03 14:04:40 [INFO] consul: server starting leave
2015/07/03 14:04:40 [INFO] raft: Removed peer 10.10.105.71:8300, stopping replication (Index: 7)
2015/07/03 14:04:40 [INFO] raft: Removed ourself, transitioning to follower
2015/07/03 14:04:40 [INFO] raft: Node at 10.10.126.101:8300 [Follower] entering Follower state
2015/07/03 14:04:40 [INFO] serf: EventMemberLeave: n2.dc1 10.10.126.101
2015/07/03 14:04:40 [INFO] consul: cluster leadership lost
2015/07/03 14:04:40 [INFO] raft: aborting pipeline replication to peer 10.10.105.71:8300
2015/07/03 14:04:40 [INFO] consul: removing server n2.dc1 (Addr: 10.10.126.101:8300) (DC: dc1)
2015/07/03 14:04:41 [INFO] serf: EventMemberLeave: n2 10.10.126.101
2015/07/03 14:04:41 [INFO] consul: removing server n2 (Addr: 10.10.126.101:8300) (DC: dc1)
2015/07/03 14:04:41 [INFO] agent: requesting shutdown
2015/07/03 14:04:41 [INFO] consul: shutting down server
2015/07/03 14:04:42 [INFO] agent: shutdown complete
Log on n1:
2015/07/03 11:53:36 [INFO] serf: EventMemberLeave: n2 10.10.126.101
2015/07/03 11:53:36 [INFO] consul: removing server n2 (Addr: 10.10.126.101:8300) (DC: dc1)
2015/07/03 11:55:15 [ERR] agent: failed to sync remote state: No cluster leader
If we now run consul info on n1, its state is still Follower. In other words, in a two-server cluster, once one server leaves, the cluster is left without a Leader: Raft leader election requires a majority of the server nodes, and a lone server that was not bootstrapped as a single-node cluster will not elect itself. In a three-server cluster, if the Leader leaves, the remaining two can still elect a new Leader, but as soon as one more node leaves, the cluster again has no Leader. Of course, in a single-node bootstrap cluster (-bootstrap-expect 1) with only one server node, that server naturally elects itself Leader.
Now let's look at the cluster state on n1 with consul members:
$ consul members
Node Address Status Type Build Protocol DC
n1 10.10.105.71:8301 alive server 0.5.2 2 dc1
n2 10.10.126.101:8301 left server 0.5.2 2 dc1
The output shows that n2 is now in the left state. We restart n2 and see how the cluster state changes.
$ consul agent -server -bootstrap-expect 2 -data-dir /tmp/consul -node=n2 -bind=10.10.126.101 -ui-dir ./dist -dc=dc1
… …
==> Log data will now stream in as it occurs:
2015/07/03 14:13:46 [INFO] serf: EventMemberJoin: n2 10.10.126.101
2015/07/03 14:13:46 [INFO] raft: Node at 10.10.126.101:8300 [Follower] entering Follower state
2015/07/03 14:13:46 [INFO] consul: adding server n2 (Addr: 10.10.126.101:8300) (DC: dc1)
2015/07/03 14:13:46 [INFO] serf: EventMemberJoin: n2.dc1 10.10.126.101
2015/07/03 14:13:46 [INFO] consul: adding server n2.dc1 (Addr: 10.10.126.101:8300) (DC: dc1)
2015/07/03 14:13:46 [ERR] agent: failed to sync remote state: No cluster leader
2015/07/03 14:13:48 [WARN] raft: EnableSingleNode disabled, and no known peers. Aborting election.
… …
After restarting, n2 does not automatically rejoin the previous cluster; just as on its first start, it sees no peers and runs in isolation.
On n1 we run the join again: consul join 10.10.126.101
n1's log becomes:
2015/07/03 12:04:55 [INFO] consul: adding server n2 (Addr: 10.10.126.101:8300) (DC: dc1)
2015/07/03 12:04:56 [ERR] agent: failed to sync remote state: No cluster leader
n2's log becomes:
2015/07/03 14:16:00 [INFO] serf: EventMemberJoin: n1 10.10.105.71
2015/07/03 14:16:00 [INFO] consul: adding server n1 (Addr: 10.10.105.71:8300) (DC: dc1)
2015/07/03 14:16:00 [INFO] consul: New leader elected: n2
2015/07/03 14:16:01 [ERR] agent: failed to sync remote state: No cluster leader
n1 and n2 can no longer elect a Leader; according to consul info, both nodes are now Followers and the cluster remains leaderless.
This problem has been raised many times in Consul's GitHub repository issues, but the authors do not seem to treat it as a bug. The cause is that when n2 leaves, Consul changes the content of /tmp/consul/raft/peers.json from:
["10.10.105.71:8300", "10.10.126.101:8300"]
to
null
After n2 restarts, this file is left unchanged, still null, so on startup n2 does not automatically rejoin n1's cluster.
The official Outage Recovery guide explains how to recover the cluster in this situation. Let's test it:
We open /tmp/consul/raft/peers.json on both n1 and n2 and change the content on both to:
["10.10.126.101:8300","10.10.105.71:8300"]
Then we restart n2, this time adding the -rejoin flag:
$ consul agent -server -bootstrap-expect 2 -data-dir /tmp/consul -node=n2 -bind=10.10.126.101 -ui-dir ./dist -dc=dc1 -rejoin
…. …
2015/07/03 14:56:02 [WARN] raft: Election timeout reached, restarting election
2015/07/03 14:56:02 [INFO] raft: Node at 10.10.126.101:8300 [Candidate] entering Candidate state
2015/07/03 14:56:02 [INFO] raft: Election won. Tally: 2
2015/07/03 14:56:02 [INFO] raft: Node at 10.10.126.101:8300 [Leader] entering Leader state
2015/07/03 14:56:02 [INFO] consul: cluster leadership acquired
2015/07/03 14:56:02 [INFO] consul: New leader elected: n2
…….
Log on n1:
2015/07/03 12:44:52 [INFO] serf: EventMemberJoin: n2 10.10.126.101
2015/07/03 12:44:52 [INFO] consul: adding server n2 (Addr: 10.10.126.101:8300) (DC: dc1)
2015/07/03 12:44:54 [INFO] consul: New leader elected: n2
2015/07/03 12:44:55 [WARN] raft: Rejecting vote from 10.10.126.101:8300 since we have a leader: 10.10.126.101:8300
2015/07/03 12:44:56 [WARN] raft: Heartbeat timeout reached, starting election
2015/07/03 12:44:56 [INFO] raft: Node at 10.10.105.71:8300 [Candidate] entering Candidate state
2015/07/03 12:44:56 [ERR] raft: Failed to make RequestVote RPC to 10.10.126.101:8300: EOF
2015/07/03 12:44:57 [INFO] raft: Node at 10.10.105.71:8300 [Follower] entering Follower state
2015/07/03 12:44:57 [INFO] consul: New leader elected: n2
This time the Leader election succeeds again and the cluster state is restored.
Next, we start the client mode agent on n3:
$ consul agent -data-dir /tmp/consul -node=n3 -bind=10.10.126.187 -dc=dc1
==> WARNING: It is highly recommended to set GOMAXPROCS higher than 1
==> Starting Consul agent…
==> Starting Consul agent RPC…
==> Consul agent running!
Node name: 'n3'
Datacenter: 'dc1'
Server: false (bootstrap: false)
Client Addr: 127.0.0.1 (HTTP: 8500, HTTPS: -1, DNS: 8600, RPC: 8400)
Cluster Addr: 10.10.126.187 (LAN: 8301, WAN: 8302)
Gossip encrypt: false, RPC-TLS: false, TLS-Incoming: false
Atlas:
==> Log data will now stream in as it occurs:
2015/07/03 14:55:17 [INFO] serf: EventMemberJoin: n3 10.10.126.187
2015/07/03 14:55:17 [ERR] agent: failed to sync remote state: No known Consul servers
After running the join against n1 from n3, n3's log output looks like this:
2015/07/03 14:59:31 [INFO] agent: (LAN) joining: [10.10.105.71]
2015/07/03 14:59:31 [INFO] serf: EventMemberJoin: n2 10.10.126.101
2015/07/03 14:59:31 [INFO] serf: EventMemberJoin: n1 10.10.105.71
2015/07/03 14:59:31 [INFO] agent: (LAN) joined: 1 Err:
2015/07/03 14:59:31 [INFO] consul: adding server n2 (Addr: 10.10.126.101:8300) (DC: dc1)
2015/07/03 14:59:31 [INFO] consul: adding server n1 (Addr: 10.10.105.71:8300) (DC: dc1)
consul members on n3 now shows:
$ consul members
Node Address Status Type Build Protocol DC
n1 10.10.105.71:8301 alive server 0.5.2 2 dc1
n3 10.10.126.187:8301 alive client 0.5.2 2 dc1
n2 10.10.126.101:8301 alive server 0.5.2 2 dc1
Agents running in client mode can leave and restart freely without running into the problems we saw with server mode agents.
2. Service Registration and Discovery
We built the Consul Cluster in order to register and discover services. Consul supports two ways of registering a service: the service itself can call Consul's registration HTTP API after it starts, or the service can be registered by defining it in a configuration file. The Consul documentation recommends the latter approach for service configuration and registration.
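For reference, the first approach looks roughly like the minimal sketch below: on startup the service PUTs its own definition to the local agent's /v1/agent/service/register endpoint. The service ID "web3-1" is made up for illustration, and the exact payload field names should be double-checked against the Consul HTTP API documentation for your version; the definition mirrors the configuration-file example used later in this section.

// registerself.go: sketch of self-registration via the agent HTTP API.
// Assumes a local consul agent serving the HTTP API on 127.0.0.1:8500;
// the service ID "web3-1" is illustrative only.
package main

import (
    "bytes"
    "log"
    "net/http"
)

func main() {
    payload := []byte(`{
        "ID": "web3-1",
        "Name": "web3",
        "Tags": ["master"],
        "Address": "127.0.0.1",
        "Port": 10000,
        "Check": {
            "HTTP": "http://localhost:10000/health",
            "Interval": "10s"
        }
    }`)

    req, err := http.NewRequest("PUT",
        "http://127.0.0.1:8500/v1/agent/service/register",
        bytes.NewBuffer(payload))
    if err != nil {
        log.Fatal(err)
    }

    resp, err := http.DefaultClient.Do(req)
    if err != nil {
        log.Fatal(err)
    }
    defer resp.Body.Close()
    log.Println("register status:", resp.Status)
}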
Let's again use an example to walk through service configuration. We already have a Consul Cluster with three nodes: two server mode nodes and one client mode node. We plan to deploy a service named web3 on n2 and n3, so we need to add a Consul agent configuration file on each of those two nodes.
The Consul agent can be pointed at a configuration directory with -config-dir at startup. Taking n3 as an example, we can start it like this:
consul agent -data-dir /tmp/consul -node=n3 -bind=10.10.126.187 -dc=dc1 -config-dir=./conf
All files with a .json extension under ./conf will then be read by the Consul agent as configuration files.
Still using n3 as the example, we create a web3.json file in n3's consul agent configuration directory:
//web3.json
{
    "service": {
        "name": "web3",
        "tags": ["master"],
        "address": "127.0.0.1",
        "port": 10000,
        "checks": [
            {
                "http": "http://localhost:10000/health",
                "interval": "10s"
            }
        ]
    }
}
This configuration is the service definition we provide for web3 on node n3. It contains the service's name, address, port, and so on, plus a health check configuration: here the service is checked every 10s, which requires the service to handle requests to /health. We create the same configuration file on n2 (n2 must be restarted with the -config-dir option), and that is all there is to service registration.
In the logs of the restarted n2 and n3 we will see error entries such as:
2015/07/06 13:48:11 [WARN] agent: http request failed 'http://localhost:10000/health' : Get http://localhost:10000/health: dial tcp 127.0.0.1:10000: connect failed
These are the agent's check logs for the service we defined. To keep this error from flooding the logs, we deploy a web3 service instance on each of n2 and n3. Taking the web3 instance on n3 as the example, its source is as follows:
//web3.go
package main

import (
    "fmt"
    "net/http"
)

func handler(w http.ResponseWriter, r *http.Request) {
    fmt.Println("hello Web3! This is n3")
    fmt.Fprintf(w, "Hello Web3! This is n3")
}

func healthHandler(w http.ResponseWriter, r *http.Request) {
    fmt.Println("health check!")
}

func main() {
    http.HandleFunc("/", handler)
    http.HandleFunc("/health", healthHandler)
    http.ListenAndServe(":10000", nil)
}
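Assuming Go is installed on n2 and n3 (the exact invocation below is illustrative), each instance can be started and its health endpoint checked in place:

$ go run web3.go &
$ curl -i http://localhost:10000/health

Since healthHandler writes nothing to the response body, curl should just show an HTTP 200 status line, while "health check!" appears on the service's stdout.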
Once the web3 instances on n2 and n3 are running, we can try to discover these services.
Consul offers two ways to discover services: one is to query the HTTP API for the services that exist, the other is to use the DNS server built into the consul agent. The difference is that the latter adjusts the list of available service nodes dynamically according to the real-time results of the service checks. Here we focus on the concrete steps for DNS-based service discovery.
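As an aside, a minimal sketch of the HTTP API route is shown below: it asks the local agent's /v1/health/service/<name>?passing endpoint for the instances that are currently passing their health checks. The response fields used here (Node.Node, Node.Address, Service.Port) follow Consul's documented health endpoint, but verify them against your Consul version before relying on this sketch.

// httpdiscovery.go: sketch of service discovery via the HTTP API.
// Assumes a local consul agent serving the HTTP API on 127.0.0.1:8500.
package main

import (
    "encoding/json"
    "fmt"
    "log"
    "net/http"
)

// Only the response fields this sketch cares about.
type serviceEntry struct {
    Node struct {
        Node    string
        Address string
    }
    Service struct {
        Service string
        Port    int
    }
}

func main() {
    // "?passing" filters out instances whose health checks are failing.
    resp, err := http.Get("http://127.0.0.1:8500/v1/health/service/web3?passing")
    if err != nil {
        log.Fatal(err)
    }
    defer resp.Body.Close()

    var entries []serviceEntry
    if err := json.NewDecoder(resp.Body).Decode(&entries); err != nil {
        log.Fatal(err)
    }

    for _, e := range entries {
        fmt.Printf("%s - %s:%d\n", e.Node.Node, e.Node.Address, e.Service.Port)
    }
}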
With the web3 service configured and deployed, we can query its details via DNS. Consul's built-in domain for services is "NAME.service.consul", so the domain for web3 is web3.service.consul. Let's check with the dig tool on n1. Note that this is n1, where web3 is neither defined nor deployed, yet the service information has already been synchronized to n1; the information is consistent across the cluster:
$ dig @127.0.0.1 -p 8600 web3.service.consul SRV
; <<>> DiG 9.9.5-3-Ubuntu <<>> @127.0.0.1 -p 8600 web3.service.consul SRV
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 6713
;; flags: qr aa rd; QUERY: 1, ANSWER: 2, AUTHORITY: 0, ADDITIONAL: 2
;; WARNING: recursion requested but not available
;; QUESTION SECTION:
;web3.service.consul. IN SRV
;; ANSWER SECTION:
web3.service.consul. 0 IN SRV 1 1 10000 n2.node.dc1.consul.
web3.service.consul. 0 IN SRV 1 1 10000 n3.node.dc1.consul.
;; ADDITIONAL SECTION:
n2.node.dc1.consul. 0 IN A 127.0.0.1
n3.node.dc1.consul. 0 IN A 127.0.0.1
;; Query time: 2 msec
;; SERVER: 127.0.0.1#8600(127.0.0.1)
;; WHEN: Mon Jul 06 12:12:53 CST 2015
;; MSG SIZE rcvd: 219
In the ANSWER SECTION we get two results: there is one web3 service on n2 and one on n3. We used the SRV type in the dig query because we need not only the IP address of the service but also its port number.
Now we stop the web3 service on n2, wait 10s, and query again:
$ dig @127.0.0.1 -p 8600 web3.service.consul SRV
; <<>> DiG 9.9.5-3-Ubuntu <<>> @127.0.0.1 -p 8600 web3.service.consul SRV
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 25136
;; flags: qr aa rd; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1
;; WARNING: recursion requested but not available
;; QUESTION SECTION:
;web3.service.consul. IN SRV
;; ANSWER SECTION:
web3.service.consul. 0 IN SRV 1 1 10000 n3.node.dc1.consul.
;; ADDITIONAL SECTION:
n3.node.dc1.consul. 0 IN A 127.0.0.1
;; Query time: 3 msec
;; SERVER: 127.0.0.1#8600(127.0.0.1)
;; WHEN: Mon Jul 06 12:16:39 CST 2015
;; MSG SIZE rcvd: 128
The result shows that only the single web3 instance on n3 is still available. The following Consul agent log entry:
dns: node 'n2' failing health check 'service web3' check', dropping from service 'web3'
shows that the consul agent removed the web3 instance that failed its health check from the result list, so a web3 client performing service discovery only sees the nodes that are currently available. In practice this greatly reduces the effort of implementing service discovery on the client side. The consul agent's DNS answers also support the ordering strategies common to DNS servers, at least round-robin: if you run the dig command several times, you will see n2 and n3 returned in different orders. One more point worth noting: to keep DNS caches from distorting the agent's query results, all answers returned by the consul agent have a TTL of 0 by default, that is, DNS result caching is not supported.
Next, we implement a demo-level service discovery client in Go, using the third-party DNS client library "github.com/miekg/dns".
// servicediscovery.go
package main

import (
    "fmt"
    "log"

    "github.com/miekg/dns"
)

const (
    srvName   = "web3.service.consul"
    agentAddr = "127.0.0.1:8600"
)

func main() {
    c := new(dns.Client)

    // Ask the consul agent DNS for the SRV records of the service.
    m := new(dns.Msg)
    m.SetQuestion(dns.Fqdn(srvName), dns.TypeSRV)
    m.RecursionDesired = true

    r, _, err := c.Exchange(m, agentAddr)
    if r == nil {
        log.Fatalf("dns query error: %s\n", err.Error())
    }
    if r.Rcode != dns.RcodeSuccess {
        log.Fatalf("dns query error: %v\n", r.Rcode)
    }

    // For each SRV answer, resolve the target node name to its A record.
    for _, a := range r.Answer {
        srv, ok := a.(*dns.SRV)
        if !ok {
            continue
        }
        m.SetQuestion(dns.Fqdn(srv.Target), dns.TypeA)
        r1, _, err := c.Exchange(m, agentAddr)
        if r1 == nil {
            log.Fatalf("dns query error: %s\n", err.Error())
        }
        for _, a1 := range r1.Answer {
            if aRec, ok := a1.(*dns.A); ok {
                fmt.Printf("%s – %s:%d\n", srv.Target, aRec.A, srv.Port)
            }
        }
    }
}
Running the program:
$ go run servicediscovery.go
n2.node.dc1.consul. – 10.10.126.101:10000
n3.node.dc1.consul. – 10.10.126.187:10000
Note that the checks for the services on a node are performed by the agent on that node; once the agent on a node fails, all the services on that node will also be treated as unavailable. For example, if we stop the agent on n3, a query for web3 nodes will only return the instance on n2.
In a real program we could, as in the demo above, perform a DNS query for every request, but that is expensive. A slightly more elaborate approach combines a local cache of the DNS results with periodic refresh and a re-query whenever a failure is encountered, or makes use of Consul's watch command; a rough sketch of such a cache follows below.
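A minimal sketch of that idea, reusing the SRV lookup from the demo above, might look like this. The Endpoint type, the 30-second refresh interval, and the error handling are assumptions made for illustration only.

// cachedresolver.go: sketch of a locally cached SRV lookup with periodic refresh.
package main

import (
    "fmt"
    "log"
    "sync"
    "time"

    "github.com/miekg/dns"
)

type Endpoint struct {
    Target string
    Port   uint16
}

type cachedResolver struct {
    mu        sync.RWMutex
    endpoints []Endpoint
}

// lookup performs one SRV query against the local consul agent DNS.
func lookup(name, agentAddr string) ([]Endpoint, error) {
    c := new(dns.Client)
    m := new(dns.Msg)
    m.SetQuestion(dns.Fqdn(name), dns.TypeSRV)
    r, _, err := c.Exchange(m, agentAddr)
    if err != nil {
        return nil, err
    }
    var eps []Endpoint
    for _, a := range r.Answer {
        if srv, ok := a.(*dns.SRV); ok {
            eps = append(eps, Endpoint{Target: srv.Target, Port: srv.Port})
        }
    }
    return eps, nil
}

// Endpoints returns the cached answer; callers can re-query on failure.
func (cr *cachedResolver) Endpoints() []Endpoint {
    cr.mu.RLock()
    defer cr.mu.RUnlock()
    return cr.endpoints
}

// refreshLoop refreshes the cache on a fixed interval, keeping stale data on errors.
func (cr *cachedResolver) refreshLoop(name, agentAddr string, interval time.Duration) {
    for range time.Tick(interval) {
        eps, err := lookup(name, agentAddr)
        if err != nil {
            log.Println("refresh failed:", err)
            continue
        }
        cr.mu.Lock()
        cr.endpoints = eps
        cr.mu.Unlock()
    }
}

func main() {
    cr := &cachedResolver{}
    if eps, err := lookup("web3.service.consul", "127.0.0.1:8600"); err == nil {
        cr.endpoints = eps
    }
    go cr.refreshLoop("web3.service.consul", "127.0.0.1:8600", 30*time.Second)

    // The application would call cr.Endpoints() on each request instead of querying DNS.
    for i := 0; i < 3; i++ {
        fmt.Println(cr.Endpoints())
        time.Sleep(10 * time.Second)
    }
}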
The above is only an introduction to Consul. In real-world scenarios there are many more things an ideal solution has to take into account. Consul itself has evolved to version 0.5.2 and still has rough edges, but it is already used by many companies in production environments. Consul does not stand alone: to take full advantage of it, a real solution also needs to consider how it combines with tools such as Docker, HAProxy, and Mesos.
© 2015, bigwhite. All rights reserved.