Set up a Ceph cluster on SUSE Linux Enterprise Server 11 SP3 in ten minutes
Environment overview:
One mon node, one mds node, and three osd nodes:
192.168.239.131 ceph-mon
192.168.239.132 ceph-mds
192.168.239.160 ceph-osd0
192.168.239.161 ceph-osd1
192.168.239.162 ceph-osd2
1. Register an account at suse.com and download the SLES 11 SP3 and SUSE Cloud 4 ISOs.
2. Install the system on every node, then configure two installation repositories: one for the OS and one for SUSE Cloud 4 (a sketch follows below).
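A minimal sketch of the repository setup, assuming the two ISOs were copied to /root on each node; the ISO file names below are placeholders for the images you downloaded:

# Register both ISOs as zypper repositories and refresh the metadata
zypper ar "iso:/?iso=/root/SLES-11-SP3-DVD-x86_64-GM-DVD1.iso" SLES11-SP3
zypper ar "iso:/?iso=/root/SUSE-CLOUD-4-x86_64-GM-DVD1.iso" SUSE-Cloud-4
zypper refresh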
3. Configure passwordless ssh logins as root from ceph-mon to the other nodes, for example:
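A possible sequence on ceph-mon, assuming root ssh logins with a password are still allowed at this point (ssh-copy-id prompts once per node):

# Generate a key pair without a passphrase and push the public key to each node
ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa
for h in ceph-mds ceph-osd0 ceph-osd1 ceph-osd2; do
    ssh-copy-id root@$h
done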
4. Copy /etc/hosts from ceph-mon to the other nodes, for example:
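This assumes /etc/hosts on ceph-mon already contains the five entries listed above:

# Distribute the hosts file so all nodes resolve each other by name
for h in ceph-mds ceph-osd0 ceph-osd1 ceph-osd2; do
    scp /etc/hosts root@$h:/etc/hosts
done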
5. Install ceph on every node:
zypper -n install ceph ceph-radosgw
6. On the ceph-mon node, run setup.sh, which calls init-mon.sh, init-osd.sh, and init-mds.sh to configure the mon, osd, and mds daemons automatically.
setup.sh and init-mon.sh work inside a ./ceph folder under the current directory, so be sure to run them from a directory outside /etc.
The scripts are listed below (for reference only):
(1) setup.sh
#!/bin/bash

# Stop all existing OSD daemons
printf "Killing all ceph-osd nodes..."
for i in 0 1 2; do
    ssh ceph-osd$i "killall -TERM ceph-osd"
    sleep 1
done
printf "Done\n"

# Initialize mon on this system
killall -TERM ceph-mon
printf "Initializing ceph-mon on current node..."
./init-mon.sh
cd ./ceph
printf "Done\n"

# Initialize osd services on the osd nodes
for i in 0 1 2; do
    ../init-osd.sh ceph-osd$i $i
    sleep 1
done

# Initialize mds on the remote node
printf "Initializing mds on ceph-mds..."
../init-mds.sh ceph-mds
printf "Done\n"
(2) init-mon.sh
#!/bin/bash

fsid=$(uuidgen)
mon_node=$(hostname)
mon_ip=192.168.239.131
cluster_net=192.168.239.0/24
public_net=192.168.1.0/24
mon_data=/data/$mon_node

killall -TERM ceph-mon

# Clean up leftovers from previous runs
rm -f /etc/ceph/ceph.conf /etc/ceph/*.keyring
rm -f /var/lib/ceph/bootstrap-mds/* /var/lib/ceph/bootstrap-osd/*
rm -f /var/log/ceph/*.log

confdir=./ceph
rm -fr $confdir
mkdir -p $confdir
cd $confdir

rm -fr $mon_data
mkdir -p $mon_data

cat > ceph.conf << EOF
[global]
fsid = $fsid
mon initial members = $mon_node
mon host = $mon_ip
public network = $public_net
cluster network = $cluster_net
auth cluster required = cephx
auth service required = cephx
auth client required = cephx
osd journal size = 1024
filestore xattr use omap = true
EOF

# Create bootstrap keyrings for osd and mds
ceph-authtool --create-keyring bootstrap-osd.keyring --gen-key -n client.bootstrap-osd
ceph-authtool --create-keyring bootstrap-mds.keyring --gen-key -n client.bootstrap-mds

# Create the mon and admin keyrings
ceph-authtool --create-keyring ceph.mon.keyring --gen-key -n mon. --cap mon 'allow *'
ceph-authtool --create-keyring ceph.client.admin.keyring --gen-key -n client.admin --set-uid=0 \
    --cap mon 'allow *' --cap osd 'allow *' --cap mds 'allow'
ceph-authtool ceph.mon.keyring --import-keyring ceph.client.admin.keyring

# Build the initial monmap
monmaptool --create --add $mon_node $mon_ip --fsid $(grep fsid ceph.conf | awk '{ print $NF}') monmap

cp -a ceph.conf /etc/ceph
cp -a ceph.client.admin.keyring /etc/ceph

# Make the filesystem for ceph-mon
ceph-mon --mkfs -i $mon_node --monmap monmap --keyring ceph.mon.keyring --mon-data $mon_data

# Start the ceph-mon service
ceph-mon -i $mon_node --mon-data $mon_data

# Register the bootstrap keyrings
ceph auth add client.bootstrap-mds mon 'allow profile bootstrap-mds' -i bootstrap-mds.keyring
ceph auth add client.bootstrap-osd mon 'allow profile bootstrap-osd' -i bootstrap-osd.keyring
(3) init-osd.sh
#!/bin/bash

if [ $# -lt 2 ]; then
    printf "Usage:%s {host} {osd num}\n" $0
    exit 1
fi

host=$1
osd_num=$2

# Stop any running osd on the target host and clean old data
ssh $host "killall -TERM ceph-osd"
ssh $host "rm -f /var/lib/ceph/bootstrap-osd/keyring"
ssh $host "rm -fr /data/osd.$osd_num/"

ssh $host "mkdir -p /var/lib/ceph/bootstrap-osd"
ssh $host "mkdir -p /data/osd.$osd_num"

# Distribute the cluster configuration and keyrings
scp ceph.conf ceph.client.admin.keyring $host:/etc/ceph
scp bootstrap-osd.keyring $host:/var/lib/ceph/bootstrap-osd/ceph.keyring

# Create, register and start the osd
ssh $host "ceph osd create"
ssh $host "ceph-osd -i $osd_num --osd-data /data/osd.$osd_num --osd-journal /data/osd.$osd_num/journal --mkfs --mkkey"
ssh $host "ceph auth add osd.$osd_num osd 'allow *' mon 'allow profile osd' -i /data/osd.$osd_num/keyring"
ssh $host "ceph osd crush add-bucket $host host"
ssh $host "ceph osd crush move $host root=default"
ssh $host "ceph osd crush add osd.$osd_num 1.0 host=$host"
ssh $host "ceph-osd -i $osd_num --osd-data /data/osd.$osd_num --osd-journal /data/osd.$osd_num/journal"
(4) init-mds.sh
#!/bin/bash

if [ $# -lt 1 ]; then
    printf "Usage:%s {host}\n" $0
    exit 1
fi

mds_host=$1
mds_name=mds.$mds_host
mds_data=/data/$mds_name
keyfile=ceph.$mds_host.keyring
mon_host=ceph-mon:6789

# Stop any running mds daemon first and reset its data directory
ssh $mds_host "killall -TERM ceph-mds"
ssh $mds_host "rm -f $mds_data/*"
ssh $mds_host "mkdir -p $mds_data"

# Remove the old keyring file
rm -f $keyfile

# Create a new keyring and register the mds key
ceph-authtool -C -g -n $mds_name $keyfile
ceph auth add $mds_name mon 'allow profile mds' osd 'allow rwx' mds 'allow' -i $keyfile

# Distribute the configuration and keyrings, then start the mds
scp /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring $mds_host:/etc/ceph
scp $keyfile $mds_host:$mds_data/keyring

ssh $mds_host "ceph-mds -i $mds_host -n $mds_name -m $mon_host --mds-data=/data/mds.$mds_host"
After the scripts finish, the services are started automatically. Check the cluster status on the ceph-mon node:
ceph-mon:~ # ceph -s
    cluster 266900a9-b1bb-4b1f-9bd0-c509578aa9c9
     health HEALTH_OK
     monmap e1: 1 mons at {ceph-mon=192.168.239.131:6789/0}, election epoch 2, quorum 0 ceph-mon
     mdsmap e4: 1/1/1 up {0=ceph-mds=up:active}
     osdmap e17: 3 osds: 3 up, 3 in
      pgmap v23: 192 pgs, 3 pools, 1884 bytes data, 20 objects
            3180 MB used, 45899 MB / 49080 MB avail
                 192 active+clean
OSD status:
ceph-mon:~ # ceph osd tree
# id    weight  type name       up/down reweight
-1      3       root default
-2      1               host ceph-osd0
0       1                       osd.0   up      1
-3      1               host ceph-osd1
1       1                       osd.1   up      1
-4      1               host ceph-osd2
2       1                       osd.2   up      1
Check the process on the ceph-mon node:
ceph-mon:~ # ps ax | grep ceph-mon
 8993 pts/0    Sl     0:00 ceph-mon -i ceph-mon --mon-data /data/ceph-mon
Check the processes on the ceph-osdX nodes:
ceph-osd0:~ # ps ax | grep ceph-osd
13140 ?        Ssl    0:02 ceph-osd -i 0 --osd-data /data/osd.0 --osd-journal /data/osd.0/journal
Check the process on the ceph-mds node:
ceph-mds:~ # ps ax | grep ceph-mds
42260 ?        Ssl    0:00 ceph-mds -i ceph-mds -n mds.ceph-mds -m ceph-mon:6789 --mds-data=/data/mds.ceph-mds
7. Two ways to mount cephfs
(1) mount.ceph
Because the kernels in the SLES 11 series do not yet include the ceph module, you need to install a newer kernel on the client to get mount.ceph. The mount.ceph command is used as follows:
mount.ceph {mon ip/host}:/ {mount point} -o name=admin,secret={admin key}
mount.ceph ceph-mon:/ /mnt/cephfs -v -o name=admin,secret=AQD5jp5UqPRtCRAAvpRyhlNI0+qEHjZYqEZw8A==
The key passed to secret= is taken from /etc/ceph/ceph.client.admin.keyring. For security, use secretfile= to point at a file containing the key instead, so the key does not end up in the shell history. For example:
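A sketch of the secretfile variant; /etc/ceph/admin.secret is only an example path, and the file must contain nothing but the base64 key (the "key = " value from ceph.client.admin.keyring):

# Extract the admin key into a protected file and mount with secretfile=
grep 'key = ' /etc/ceph/ceph.client.admin.keyring | awk '{print $NF}' > /etc/ceph/admin.secret
chmod 600 /etc/ceph/admin.secret
mount.ceph ceph-mon:/ /mnt/cephfs -o name=admin,secretfile=/etc/ceph/admin.secret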
(2) ceph-fuse
When the kernel ceph module is not available, you can mount the file system with ceph-fuse instead:
ceph-fuse -m ceph-mon:6789 /mnt/cephfs
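If the client is not one of the cluster nodes, it also needs the cluster configuration and a keyring before ceph-fuse can authenticate. A rough sequence run from ceph-mon, assuming a client host named "client" with the same installation repositories (host name and package availability are assumptions):

# Copy the config and admin keyring, install ceph-fuse, then mount on the client
scp /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring root@client:/etc/ceph/
ssh root@client "zypper -n install ceph-fuse && mkdir -p /mnt/cephfs"
ssh root@client "ceph-fuse -m ceph-mon:6789 /mnt/cephfs"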
8. Check the mount:
ceph-mon:/etc/ceph # df -Ph
Filesystem               Size  Used Avail Use% Mounted on
/dev/mapper/rootvg-root   12G  5.3G  5.7G  49% /
udev                      12G  5.3G  5.7G  49% /dev
tmpfs                     12G  5.3G  5.7G  49% /dev/shm
/dev/sda1                185M   36M  141M  21% /boot
/dev/sdb1                 16G   35M   16G   1% /data
192.168.239.131:/         48G  3.2G   45G   7% /mnt/cephfs
Source: http://my.oschina.net/u/2244328/blog/361370