Ceph monitor failure recovery

Check the ceph health status:

[root@bgw-os-node151 ~]# ceph health

HEALTH_OK

[root@bgw-os-node151 ~]# ceph health detail

HEALTH_OK

[root@bgw-os-node151 ~]# ceph mon stat

e2: 3 mons at {bgw-os-node151=10.240.216.151:6789/0,bgw-os-node152=10.240.216.152:6789/0,bgw-os-node153=10.240.216.153:6789/0}, election epoch 12, quorum 0,1,2 bgw-os-node151,bgw-os-node152,bgw-os-node153

Failure 1: a ceph-mon process exits abnormally while the host itself keeps running normally

Error messages:

[root@bgw-os-node151 ~]# ceph health detail

HEALTH_WARN 1 mons down, quorum 0,1 bgw-os-node151,bgw-os-node152

mon.bgw-os-node153 (rank 2) addr 10.240.216.153:6789/0 is down (out of quorum)

Solution

For this kind of failure, simply restarting the corresponding mon process restores the cluster:

[root@bgw-os-node153 ceph]# service ceph -c /etc/ceph/ceph.conf start mon.bgw-os-node153

=== mon.bgw-os-node153 ===

Starting Ceph mon.bgw-os-node153 on bgw-os-node153...

Starting ceph-create-keys on bgw-os-node153...

[root@bgw-os-node153 ceph]# ps aux | grep mon

dbus     2215  0.0  0.0 21588  2448 ?        Ss  May08   0:00 dbus-daemon --system

root    18516  0.1  0.0 151508 15612 pts/0    Sl  14:57   0:00 /usr/bin/ceph-mon -i bgw-os-node153 --pid-file /var/run/ceph/mon.bgw-os-node153.pid -c /etc/ceph/ceph.conf --cluster ceph

root    18544  0.0  0.0 103308 2092 pts/0    S+   14:57  0:00 grep mon

[root@bgw-os-node153 ceph]# ceph health detail

HEALTH_OK
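On newer Ceph releases the daemons are managed by systemd rather than the SysV init script used above; in that case the equivalent restart would look like the following (an assumption, not part of the original walkthrough; the unit instance name is the mon's id):

$ systemctl start ceph-mon@bgw-os-node153    # assumed systemd unit name on newer releases
$ systemctl status ceph-mon@bgw-os-node153   # confirm the daemon came back up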

Failure 2: more than half of the mon processes in the cluster are down

Generally speaking, a ceph cluster runs 2n+1 monitors (n >= 0), with at least 3 in production. As long as the number of healthy monitors is >= n+1, ceph's Paxos algorithm can keep the system running normally. For example, with 3 monitors (n = 1) at least 2 must stay up, and with 5 (n = 2) at least 3. So with 3 nodes, only one may be down at any given time. The probability of 2 nodes failing at the same time is fairly small, but what if 2 do go down?

If more than half of the ceph monitor nodes are down, the Paxos algorithm can no longer form a quorum, and the ceph cluster blocks all operations against it until a majority of monitor nodes recovers. See http://ceph.com/docs/argonaut/ops/manage/failures/mon/
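While the cluster is healthy, the current quorum can be checked from any node with the standard CLI (this request goes through the cluster, so it only works while a majority of mons is up):

$ ceph quorum_status --format json-pretty   # lists quorum members and the current election epoch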

Ceph mon node failure handling, case by case

1) Case 1: at least one of the 2 failed nodes can be recovered, i.e. its monitor metadata is still intact. Then it is enough to restart the ceph-mon process, as above. Recommendation: run monitors on RAID-backed machines, so that even after a hardware failure recovery stays straightforward.

 

2) Case 2: the metadata on both failed nodes has been corrupted. How do we recover from that?

First, look at the quorum status before the failure:

[root@bgw-os-node151 ~]# ceph --cluster=cluster1 --admin-daemon /var/run/ceph/ceph-mon.bgw-os-node151.asok quorum_status

(the quorum_status output appeared as a screenshot in the original post)
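For reference, quorum_status output on a release of this vintage looks roughly like the following (illustrative values only, assumed rather than taken from the screenshot):

{ "election_epoch": 12,
  "quorum": [ 0, 1, 2 ],
  "quorum_names": [ "bgw-os-node151", "bgw-os-node152", "bgw-os-node153" ],
  "quorum_leader_name": "bgw-os-node151",
  "monmap": { "epoch": 2, "fsid": "00000000-0000-0000-0000-000000000001", ... } }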

Now look at the quorum status after the failure (stop two of the three monitors to simulate it):

[root@bgw-os-node153 ceph]# service ceph -c /etc/ceph/ceph.conf stop mon.bgw-os-node153

=== mon.bgw-os-node153 ===

Stopping Ceph mon.bgw-os-node153 on bgw-os-node153...kill 18516...done

[root@bgw-os-node153 ceph]# ps aux | grep mon

dbus     2215  0.0  0.0 21588  2448 ?        Ss  May08   0:00 dbus-daemon --system

root    18903  0.0  0.0 103308 2040 pts/0    S+   15:42  0:00 grep mon

[root@bgw-os-node152 ~]# service ceph -c /etc/ceph/ceph.conf stop mon.bgw-os-node152

=== mon.bgw-os-node152 ===

Stopping Ceph mon.bgw-os-node152 on bgw-os-node152...kill 23144...done

[root@bgw-os-node152 ~]# ps aux | grep mon

dbus     2968  0.0  0.0  21588  2408 ?       Ss   May08   0:00 dbus-daemon --system

root    13180  0.0  0.0 103308 2096 pts/0    S+   15:42  0:00 grep mon

[root@bgw-os-node151 ~]# ceph --cluster=cluster1 --admin-daemon /var/run/ceph/ceph-mon.bgw-os-node151.asok config show

[root@bgw-os-node151 ~]# ceph --cluster=cluster1 --admin-daemon /var/run/ceph/ceph-mon.bgw-os-node151.asok quorum_status

(the post-failure output also appeared as a screenshot in the original post)

At this point every ceph operation that goes over the network is blocked, but the monitor can still be reached through its local socket.

[root@bgw-os-node151 ~]# telnet 10.240.216.151 6789

Trying 10.240.216.151...

Connected to 10.240.216.151.

Escape character is '^]'.

ceph v027        

telnet> quit         <-- press ctrl+], then type quit

Connection closed.
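Even without quorum, the surviving monitor can still be queried through its local admin socket; mon_status reports that daemon's own view, including the fact that it currently has no quorum:

$ ceph --cluster=cluster1 --admin-daemon /var/run/ceph/ceph-mon.bgw-os-node151.asok mon_status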

Solution:

How to add a monitor node: http://ceph.com/docs/argonaut/ops/manage/grow/mon/#adding-mon

$ ceph mon getmap -o /tmp/monmap           # provides fsid and existing monitor addrs
$ ceph auth export mon. -o /tmp/monkey     # mon. auth key
$ ceph-mon -i newname --mkfs --monmap /tmp/monmap --keyring /tmp/monkey

Start the new monitor node:

$ ceph-mon -i newname --public-addr <ip:port>

But:

[root@bgw-os-node151 ~]# ceph mon getmap -o /tmp/monmap

2015-06-01 15:54:20.790171 7fc5145c4700  0 -- 10.240.216.151:0/1030319 >> 10.240.216.153:6789/0 pipe(0x7fc500000c00 sd=4 :0 s=1 pgs=0 cs=0 l=1 c=0x7fc500000e70).fault

Exporting the monmap fails!!! Without a quorum the monitors cannot serve the request, so the ceph CLI just keeps faulting. So what now???

This step is important: since the monmap cannot be exported from the cluster, we have to build one ourselves with monmaptool.

[root@bgw-os-node152 ~]# monmaptool --help

 usage: [--print] [--create [--clobber] [--fsid uuid]] [--generate] [--set-initial-members] [--add name 1.2.3.4:567] [--rm name] <mapfilename>

First, generate a monmap by hand on bgw-os-node152 (note: the fsid can be found in /etc/ceph/ceph.conf):

[root@bgw-os-node152 ~]# monmaptool --create --clobber --fsid 00000000-0000-0000-0000-000000000001 --add bgw-os-node151 10.240.216.151:6789 --add bgw-os-node152 10.240.216.152:6789 monmap

monmaptool: monmap file monmap

monmaptool: set fsid to 00000000-0000-0000-0000-000000000001

monmaptool: writing epoch 0 to monmap (2 monitors)
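Two quick sanity checks are worth doing here (both commands are standard; paths as used in this walkthrough): confirm the fsid against ceph.conf, and print the freshly built monmap back out:

$ grep fsid /etc/ceph/ceph.conf   # the fsid passed to monmaptool must match this
$ monmaptool --print monmap       # should show epoch 0, the fsid, and both mon entries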

Then copy over the mon key from a healthy monitor node:

[root@bgw-os-node151 ~]# cat /var/lib/ceph/mon/ceph-bgw-os-node151/keyring

[mon.]

       key = AQCMc01V0JvDDBAAbNpMnCTznrypm7d3qO2eyw==

       caps mon = "allow *"

[root@bgw-os-node152 ~]# vim /tmp/keyring         # add the following

[mon.]

       key = AQCMc01V0JvDDBAAbNpMnCTznrypm7d3qO2eyw==

       caps mon = "allow *"
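Editing the keyring with vim works; alternatively, the same file can be produced with ceph-authtool (an equivalent alternative, not what the original post did; the key is the one copied from bgw-os-node151 above):

$ ceph-authtool /tmp/keyring --create-keyring --name=mon. --add-key=AQCMc01V0JvDDBAAbNpMnCTznrypm7d3qO2eyw== --cap mon 'allow *'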

Then initialize the new mon node:

[root@bgw-os-node152 ~]# ceph-mon --cluster cluster1 -i bgw-os-node152 --mkfs --monmap /root/monmap --keyring /tmp/keyring -c /etc/ceph/ceph.conf

ceph-mon: set fsid to 00000000-0000-0000-0000-000000000001

ceph-mon: created monfs at /var/lib/ceph/mon/cluster1-bgw-os-node152 for mon.bgw-os-node152

Finally, start the failed node:

[root@bgw-os-node152 ~]# ceph-mon --cluster cluster1 -i bgw-os-node152 --public-addr 10.240.216.152:6789 -c /etc/ceph/ceph.conf

[root@bgw-os-node152 ~]# ps aux | grep mon

dbus     2968  0.0  0.0 21588  2408 ?        Ss  May08   0:00 dbus-daemon --system

root    14717  1.0  0.0 165792 30768 pts/0    Sl  16:33   0:00 ceph-mon --cluster cluster1 -i bgw-os-node152 --public-addr 10.240.216.152:6789 -c /etc/ceph/ceph.conf

root    14732  0.0  0.0 103308 2092 pts/0    S+   16:33  0:00 grep mon

[root@bgw-os-node152 ~]# ceph status

   cluster 00000000-0000-0000-0000-000000000001

    health HEALTH_WARN 1 mons down, quorum 0,1 bgw-os-node151,bgw-os-node152

    monmap e2: 3 mons at {bgw-os-node151=10.240.216.151:6789/0,bgw-os-node152=10.240.216.152:6789/0,bgw-os-node153=10.240.216.153:6789/0}, election epoch 22, quorum 0,1 bgw-os-node151,bgw-os-node152

    mdsmap e14: 1/1/1 up {0=bgw-os-node153=up:active}, 1 up:standby

    osdmap e98: 12 osds: 12 up, 12 in

     pgmap v12319: 384 pgs, 6 pools, 417 MB data, 81 objects

           13575 MB used, 3337 GB / 3350 GB avail

                384 active+clean


The data is intact :)
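Note that with 2 of the 3 monitors up, quorum is restored, so from this point on the live monmap can also be fetched the normal way, i.e. the very command that failed earlier now works:

$ ceph mon getmap -o /tmp/monmap   # succeeds again now that a majority of mons is up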

Add the third node:

[root@bgw-os-node153 ~]# monmaptool --create --clobber --fsid 00000000-0000-0000-0000-000000000001 --add bgw-os-node151 10.240.216.151:6789 --add bgw-os-node152 10.240.216.152:6789 --add bgw-os-node153 10.240.216.153:6789 monmap

monmaptool: monmap file monmap

monmaptool: set fsid to 00000000-0000-0000-0000-000000000001

monmaptool: writing epoch 0 to monmap (3 monitors)

[root@bgw-os-node153 ~]# vim /tmp/keyring

[mon.] 

       key = AQCMc01V0JvDDBAAbNpMnCTznrypm7d3qO2eyw==

       caps mon = "allow *"

[root@bgw-os-node153 ~]# ceph-mon --cluster cluster1 -i bgw-os-node153 --mkfs --monmap /root/monmap --keyring /tmp/keyring -c /etc/ceph/ceph.conf

ceph-mon: set fsid to 00000000-0000-0000-0000-000000000001

ceph-mon: created monfs at /var/lib/ceph/mon/cluster1-bgw-os-node153 for mon.bgw-os-node153

[root@bgw-os-node153 ~]# ceph-mon --cluster cluster1 -i bgw-os-node153 --public-addr 10.240.216.153:6789 -c /etc/ceph/ceph.conf

[root@bgw-os-node153 ~]# ps aux | grep mon

dbus     2215  0.0  0.0 21588  2448 ?        Ss  May08   0:00 dbus-daemon --system

root    22023  1.7  0.1 166580 34096 pts/0    Sl  17:26   0:00 ceph-mon --cluster cluster1 -i bgw-os-node153 --public-addr 10.240.216.153:6789 -c /etc/ceph/ceph.conf

root    22040  0.0  0.0 103308 2032 pts/0    S+   17:26  0:00 grep mon

[root@bgw-os-node153 ~]# ceph status

   cluster 00000000-0000-0000-0000-000000000001

    health HEALTH_OK

    monmap e2: 3 mons at {bgw-os-node151=10.240.216.151:6789/0,bgw-os-node152=10.240.216.152:6789/0,bgw-os-node153=10.240.216.153:6789/0}, election epoch 24, quorum 0,1,2 bgw-os-node151,bgw-os-node152,bgw-os-node153

    mdsmap e14: 1/1/1 up {0=bgw-os-node153=up:active}, 1 up:standby

    osdmap e98: 12 osds: 12 up, 12 in

     pgmap v12319: 384 pgs, 6 pools, 417 MB data, 81 objects

           13575 MB used, 3337 GB / 3350 GB avail

                 384 active+clean


That completes re-adding the mon nodes!
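As a final check, confirm that all three monitors are back in quorum; the output should match the healthy state shown at the beginning of this post:

$ ceph mon stat
$ ceph health detail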