DRBD + Pacemaker: Automatic Switchover of DRBD Master/Slave Roles
Prerequisites:
The DRBD device name and the DRBD device's mount point must be identical on both nodes;
because the resource definitions reference the device name and the mount point, both must be kept the same on the two nodes.
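As a quick sanity check (a sketch, not part of the original article, assuming the DRBD resource is named mystore1 and the mount point is /drbd as used below), you can compare the definitions and mount directories on the two nodes:
[root@node1 ~]# drbdadm dump mystore1
[root@node1 ~]# ssh node2.ja.com 'drbdadm dump mystore1'
[root@node1 ~]# ls -ld /drbd; ssh node2.ja.com 'ls -ld /drbd'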
How do you define a master/slave resource?
A master/slave resource is a special kind of clone resource.
To become a clone resource, it must first be defined as a primitive resource; therefore, to define a master/slave resource, you first define a primitive and then promote it. To make sure the DRBD device can also be mounted on whichever node is master, a Filesystem resource needs to be defined as well. The relevant meta attributes are listed below (see the sketch after this list):
clone-max: the maximum number of clone instances that can run in the cluster; defaults to the number of nodes in the cluster;
clone-node-max: the maximum number of clone instances that can run on a single node; the default is 1;
notify: whether other clone instances are notified when one instance is successfully started or stopped; allowed values are false and true; the default is true;
globally-unique: whether each clone instance in the cluster gets a globally unique name, allowing it to perform a distinct function; the default is true;
ordered: whether clone instances are started in sequence (ordered) rather than in parallel; allowed values are false and true; the default is true;
interleave: changes the behaviour of ordering constraints involving the clone or master resource, so that an instance only has to wait for the peer instance on the same node rather than for all instances;
master-max: the maximum number of clone instances that can be promoted to master; the default is 1;
master-node-max: the maximum number of clone instances that can be promoted to master on a single node; the default is 1;
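A minimal sketch of how these meta attributes are used when a DRBD primitive is promoted to a master/slave resource (the names drbd_web and ms_drbd_web are placeholders; the definitions actually used in this article appear further below):
crm(live)configure# primitive drbd_web ocf:linbit:drbd params drbd_resource=mystore1 op monitor role=Master interval=30s timeout=20s op monitor role=Slave interval=60s timeout=20s
crm(live)configure# master ms_drbd_web drbd_web meta master-max=1 master-node-max=1 clone-max=2 clone-node-max=1 notify=true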
Check whether corosync, pacemaker, crmsh and pssh are installed on node1 and node2:
[root@node1 ~]# rpm -q corosync pacemaker crmsh pssh
corosync-1.4.1-17.el6.x86_64
pacemaker-1.1.10-14.el6.x86_64
crmsh-1.2.6-4.el6.x86_64
pssh-2.3.1-2.el6.x86_64
[root@node1 ~]# ssh node2.ja.com 'rpm -q corosync pacemaker crmsh pssh'
corosync-1.4.1-17.el6.x86_64
pacemaker-1.1.10-14.el6.x86_64
crmsh-1.2.6-4.el6.x86_64
pssh-2.3.1-2.el6.x86_64
If they are not installed, run yum -y install corosync pacemaker crmsh pssh
Once DRBD is configured as a cluster resource, it must no longer start on its own, so unmount it, demote it, stop the service and disable it at boot:
[root@node1 ~]# umount /drbd/
[root@node1 ~]# drbdadm secondary mystore1
[root@node1 ~]# service drbd stop
[root@node1 ~]# chkconfig drbd off
[root@node1 ~]# ssh node2.ja.com 'service drbd stop;chkconfig drbd off'
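To confirm that DRBD will no longer start at boot on either node, a quick check (not part of the original article):
[root@node1 ~]# chkconfig --list drbd
[root@node1 ~]# ssh node2.ja.com 'chkconfig --list drbd'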
[root@node1 ~]# cd /etc/corosync/
[root@node1 corosync]# cp corosync.conf.example corosync.conf
The modified corosync.conf looks like this:
[root@node1 corosync]# egrep -v '^$|^[[:space:]]*#' /etc/corosync/corosync.conf
compatibility: whitetank
totem {
    version: 2
    secauth: on
    threads: 0
    interface {
        ringnumber: 0
        bindnetaddr: 172.16.16.0
        mcastaddr: 226.94.16.15
        mcastport: 5405
        ttl: 1
    }
}
logging {
    fileline: off
    to_stderr: no
    to_logfile: yes
    to_syslog: no
    logfile: /var/log/cluster/corosync.log
    debug: off
    timestamp: on
    logger_subsys {
        subsys: AMF
        debug: off
    }
}
amf {
    mode: disabled
}
service {
    name: pacemaker
    ver: 0
}
aisexec {
    user: root
    group: root
}
When generating the authentication key with corosync-keygen, the entropy pool may not contain enough random data, so the command can take a long time to finish. Below we use a quick-and-dirty shortcut; avoid this in production, because it is not secure.
[root@node1 corosync]# mv /dev/random /dev/h
[root@node1 corosync]# ln /dev/urandom /dev/random
[root@node1 corosync]# corosync-keygen
Corosync Cluster Engine Authentication key generator.
Gathering 1024 bits for key from /dev/random.
Press keys on your keyboard to generate entropy.
Writing corosync key to /etc/corosync/authkey.
[root@node1 corosync]# rm -rf /dev/random
[root@node1 corosync]# mv /dev/h /dev/random
[root@node1 corosync]# ll authkey corosync.conf
-r-------- 1 root root 128 Apr 28 17:23 authkey
-rw-r--r-- 1 root root 708 Apr 28 13:51 corosync.conf
[root@node1 corosync]# scp -p authkey corosync.conf node2.ja.com:/etc/corosync/
Verify that the permissions of the authkey and the main configuration file on the peer node are unchanged:
[root@node1 corosync]# ssh node2.ja.com 'ls -l /etc/corosync/{authkey,corosync.conf}'
-r-------- 1 root root 128 Apr 28 17:23 /etc/corosync/authkey
-rw-r--r-- 1 root root 708 Apr 28 13:51 /etc/corosync/corosync.conf
Start the corosync service:
[root@node1 corosync]# service corosync start
[root@node1 corosync]# ssh node2.ja.com 'service corosync start'
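To verify that corosync and the pacemaker plugin started cleanly, you can grep the log file defined in corosync.conf (a hedged check; the exact messages depend on the corosync version):
[root@node1 corosync]# grep -e "Corosync Cluster Engine" -e "configuration file" /var/log/cluster/corosync.log
[root@node1 corosync]# grep TOTEM /var/log/cluster/corosync.log
[root@node1 corosync]# grep ERROR: /var/log/cluster/corosync.log
[root@node1 corosync]# grep pcmk_startup /var/log/cluster/corosync.log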
Everything is now up and running; check the cluster status on both nodes:
[root@node1 corosync]# crm status
Last updated: Mon Apr 28 18:20:41 2014
Last change: Mon Apr 28 18:16:01 2014 via crmd on node1.ja.com
Stack: classic openais (with plugin)
Current DC: node2.ja.com - partition with quorum
Version: 1.1.10-14.el6-368c726
2 Nodes configured, 2 expected votes
1 Resources configured
Online: [ node1.ja.com node2.ja.com ]
[root@node2 drbd.d]# crm status
Last updated: Mon Apr 28 06:19:36 2014
Last change: Mon Apr 28 18:16:01 2014 via crmd on node1.ja.com
Stack: classic openais (with plugin)
Current DC: node2.ja.com - partition with quorum
Version: 1.1.10-14.el6-368c726
2 Nodes configured, 2 expected votes
1 Resources configured
Online: [ node1.ja.com node2.ja.com ]
[root@node1 ~]# crm
crm(live)# configure
crm(live)configure# property stonith-enabled=false
crm(live)configure# property no-quorum-policy=ignore
crm(live)configure# rsc_defaults resource-stickiness=100
crm(live)configure# verify
crm(live)configure# commit
crm(live)configure# show
crm(live)resource# cd
crm(live)# exit
bye
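After the commit above, crm configure show should display roughly the following (a sketch reconstructed from the properties just set; the exact IDs and version strings may differ):
node node1.ja.com
node node2.ja.com
property $id="cib-bootstrap-options" \
    dc-version="1.1.10-14.el6-368c726" \
    cluster-infrastructure="classic openais (with plugin)" \
    expected-quorum-votes="2" \
    stonith-enabled="false" \
    no-quorum-policy="ignore"
rsc_defaults $id="rsc-options" \
    resource-stickiness="100"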
[root@node1 ~]# crm
crm(live)# ra
crm(live)ra# classes
lsb
ocf / heartbeat linbit pacemaker
service
stonith
crm(live)ra# list ocf heartbeat
CTDB Dummy Filesystem IPaddr IPaddr2 IPsrcaddr
LVM MailTo Route SendArp Squid VirtualDomain
Xinetd apache conntrackd dhcpd ethmonitor exportfs
mysql mysql-proxy named nfsserver nginx pgsql
postfix rsyncd rsyslog slapd symlink tomcat
crm(live)ra# list ocf pacemaker
ClusterMon Dummy HealthCPU HealthSMART Stateful SysInfo
SystemHealth controld ping pingd remote
crm(live)ra# list ocf linbit
drbd
crm(live)ra# meta ocf:linbit:drbd
crm(live)ra# cd
crm(live)# configure
crm(live)configure# primitive mysqlstore2 ocf:linbit:drbd params drbd_resource=mystore1 op monitor role=Master interval=30s timeout=20s op monitor role=Slave interval=60s timeout=20s op start timeout=240s op stop timeout=100s
crm(live)configure# verify
crm(live)configure# master ms_mysqlstore1 mysqlstore2 meta master-max=1 master-node-max=1 clone-max=2 clone-node-max=1 notify="True"
crm(live)configure# verify
crm(live)configure# commit
crm(live)configure# show
crm(live)configure# cd
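Once the master/slave resource is committed, crm status should show one node promoted to Master and the other running as Slave, roughly like the sketch below (which node is promoted is up to the cluster; here node1 happens to be the master, which is why the next step puts node1 into standby):
Master/Slave Set: ms_mysqlstore1 [mysqlstore2]
     Masters: [ node1.ja.com ]
     Slaves: [ node2.ja.com ]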
crm(live)# node standby node1.ja.com
At this point node2 is automatically promoted to master:
crm(live)# status
Bring node1 back online again; node1 comes back as the slave, while node2 remains the master:
crm(live)# node online node1.ja.com
Define a filesystem resource for the master node
# crm
crm(live)# configure
crm(live)configure# primitive WebFS ocf:heartbeat:Filesystem params device="/dev/drbd0" directory="/www" fstype="ext3"
crm(live)configure# colocation WebFS_on_MS_webdrbd inf: WebFS MS_Webdrbd:Master
crm(live)configure# order WebFS_after_MS_Webdrbd inf: MS_Webdrbd:promote WebFS:start
crm(live)configure# verify
crm(live)configure# commit
Check the running status of the resources in the cluster:
crm status
============
Last updated: Fri Jun 17 06:26:03 2011
Stack: openais
Current DC: node2.a.org - partition with quorum
Version: 1.0.11-1554a83db0d3c3e546cfd3aaff6af1184f79ee87
2 Nodes configured, 2 expected votes
2 Resources configured.
============
Online: [ node2.a.org node1.a.org ]
Master/Slave Set: MS_Webdrbd
Masters: [ node2.a.org ]
Slaves: [ node1.a.org ]
WebFS (ocf::heartbeat:Filesystem): Started node2.a.org
From the information above we can see that WebFS is running on node2.a.org, which is also the Primary node of the DRBD service. We now copy some files to /www (the mount point) on node2, and after the failover we will check whether these files exist under /www on node1.
# cp /etc/rc.d/rc.sysinit /www
Next we simulate a failure of node2 and check whether these resources fail over correctly to node1.
Run the following commands on Node2:
# crm node standby
# crm status
============
Last updated: Fri Jun 17 06:27:03 2011
Stack: openais
Current DC: node2.a.org - partition with quorum
Version: 1.0.11-1554a83db0d3c3e546cfd3aaff6af1184f79ee87
2 Nodes configured, 2 expected votes
2 Resources configured.
============
Node node2.a.org: standby
Online: [ node1.a.org ]
Master/Slave Set: MS_Webdrbd
Masters: [ node1.a.org ]
Stopped: [ webdrbd:0 ]
WebFS (ocf::heartbeat:Filesystem): Started node1.a.org
From the information above we can infer that node2 has switched to standby mode and its DRBD service has stopped, yet the failover completed successfully and all resources have been moved to node1.
The data written to /www while node2 was the Primary node is all present on node1.
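A quick way to confirm this on node1 (a hedged check, not part of the original article); the rc.sysinit file copied on node2 should be listed:
# ls /www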
Bring node2 back online:
# crm node online
[root@node2 ~]# crm status
============
Last updated: Fri Jun 17 06:30:05 2011
Stack: openais
Current DC: node2.a.org - partition with quorum
Version: 1.0.11-1554a83db0d3c3e546cfd3aaff6af1184f79ee87
2 Nodes configured, 2 expected votes
2 Resources configured.
============
Online: [ node2.a.org node1.a.org ]
Master/Slave Set: MS_Webdrbd
Masters: [ node1.a.org ]
Slaves: [ node2.a.org ]
WebFS (ocf::heartbeat:Filesystem): Started node1.a.org
This article comes from the blog "Enjoy the process"; please keep this attribution: http://1757513075.blog.51cto.com/8607255/1405843