
Heartbeat and drbd services for Linux clusters


We mainly use two kinds of cluster systems:

High Availability (HA) clusters, implemented with Heartbeat; also called "dual-machine hot standby", "dual-machine mutual standby", or simply "dual-machine" setups.
Load balancing clusters (Load Balance Cluster), implemented with Linux Virtual Server (LVS).

How heartbeat (Linux-HA) works: heartbeat has two core parts, heartbeat monitoring and resource takeover. Heartbeat monitoring can run over network links and serial ports, and redundant links are supported. The nodes send messages to each other to report their current state; if no message is received from the peer within the configured time, the peer is considered dead, and the resource takeover module is started to take over the resources and services that were running on the failed node.
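
Once the service is configured and started (see below), the heartbeat layer itself can be checked with cl_status, which ships with the heartbeat package (the node name here is just this lab's example):

cl_status hbstatus                          # is the local heartbeat daemon running?
cl_status listnodes                         # list the nodes defined in ha.cf
cl_status nodestatus server1.example.com    # is this node currently seen as active?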

 

Packages to install:

heartbeat-3.0.4-2.el6.x86_64.rpm        

heartbeat-libs-3.0.4-2.el6.x86_64.rpm

heartbeat-devel-3.0.4-2.el6.x86_64.rpm  

ldirectord-3.9.5-3.1.x86_64.rpm

 

Steps:

1. Copy the rpm packages into a directory on both server1 and server2.

On server1, change into the directory holding the rpm packages and install them:

yum install * -y    ## install all of the rpm packages

[root@server1 heartbeat]# cd /etc/ha.d/

[root@server1 ha.d]# cp /usr/share/doc/heartbeat-3.0.4/{authkeys,ha.cf,haresources} .

[root@server1 ha.d]# ls

authkeys  ha.cf  harc  haresources  rc.d  README.config  resource.d  shellfuncs

[root@server1 ha.d]# vim ha.cf

48 keepalive 2

56 deadtime 30

71 initdead 60

76 udpport 12345

91 bcast   eth0            # Linux

113 #mcast eth0 225.0.0.1 694 1 0

121 #ucast eth0 1

157 auto_failback on

211 node    server1.example.com

212 node    server2.example.com

220 ping 172.25.50.250

253 respawn hacluster /usr/lib64/heartbeat/ipfail

259 apiauth ipfail gid=haclient uid=hacluster
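
Roughly what the edited parameters mean (these annotations are mine; the stock ha.cf carries fuller comments):

keepalive 2          # send a heartbeat every 2 seconds
deadtime 30          # declare the peer dead after 30 seconds of silence
initdead 60          # extra grace period at startup, should be at least twice deadtime
udpport 12345        # UDP port used for the heartbeats (default is 694)
bcast eth0           # broadcast the heartbeats on eth0
auto_failback on     # move resources back when the preferred node returns
node ...             # one entry per cluster node, must match `uname -n`
ping 172.25.50.250   # ping node used by ipfail to judge network connectivity
respawn / apiauth    # run the ipfail plugin as user hacluster / group haclient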

 

[root@server1 ha.d]# vim authkeys

 23 auth 1

 24 1 crc

 25 #2 sha1 HI!

 26 #3 md5 Hello!
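
Note that crc only detects packet corruption and provides no real authentication; on a shared network you would normally use sha1 with a shared secret instead, following the commented examples in the file (the secret below is just a placeholder):

auth 2
2 sha1 SomeSharedSecret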

 

[root@server1 ha.d]# vim haresources

150 server1.example.com IPaddr::172.25.50.100/24/eth0 httpd
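
The haresources format is "<preferred node> <resource> <resource> ...". Broken down, the line above means (my annotation):

# server1.example.com               preferred node for this resource group
# IPaddr::172.25.50.100/24/eth0     /etc/ha.d/resource.d/IPaddr adds VIP 172.25.50.100/24 on eth0
# httpd                             the httpd init script is started on whichever node holds the group
# Resources are started left to right and stopped in the reverse order.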

 

[root@server1 ha.d]# chmod 600 authkeys

[root@server1 ha.d]# ll -d authkeys

-rw------- 1 root root 643 Feb 17 15:06 authkeys

[root@server1 ha.d]# scp ha.cf haresources authkeys 172.25.50.20:/etc/ha.d/

root@172.25.50.20's password:

ha.cf                                                100%   10KB  10.3KB/s   00:00    

haresources                                          100% 5961     5.8KB/s   00:00    

authkeys                                             100%  643     0.6KB/s   00:00    

[root@server1 ha.d]# /etc/init.d/heartbeat start

Starting High-Availability services: INFO:  Resource is stopped

Done.

 

Start the heartbeat service on server2:

[root@server2 ha.d]# /etc/init.d/heartbeat start

Starting High-Availability services: INFO:  Resource is stopped

Done.

 

Test:

On server1, in the /var/www/html/ directory:

[root@server1 html]# vim index.html

www.server1.example

server2

[root@server2 ha.d]# cd /var/www/html/

[root@server2 html]# ls

[root@server2 html]# vim index.html

www.server2.example.com

 

server1

[root@server1 html]# ip addr show

2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000

    link/ether 52:54:00:06:13:fa brd ff:ff:ff:ff:ff:ff

    inet 172.25.50.10/24 brd 172.25.50.255 scope global eth0

    inet 172.25.50.100/24 brd 172.25.50.255 scope global secondary eth0

    inet6 fe80::5054:ff:fe06:13fa/64 scope link

       valid_lft forever preferred_lft forever

This shows that the http service (and the VIP) is active on server1.

On the physical host:

[root@real50 Desktop]# curl 172.25.50.100

www.server1.example

## the content returned comes from the default http document root of whichever host currently holds the .100 VIP

 

On server1: stop the heartbeat service

 

[root@server1 html]# /etc/init.d/heartbeat stop

Stopping High-Availability services: Done.

At this point the .100 VIP moves over to server2.

 

On the physical host:

[root@real50 Desktop]# curl 172.25.50.100

www.server2.example.com

Once the heartbeat service on server1 is started again, the .100 VIP fails back to server1 (because auto_failback is on).
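
To watch the takeover and failback as they happen, you can follow the heartbeat messages in the system log (assuming the default logfacility local0 in ha.cf, they end up in /var/log/messages):

tail -f /var/log/messages | grep -i heartbeat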

 

 

Test 2: stop the httpd service

(with the heartbeat service still running)

[root@server1 ha.d]# /etc/init.d/httpd stop

Stopping httpd:                                            [  OK  ]

 

[root@server1 ha.d]# ip addr show

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN

    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00

    inet 127.0.0.1/8 scope host lo

    inet6 ::1/128 scope host

       valid_lft forever preferred_lft forever

2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000

    link/ether 52:54:00:51:aa:19 brd ff:ff:ff:ff:ff:ff

    inet 172.25.50.10/24 brd 172.25.50.255 scope global eth0

    inet 172.25.50.100/24 brd 172.25.50.255 scope global secondary eth0

    inet6 fe80::5054:ff:fe51:aa19/64 scope link

       valid_lft forever preferred_lft forever

[root@real Desktop]# curl 172.25.50.100

curl: (7) Failed connect to 172.25.50.100:80; Connection refused

[root@real Desktop]# arp -an | grep 172.25.50.100

? (172.25.50.100) at 52:54:00:51:aa:19 [ether] on br0

The VIP is still on server1 and the request is simply refused: in this v1 (haresources) mode heartbeat only monitors node liveness, not the services themselves, so stopping httpd does not trigger a failover. Service-level monitoring needs something like ldirectord or pacemaker.

 

############## drbd service ####################

First, build the rpm packages.

On server1:

Extract the drbd-8.4.2.tar.gz tarball that was copied to server1 beforehand:

[root@server1 mnt]# tar zxf drbd-8.4.2.tar.gz

[root@server1 mnt]# cd drbd-8.4.2

[root@server1 drbd-8.4.2]# yum install gcc -y

[root@server1 drbd-8.4.2]# yum install flex -y

[root@server1 drbd-8.4.2]# yum install rpm-build -y

[root@server1 drbd-8.4.2]# yum install kernel-devel -y

[root@server1 drbd-8.4.2]# ./configure --enable-spec

This generates the drbd.spec file.

[root@server1 drbd-8.4.2]# ./configure --enable-spec --with-km

This generates the drbd-km.spec file.

[root@server1 drbd-8.4.2]# cp /mnt/drbd-8.4.2.tar.gz /root/rpmbuild/SOURCES/

[root@server1 drbd-8.4.2]# rpmbuild -bb drbd.spec    # build the packages defined in drbd.spec

[root@server1 drbd-8.4.2]# rpmbuild -bb drbd-km.spec    # build the kernel-module package defined in drbd-km.spec

[root@server1 drbd-8.4.2]# cd /root/rpmbuild/RPMS/x86_64/

[root@server1 x86_64]# ls

drbd-8.4.2-2.el6.x86_64.rpm

drbd-bash-completion-8.4.2-2.el6.x86_64.rpm

drbd-heartbeat-8.4.2-2.el6.x86_64.rpm

drbd-km-2.6.32_431.el6.x86_64-8.4.2-2.el6.x86_64.rpm

drbd-pacemaker-8.4.2-2.el6.x86_64.rpm

drbd-udev-8.4.2-2.el6.x86_64.rpm

drbd-utils-8.4.2-2.el6.x86_64.rpm

drbd-xen-8.4.2-2.el6.x86_64.rpm

 

[root@server1 x86_64]# rpm -ivh *

[root@server1 x86_64]# scp * 172.25.50.20:    # then on server2 run: cd; rpm -ivh *

After installing these packages we can move on to the experiments below.
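
A quick sanity check that the userland tools and the kernel module are in place (my own addition, not part of the original notes):

rpm -qa | grep drbd          # the drbd packages installed above should be listed
modinfo drbd | head -n 3     # the drbd-km module should match the running kernel (`uname -r`)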

 

############ storage #####

Add a 4G virtual disk to both server1 and server2.

Run fdisk -l on server1 and server2 to find the device path of the new disk; here it is /dev/vda.

[root@server1 drbd.d]# fdisk -l

Disk /dev/vda: 4294 MB, 4294967296 bytes

16 heads, 63 sectors/track, 8322 cylinders

Units = cylinders of 1008 * 512 = 516096 bytes

Sector size (logical/physical): 512 bytes / 512 bytes

I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk identifier: 0x00000000

 

Change into the drbd configuration directory: cd /etc/drbd.d/

[root@server1 drbd.d]# ls

global_common.conf

 

[root@server1 drbd.d]# vim lyitx.res

resource lyitx {
        meta-disk internal;
        device /dev/drbd1;
        syncer {
                verify-alg sha1;
        }
        on server1.example.com {
                disk /dev/vda;
                address 172.25.50.10:7789;
        }
        on server2.example.com {
                disk /dev/vda;
                address 172.25.50.20:7789;
        }
}

[root@server1 drbd.d]# scp lyitx.res 172.25.50.20:/etc/drbd.d/    # copy it to the same directory on server2
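
Before creating the metadata it is worth checking that drbd parses the resource file cleanly on both nodes; drbdadm dump prints the parsed configuration back (or an error if the syntax is wrong):

[root@server1 drbd.d]# drbdadm dump lyitx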

[root@server1 drbd.d]# drbdadm create-md lyitx    # run on both server1 and server2

Writing meta data...

initializing activity log

NOT initializing bitmap

New drbd meta data block successfully created.

[root@server1 drbd.d]# /etc/init.d/drbd start

Starting DRBD resources: [

     create res: lyitx

   prepare disk: lyitx

    adjust disk: lyitx

     adjust net: lyitx

]

..........

***************************************************************

 DRBD's startup script waits for the peer node(s) to appear.

 - In case this node was already a degraded cluster before the

   reboot the timeout is 0 seconds. [degr-wfc-timeout]

 - If the peer was available before the reboot the timeout will

   expire after 0 seconds. [wfc-timeout]

   (These values are for resource 'lyitx'; 0 sec -> wait forever)

 To abort waiting enter 'yes' [  14]:

[root@server1 drbd.d]# cat /proc/drbd    # check the drbd connection and sync state

version: 8.4.2 (api:1/proto:86-101)

GIT-hash: 7ad5f850d711223713d6dcadc3dd48860321070c build by root@server1.example.com, 2017-02-17 16:28:52

 

 1: cs:Connected ro:Secondary/Secondary ds:Inconsistent/Inconsistent C r-----

ns:0 nr:0 dw:0 dr:0 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:4194140

[root@server1 drbd.d]# drbdadm primary lyitx --force    # force this node into the primary role (needed for the very first sync)

[root@server1 drbd.d]# cat /proc/drbd

 cs:SyncSource ro:Primary/Secondary ds:UpToDate/Inconsistent C r---n-
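
While the initial sync runs you can watch its progress; both commands below are just convenience views of the same state (drbd-overview ships with drbd-utils, availability may vary by build):

watch -n1 cat /proc/drbd     # shows the sync percentage; ds: becomes UpToDate/UpToDate when finished
drbd-overview                # one-line summary per resource, if installed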

[root@server1 drbd.d]# mkfs.ext4 /dev/drbd1    # create an ext4 filesystem, otherwise the device cannot be mounted

[root@server1 drbd.d]# cd /mnt/

[root@server1 mnt]# mount /dev/drbd1 /mnt    # mount it

[root@server1 /]# cd /mnt

[root@server1 mnt]# ls

lost+found

[root@server1 mnt]# vim index.html    # write a test page

[root@server1 mnt]# ls

index.html  lost+found

[root@server1 ~]# cd

[root@server1 ~]# umount /mnt    # unmount

[root@server1 ~]# drbdadm secondary lyitx    # demote this node to secondary; only then can the other node be promoted to primary

 

On server2:

[root@server2 drbd.d]# drbdadm primary lyitx    # promote to primary

[root@server2 drbd.d]# cat /proc/drbd

1: cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate C r---

[root@server2 drbd.d]# mount /dev/drbd1 /mnt    # mount; no need to format again here

[root@server2 drbd.d]# cd /mnt

[root@server2 mnt]# ls

index.html  lost+found

[root@server2 mnt]# cat index.html

server1.example.com

Seeing the test page written on server1 from server2 shows that the DRBD replication works.

 

 

 

#######heartbeat+mysql###################

Combine the heartbeat service with the mysql service, using heartbeat to provide active/standby failover for mysql.

Steps:

Stop the heartbeat service on both virtual machines.

[root@server1 mnt]# yum install mysql-server -y

[root@server1 mnt]# drbdadm primary lyitx

[root@server1 mnt]# mount /dev/drbd1 /mnt

[root@server1 mnt]# cd /var/lib/mysql/

[root@server1 mysql]# /etc/init.d/mysqld start    # start once so mysqld initializes its data directory

[root@server1 mysql]# /etc/init.d/mysqld stop     # stop again before copying the data files

[root@server1 mysql]# cp -r * /mnt/

[root@server1 mysql]# cd /mnt/

[root@server1 mnt]# ls

ibdata1  ib_logfile0  ib_logfile1  index.html  lost+found  mysql  mysql.sock  test

[root@server1 mnt]# rm -fr mysql.sock    # starting mysql creates the mysql.sock socket file; delete it after the service is stopped (it is recreated the next time mysql starts)

 

[root@server1 mnt]# cd

[root@server1 ~]# umount /mnt/    ## mysql must be stopped at this point, otherwise the filesystem cannot be unmounted

[root@server1 ~]# mount /dev/drbd1 /var/lib/mysql/

[root@server1 ~]# chown mysql.mysql /var/lib/mysql/ -R

[root@server1 mysql]# /etc/init.d/mysqld start

Starting mysqld:                                           [  OK  ]

[root@server1 mysql]# cd

[root@server1 ~]# /etc/init.d/mysqld stop

Stopping mysqld:                                           [  OK  ]

[root@server1 ~]# umount /var/lib/mysql/

[root@server1 ~]# drbdadm secondary lyitx

 

server2

[root@server2 /]# yum install mysql-server -y

[root@server2 /]# drbdadm primary lyitx

[root@server2 /]# mount /dev/drbd1 /var/lib/mysql/

[root@server2 /]# df    ## /dev/drbd1 is now visible, mounted on /var/lib/mysql

Filesystem                   1K-blocks    Used Available Use% Mounted on

/dev/mapper/VolGroup-lv_root  19134332 1106972  17055380   7% /

tmpfs                           510200       0    510200   0% /dev/shm

/dev/sda1                       495844   33458    436786   8% /boot

/dev/drbd1                     4128284   95192   3823388   3% /var/lib/mysql

[root@server2 /]# /etc/init.d/mysqld start

[root@server2 /]# /etc/init.d/mysqld stop

[root@server2 /]# umount /var/lib/mysql/

[root@server2 /]# drbdadm secondary lyitx

 

[root@server1 ~]# vim /etc/ha.d/haresources

Change the last line to:

server1.example.com IPaddr::172.25.50.100/24/eth0 drbddisk::lyitx Filesystem::/dev/drbd1::/var/lib/mysql::ext4 mysqld
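
For reference, the resources on this line are (my annotation):

# server1.example.com                              preferred node for the resource group
# IPaddr::172.25.50.100/24/eth0                    adds the VIP on eth0
# drbddisk::lyitx                                  promotes the drbd resource "lyitx" to primary
# Filesystem::/dev/drbd1::/var/lib/mysql::ext4     mounts /dev/drbd1 on /var/lib/mysql as ext4
# mysqld                                           starts /etc/init.d/mysqld
# As before, resources start left to right on takeover and stop in reverse order on release.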

[root@server1 ~]# scp /etc/ha.d/haresources 172.25.50.20:/etc/ha.d/

root@172.25.50.20's password:

haresources                                                       100% 6023     5.9KB/s   00:00    

[root@server1 ~]# /etc/init.d/heartbeat start

Starting High-Availability services: INFO:  Resource is stopped

Done.

[root@server2 ~]# /etc/init.d/heartbeat start

Starting High-Availability services: INFO:  Resource is stopped

Done.

 

Test: on server1 run /etc/init.d/heartbeat stop, then run df on server2; if /dev/drbd1 shows up mounted on /var/lib/mysql there, the failover works.
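
A slightly fuller check of the same failover (my own recap; it assumes the default empty mysql root password):

# on server1
[root@server1 ~]# /etc/init.d/heartbeat stop
# on server2
[root@server2 ~]# df -h /var/lib/mysql                      # /dev/drbd1 should now be mounted here
[root@server2 ~]# ip addr show eth0 | grep 172.25.50.100    # the VIP should have moved over
[root@server2 ~]# mysql -e 'SHOW DATABASES;'                # mysqld is running on the drbd-backed datadir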

