Setting up a MySQL Cluster
Reference: http://blog.csdn.net/zklth/article/details/7522677
I. Environment preparation:
Note: the firewall must be disabled on all nodes.
/etc/init.d/iptables status #check the firewall status
/etc/init.d/iptables stop #stop the firewall
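If the machines may be rebooted during the setup, you can also keep iptables from coming back at boot. This is an optional extra step (not part of the original write-up) for RHEL/CentOS-style systems that provide chkconfig:
chkconfig iptables off #do not start the iptables service at boot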
1. Software download:
ftp://mirror.switch.ch/mirror/mysql/Downloads/MySQL-Cluster-7.1/
Download mysql-cluster-gpl-7.1.19.tar.gz.
2. Hardware: five physical nodes are required (this was tried on virtual machines and did not work).
3. Software installation: perform the following steps on every node.
######################################################################################################
groupadd mysql #create the mysql group
useradd -g mysql mysql #create the mysql user
#create the directories
mkdir -p /opt/mysql-cluster
mkdir -p /opt/mysql-cluster/etc
mkdir -p /opt/mysql-cluster/tmp
mkdir -p /opt/mysql-cluster/data
#unpack the source package
tar zxvf mysql-cluster-gpl-7.1.19.tar.gz
#configure the build
cd mysql-cluster-gpl-7.1.19
./configure --prefix=/opt/mysql-cluster --with-charset=gbk --with-collation=gbk_chinese_ci --with-client-ldflags=-all-static --with-mysqld-ldflags=-all-static --enable-assembler --with-extra-charsets=complex --enable-thread-safe-client --with-big-tables --with-readline --with-ssl --with-embedded-server --enable-local-infile --with-unix-socket-path=/opt/mysql-cluster/tmp/mysql.sock --sysconfdir=/opt/mysql-cluster/etc --without-debug --with-mysqld-user=mysql --with-plugins=max
#build and install
make && make install
######################################################################################################
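A quick sanity check after installation can save trouble later. The commands below are only an assumed verification step, not part of the original procedure:
ls /opt/mysql-cluster/bin #mysql, mysqld_safe, ndb_mgm and the other client tools should be here
ls /opt/mysql-cluster/libexec #ndbd and ndb_mgmd are installed here
/opt/mysql-cluster/bin/mysql --version #should report the 5.1.56 ndb-7.1.19 build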
II. Management node configuration
1. Create the cluster configuration file config.ini on the management node
[root@sg204 mysql-cluster]# vi etc/config.ini
[NDBD DEFAULT]
NoOfReplicas= 2
[MYSQLD DEFAULT]
[NDB_MGMD DEFAULT]
DataDir=/opt/mysql-cluster/data #data directory
[NDB_MGMD]
HostName= 10.30.9.204
[NDBD]
HostName= 10.30.9.206
DataDir=/opt/mysql-cluster/data
[NDBD]
HostName= 10.30.9.207
DataDir=/opt/mysql-cluster/data
[MYSQLD]
HostName= 10.30.9.208
[MYSQLD]
HostName= 10.30.9.211
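The config.ini above sets only the bare minimum. On real hardware you will usually also want to size the data nodes' memory explicitly; DataMemory and IndexMemory are standard NDB parameters, but the values shown below are placeholder assumptions that must be tuned for your machines:
[NDBD DEFAULT]
NoOfReplicas= 2
DataMemory= 512M #memory reserved for row data (example value)
IndexMemory= 128M #memory reserved for hash indexes (example value)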
2. Start the ndb_mgmd service on the management node
[root@sg204 mysql-cluster]# libexec/ndb_mgmd -f etc/config.ini
MySQL Cluster Management Server mysql-5.1.56 ndb-7.1.19
The ndb_mgmd service is now running; you can check its process with ps:
[root@sg204 mysql-cluster]# ps -ef | grep ndb
root 23505 1 0 May11 ? 00:00:00 libexec/ndb_mgmd -f etc/config.ini
root 24692 24238 0 01:29 pts/1 00:00:00 grep ndb
Check the cluster status with the show command in the ndb_mgm client:
[root@sg204 mysql-cluster]# bin/ndb_mgm
-- NDB Cluster -- Management Client --
ndb_mgm> show
Connected to Management Server at: localhost:1186
Cluster Configuration
---------------------
[ndbd(NDB)] 2 node(s)
id=2 (not connected, accepting connect from 10.30.9.206)
id=3 (not connected, accepting connect from 10.30.9.207)
[ndb_mgmd(MGM)] 1 node(s)
id=1 @10.30.9.204 (mysql-5.1.56 ndb-7.1.19)
[mysqld(API)] 2 node(s)
id=4 (not connected, accepting connect from 10.30.9.208)
id=5 (not connected, accepting connect from 10.30.9.211)
The output shows that the two data nodes and the two SQL nodes have not started yet.
ndb_mgm> exit #quit the management client
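Instead of the interactive client, ndb_mgm can also run a single command and exit, which is convenient in scripts. For example, the same status output can be obtained with:
[root@sg204 mysql-cluster]# bin/ndb_mgm -e show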
III. Data node configuration
Perform steps 1 through 4 below on both data nodes.
1. Create the data dictionary
Specify the data directory and the user:
[root@sg206 mysql-cluster]# bin/mysql_install_db --user=mysql --datadir=/opt/mysql-cluster/data --basedir=/opt/mysql-cluster
WARNING: The host 'sg206' could not be looked up with resolveip.
This probably means that your libc libraries are not 100 % compatible
with this binary MySQL version. The MySQL daemon, mysqld, should work
normally with the exception that host name resolving will not work.
This means that you should use IP addresses instead of hostnames
when specifying MySQL privileges !
Installing MySQL system tables...
OK
Filling help tables...
OK
To start mysqld at boot time you have to copy
support-files/mysql.server to the right place for your system
PLEASE REMEMBER TO SET A PASSWORD FOR THE MySQL root USER !
To do so, start the server, then issue the following commands:
/opt/mysql-cluster/bin/mysqladmin -u root password 'new-password'
/opt/mysql-cluster/bin/mysqladmin -u root -h sg206 password 'new-password'
Alternatively you can run:
/opt/mysql-cluster/bin/mysql_secure_installation
which will also give you the option of removing the test
databases and anonymous user created by default. This is
strongly recommended for production servers.
See the manual for more instructions.
You can start the MySQL daemon with:
cd /opt/mysql-cluster ; /opt/mysql-cluster/bin/mysqld_safe &
You can test the MySQL daemon with mysql-test-run.pl
cd /opt/mysql-cluster/mysql-test ; perl mysql-test-run.pl
Please report any problems with the /opt/mysql-cluster/scripts/mysqlbug script!
2. Create the my.cnf configuration file
[root@sg207 mysql-cluster]# vim etc/my.cnf
[mysqld]
basedir= /opt/mysql-cluster
datadir= /opt/mysql-cluster/data
user= mysql
port= 3306
socket= /opt/mysql-cluster/tmp/mysql.sock
ndbcluster
# management node IP
ndb-connectstring=10.30.9.204
[mysql_cluster]
# management node IP
ndb-connectstring=10.30.9.204
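To confirm that mysqld will actually pick up these options, you can ask it to print the defaults it reads from this file. This is an assumed extra check, not part of the original steps:
bin/mysqld --defaults-file=/opt/mysql-cluster/etc/my.cnf --print-defaults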
3. Start the ndbd service
The first time ndbd is started it needs the --initial option; subsequent starts do not.
[root@sg207 mysql-cluster]# libexec/ndbd --initial
2012-05-12 03:53:15 [ndbd] INFO -- Angel connected to '10.30.9.204:1186'
2012-05-12 03:53:15 [ndbd] INFO -- Angel allocated nodeid: 2
4. Change the owner of everything under the mysql-cluster directory to the mysql user and mysql group
[root@sg207 mysql-cluster]# chown -R mysql:mysql *
Once the ndbd service is running on both data nodes, check the cluster status again from the management node (or from either data node);
the output below shows that ndbd has started on both data nodes.
################################################
[root@sg204 mysql-cluster]# bin/ndb_mgm
-- NDB Cluster -- Management Client --
ndb_mgm> show
Connected to Management Server at: localhost:1186
Cluster Configuration
---------------------
[ndbd(NDB)] 2 node(s)
id=2 @10.30.9.206 (mysql-5.1.56 ndb-7.1.19, Nodegroup: 0, Master)
id=3 @10.30.9.207 (mysql-5.1.56 ndb-7.1.19, Nodegroup: 0)
[ndb_mgmd(MGM)] 1 node(s)
id=1 @10.30.9.204 (mysql-5.1.56 ndb-7.1.19)
[mysqld(API)] 2 node(s)
id=4 (not connected, accepting connect from 10.30.9.208)
id=5 (not connected, accepting connect from 10.30.9.211)
###############################################
IV. SQL node configuration
On both SQL nodes, perform steps 1, 2, and 4 from Section III, followed by step 1 below.
1. Start the mysqld process
[root@sg208 mysql-cluster]# bin/mysqld_safe --defaults-file=/opt/mysql-cluster/etc/my.cnf --basedir=/opt/mysql-cluster --datadir=/opt/mysql-cluster/data --user=mysql &
If the startup fails, check the error log /opt/mysql-cluster/data/sg208.err.
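For example, to look at the most recent entries in that error log (the file name follows the host name, so it may differ on your machines):
tail -n 50 /opt/mysql-cluster/data/sg208.err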
Then check the cluster status again from the management node or from a data node:
################################################
[root@sg204 mysql-cluster]# bin/ndb_mgm
-- NDB Cluster -- Management Client --
ndb_mgm> show
Connected to Management Server at: localhost:1186
Cluster Configuration
---------------------
[ndbd(NDB)] 2 node(s)
id=2 @10.30.9.206 (mysql-5.1.56 ndb-7.1.19, Nodegroup: 0, Master)
id=3 @10.30.9.207 (mysql-5.1.56 ndb-7.1.19, Nodegroup: 0)
[ndb_mgmd(MGM)] 1 node(s)
id=1 @10.30.9.204 (mysql-5.1.56 ndb-7.1.19)
[mysqld(API)] 2 node(s)
id=4 @10.30.9.208 (mysql-5.1.56 ndb-7.1.19)
id=5 @10.30.9.211 (mysql-5.1.56 ndb-7.1.19)
################################################
All nodes are now up and running.
V. Testing data synchronization on the two SQL nodes
Log in to the database on the SQL node 10.30.9.208:
[root@sg208 mysql-cluster]#bin/mysqladmin -uroot password '111111' #set the root password to 111111
[root@sg208 mysql-cluster]#bin/mysql -uroot -p #connect to the MySQL cluster
Enter password: #enter the password 111111
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 4
Server version: 5.1.56-ndb-7.1.19 Source distribution
Copyright (c) 2000, 2010, Oracle and/or its affiliates. All rights reserved.
This software comes with ABSOLUTELY NO WARRANTY. This is free software,
and you are welcome to modify and redistribute it under the GPL v2 license
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
mysql> create database orcl; #create the orcl database
mysql> show databases;
+--------------------+
| Database |
+--------------------+
| information_schema |
| mysql |
| ndbinfo |
| orcl |
| test |
+--------------------+
5 rows in set (0.00 sec)
The orcl database has now been created on 10.30.9.208.
Log in to the database on 10.30.9.211 in the same way:
mysql> show databases;
+--------------------+
| Database |
+--------------------+
| information_schema |
| mysql |
| ndbinfo |
| orcl |
| test |
+--------------------+
5 rows in set (0.00 sec)
You can see that the orcl database also exists on 10.30.9.211.
Create a table on 10.30.9.208 and insert some data:
use orcl;
create table name(
id int(4) auto_increment not null primary key,
xm char(8),
xb char(2),
csny date) engine=ndbcluster; # engine=ndbcluster is essential; without it the table data will not be synchronized.
insert into name values(NULL,'jack','m','1900-01-01');
insert into name values(NULL,'rose','f','1904-01-01');
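Before testing the synchronization you can double-check that the table really uses the NDB storage engine; this is an extra verification step, not part of the original walk-through:
mysql> show create table name\G #the output should show ENGINE=ndbcluster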
Now, on 10.30.9.211:
mysql> use orcl
Database changed
mysql> select * from name;
+----+------+------+------------+
| id | xm | xb | csny |
+----+------+------+------------+
| 1 | jack | m | 1900-01-01 |
| 2 | rose | f | 1904-01-01 |
+----+------+------+------------+
2 rows in set (0.00 sec)
The data has been synchronized between the two SQL nodes.
Shutting down the cluster:
Stopping and starting the cluster nodes
The nodes must be started and stopped in order: start the management node first, then the data nodes, and finally the MySQL (SQL) nodes. To shut down, stop the MySQL nodes first, then shut down the management node and the data nodes through the management node.
Start:
/usr/bin/ndb_mgmd -f /usr/local/mysql/mysql-cluster/config.ini (start the management node)
/usr/bin/ndbd --initial (start a data node; --initial is only needed on the first start or after certain configuration changes)
/etc/rc.d/init.d/mysqld start (start a MySQL node)
Stop:
bin/mysqladmin -u root -p shutdown
ndb_mgm -e shutdown
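If only a single data node needs to be restarted rather than the whole cluster, ndb_mgm can also stop one node by its id (the ids are the ones reported by show). A sketch assuming node 2:
bin/ndb_mgm -e "2 stop"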
Starting the cluster:
1. Start the management node:
bin/ndb_mgmd -f /opt/mysql-cluster/etc/config.ini --reload --configdir=/opt/mysql-cluster #after editing the configuration file, --reload is required for the changes to take effect
2. Start the data nodes:
bin/ndbd --connect-string="nodeid=2;host=172.16.48.204:1186" #each data node's nodeid can be seen with show on the management node
3. Start the SQL nodes:
bin/mysqld_safe --defaults-file=/opt/mysql-cluster/etc/my.cnf --basedir=/opt/mysql-cluster --datadir=/opt/mysql-cluster/data --user=mysql &
Appendix:
For a detailed config.ini reference see: http://www.linuxidc.com/Linux/2010-06/26640p2.htm
#scp -P 4400 -r root@10.0.24.103:/home2/backup/ /home/mover00/shadow_bak/sites/
Copies the remote (10.0.24.103) directory /home2/backup/ to the local /home/mover00/shadow_bak/sites/
#scp -P 4400 -r /home2/backup/ root@10.0.24.99:/home/mover00/shadow_bak/sites/
Copies the local /home2/backup/ directory to /home/mover00/shadow_bak/sites/ on the remote host (10.0.24.99)