Redis 3.2.3 Cluster Deployment
Deployment environment: three servers, each running two Redis instances:
172.168.1.80: ports 7000, 7001
172.168.1.81: ports 7002, 7003
172.168.1.82: ports 7004, 7005
1. Download the Redis package
# Download Redis on each of the three nodes
cd /opt
wget http://download.redis.io/releases/redis-3.2.3.tar.gz
tar zxvf redis-3.2.3.tar.gz
2. Compile and install
# Check whether make is available; if not, install the build tools first (skip this if they are already installed)
yum -y install gcc automake autoconf libtool make
# Install g++:
yum install gcc gcc-c++
# Build and install Redis
cd redis-3.2.3
make && make install
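# Optional sanity check (not part of the original steps): confirm the binaries are on PATH before continuing
redis-server --version
redis-cli --version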
3. Create directories for each node's configuration files
# Following the deployment plan above, run this on 172.168.1.80
mkdir -pv /opt/redis-3.2.3/cluster/700{0,1}
# run this on 172.168.1.81
mkdir -pv /opt/redis-3.2.3/cluster/700{2,3}
# run this on 172.168.1.82
mkdir -pv /opt/redis-3.2.3/cluster/700{4,5}
# Copy redis.conf into each instance directory, e.g. on 172.168.1.80:
cp /opt/redis-3.2.3/redis.conf /opt/redis-3.2.3/cluster/7000/
4. Edit redis.conf in each instance directory and change the following settings
daemonize yes                        # run Redis in the background
pidfile /var/run/redis_7000.pid      # match the instance port (7000-7005)
port 7000                            # the instance port (7000-7005)
cluster-enabled yes                  # enable cluster mode (remove the leading #)
cluster-config-file nodes_7000.conf  # cluster state file, generated automatically on first start; match the port
appendonly yes                       # enable AOF logging if needed; every write is appended to the log
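The same edits have to be repeated for every instance; since only the port changes, one way to avoid editing each file by hand is to derive the other config from the already edited one with sed. A minimal sketch, assuming the directory layout above (this command is not in the original steps):
# Example for 172.168.1.80: derive the 7001 config from the edited 7000 config
# (on 172.168.1.81 the ports are 7002/7003, on 172.168.1.82 they are 7004/7005)
sed 's/7000/7001/g' /opt/redis-3.2.3/cluster/7000/redis.conf > /opt/redis-3.2.3/cluster/7001/redis.conf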
5. Start the nodes
# Run on 80/81/82, once for each instance on that host
redis-server /opt/redis-3.2.3/cluster/7000/redis.conf   # replace 7000 with the directory of each instance
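Each host runs two instances, so the start command has to be issued twice per host. A small loop (an assumption based on the layout above, not from the original) does the same thing:
# Example for 172.168.1.80; use 7002 7003 on 172.168.1.81 and 7004 7005 on 172.168.1.82
for port in 7000 7001; do
  redis-server /opt/redis-3.2.3/cluster/${port}/redis.conf
done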
6. Check that Redis started
[root@localhost ]# ps aux | grep redis
root 28987 0.0 0.1 133524 7568 ? Ssl 16:41 0:00 redis-server 172.168.1.81:7002 [cluster]
root 28992 0.0 0.1 133524 7568 ? Ssl 16:41 0:00 redis-server 172.168.1.81:7003 [cluster]
root 28998 0.0 0.0 103252 812 pts/1 S+ 16:42 0:00 grep redis
# Check the listening ports
ss -tnlp | grep redis
7. Join the nodes into a cluster
# Run the following command to create the cluster from all six nodes
redis-trib.rb create --replicas 1 172.168.1.80:7000 172.168.1.80:7001 172.168.1.81:7002 172.168.1.81:7003 172.168.1.82:7004 172.168.1.82:7005
# where
--replicas 1 : automatically assign one slave to each master
# redis-trib.rb is written in Ruby, so a Ruby environment must be installed before running it
# Install it as follows:
yum -y install ruby ruby-devel rubygems rpm-build
gem install redis
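A quick check (not in the original) that the Ruby environment is usable before re-running redis-trib.rb:
ruby -v          # redis-trib.rb needs a working ruby interpreter
gem list redis   # the redis gem should appear in the list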
# Re-run the command; the following output means the cluster was created successfully.
# Remember to type yes when prompted partway through.
root># redis-trib.rb create --replicas 1 172.168.1.80:7000 172.168.1.80:7001 172.168.1.81:7002 172.168.1.81:7003 172.168.1.82:7004 172.168.1.82:7005
>>> Creating cluster
>>> Performing hash slots allocation on 6 nodes...
Using 3 masters:
172.168.1.80:7000
172.168.1.81:7002
172.168.1.82:7004
Adding replica 172.168.1.81:7003 to 172.168.1.80:7000
Adding replica 172.168.1.80:7001 to 172.168.1.81:7002
Adding replica 172.168.1.82:7005 to 172.168.1.82:7004
M: 24337aaa4932d1faea25e5104b0eb56fb7f45ac2 172.168.1.80:7000
   slots:0-5460 (5461 slots) master
S: 14bcba9c9019eabb2895b233b50c7f100f04b2af 172.168.1.80:7001
   replicates ffee2b7a207d67d8c05b66d0bf84acd7b2f44e92
M: ffee2b7a207d67d8c05b66d0bf84acd7b2f44e92 172.168.1.81:7002
   slots:5461-10922 (5462 slots) master
S: 41827f85c14f13c08664fe924e4536da18a3c8c8 172.168.1.81:7003
   replicates 24337aaa4932d1faea25e5104b0eb56fb7f45ac2
M: 2ce87c94d257a74b6adc7fd5b20cbdeb51b81b73 172.168.1.82:7004
   slots:10923-16383 (5461 slots) master
S: c1db158638a4929a17fa25a7f4a6e1e8493f192a 172.168.1.82:7005
   replicates 2ce87c94d257a74b6adc7fd5b20cbdeb51b81b73
Can I set the above configuration? (type 'yes' to accept): yes
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join.....
>>> Performing Cluster Check (using node 172.168.1.80:7000)
M: 24337aaa4932d1faea25e5104b0eb56fb7f45ac2 172.168.1.80:7000
   slots:0-5460 (5461 slots) master
M: 14bcba9c9019eabb2895b233b50c7f100f04b2af 172.168.1.80:7001
   slots: (0 slots) master
   replicates ffee2b7a207d67d8c05b66d0bf84acd7b2f44e92
M: ffee2b7a207d67d8c05b66d0bf84acd7b2f44e92 172.168.1.81:7002
   slots:5461-10922 (5462 slots) master
M: 41827f85c14f13c08664fe924e4536da18a3c8c8 172.168.1.81:7003
   slots: (0 slots) master
   replicates 24337aaa4932d1faea25e5104b0eb56fb7f45ac2
M: 2ce87c94d257a74b6adc7fd5b20cbdeb51b81b73 172.168.1.82:7004
   slots:10923-16383 (5461 slots) master
M: c1db158638a4929a17fa25a7f4a6e1e8493f192a 172.168.1.82:7005
   slots: (0 slots) master
   replicates 2ce87c94d257a74b6adc7fd5b20cbdeb51b81b73
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
8. Verify the cluster
redis-cli -h 172.168.1.81 -c -p 7002
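Once connected, a few commands are worth running to confirm the cluster state; the key name below is just an example, not from the original:
redis-cli -h 172.168.1.81 -c -p 7002 cluster info    # cluster_state should report ok
redis-cli -h 172.168.1.81 -c -p 7002 cluster nodes   # lists all six nodes with their roles and slot ranges
# With -c, writes are transparently redirected to the master that owns the key's slot
redis-cli -h 172.168.1.81 -c -p 7002 set testkey hello
redis-cli -h 172.168.1.81 -c -p 7002 get testkey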
The installation is now complete.