An investigation of redis-cluster expansion and its impact on client reads and writes during expansion
So far I had only ever done dynamic slot migration (essentially the same thing as expansion) on codis, and only in a test environment; I had never expanded or migrated a live redis-cluster. I had long meant to test it but never got around to it, and then today a request landed from the product team: expand the redis-cluster behind one of our production business lines... Which rather proves the Buddhist saying: the causes you sow will sooner or later ripen into your fruit. (Not that I really know; I'm just riffing.)
So let's take this opportunity to get to the bottom of it.
Test environment:
Host type: virtual machine; OS: CentOS 6.5 x86_64
Spec: 1×1 CPU (one single-core vCPU), 4 GB RAM
1 Building the cluster
(detailed steps omitted)
Run six redis instances (ports 6380-6385) on a single host to form a small cluster, like so:
[root@salt-master ~]# redis-cli -c -p 6380 cluster nodes
ffb86e1168215b138c5b7a81ad92e44ca7095b54 192.168.11.3:6380 myself,master - 0 0 1 connected 0-5460
88e0cdfb2794816cb9a1ca39b7ad640656d5ef85 192.168.11.3:6382 master - 0 1487335437581 3 connected 10923-16383
a7a1020c36b8b0277c41ac7fc649ed9e81fa1448 192.168.11.3:6384 slave 8b20dd24f0aa2ba05754c4db016e0a29299df24e 0 1487335430478 5 connected
8de9d0da9dfd0db0553c68386cbccdcb58365123 192.168.11.3:6383 slave ffb86e1168215b138c5b7a81ad92e44ca7095b54 0 1487335436632 4 connected
e8e6d0e32e0f2ee918795e3a232b9c768b671f39 192.168.11.3:6385 slave 88e0cdfb2794816cb9a1ca39b7ad640656d5ef85 0 1487335435563 6 connected
8b20dd24f0aa2ba05754c4db016e0a29299df24e 192.168.11.3:6381 master - 0 1487335433541 2 connected 5461-10922
The output shows the cluster has 3 master nodes, each with 1 replica (slave), along with each node's IP, port, ID, and the assignment of the 16384 slots.
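The slot assignment can also be tallied programmatically. A small sketch using plain redis-py that parses the CLUSTER NODES text and counts slots per master (node address as in this test):

# Sketch: count slots per master by parsing CLUSTER NODES.
import redis

r = redis.StrictRedis(host="192.168.11.3", port=6380)
for line in r.execute_command("CLUSTER", "NODES").splitlines():
    fields = line.split()
    if "master" not in fields[2]:     # flags field; skip slaves
        continue
    total = 0
    for rng in fields[8:]:            # slot ranges, e.g. "0-5460" or "5500"
        if rng.startswith("["):       # skip importing/migrating markers
            continue
        lo, _, hi = rng.partition("-")
        total += int(hi or lo) - int(lo) + 1
    print fields[1], total            # e.g. "192.168.11.3:6380 5461"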
2 Seeding test data
To test later whether expansion affects reads and writes, first write some data into the cluster:
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import time

from rediscluster import StrictRedisCluster

# Requires at least one node for cluster discovery. Multiple nodes is recommended.
startup_nodes = [{"host": "192.168.11.3", "port": "6380"},
                 {"host": "192.168.11.3", "port": "6381"},
                 {"host": "192.168.11.3", "port": "6382"},
                 {"host": "192.168.11.3", "port": "6383"},
                 {"host": "192.168.11.3", "port": "6384"},
                 {"host": "192.168.11.3", "port": "6385"}]

rc = StrictRedisCluster(startup_nodes=startup_nodes, decode_responses=True)

pre_time = time.time()
for i in xrange(100000):
    key = "key_%s" % i
    value = "value_%s" % i
    rc.set(key, value)
aft_time = time.time()
print aft_time - pre_time  # wall-clock seconds for the 100k SETs
(PS: on my little cluster, setting 100k keys took 21 seconds; compare against your own setup if you're curious.)
A note on discovery:
Each redis-cluster node opens an extra port (by default its client port + 10000) as the cluster bus it uses to talk to the other nodes (in an N-node cluster, every node maintains N-1 inbound and N-1 outbound connections to its peers). Since the full topology is exchanged over this bus, connecting to any one node of the cluster is enough to learn about all of them.
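A quick sketch of that property, with a single startup node (same library as above):

# Sketch: one startup node suffices for discovery, because that node
# already knows the whole topology via the cluster bus (client port + 10000).
from rediscluster import StrictRedisCluster

rc = StrictRedisCluster(startup_nodes=[{"host": "192.168.11.3", "port": "6380"}],
                        decode_responses=True)
print rc.get("key_0")   # routed to whichever node owns the key's slot; prints "value_0"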
3 Adding new resources
Still on the same virtual machine, start two more redis instances to serve as the new capacity, on ports 6386 and 6387:
[root@salt-master conf]# /usr/local/redis-server/bin/redis-server /usr/local/redis-server/conf/redis6386.conf
[root@salt-master conf]# /usr/local/redis-server/bin/redis-server /usr/local/redis-server/conf/redis6387.conf
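The config files themselves aren't shown here; beyond the port, an instance must at least run in cluster mode to be able to join. A minimal sketch of what redis6386.conf would contain (the file names below are assumptions):

port 6386
daemonize yes
cluster-enabled yes                   # without this the instance cannot join a cluster
cluster-config-file nodes-6386.conf   # maintained by redis itself; must be unique per instance
cluster-node-timeout 15000            # ms of unreachability before a node is deemed failing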
4 Expanding the cluster
Add the two new instances to the cluster. At this point the members merely know of one another; no slots have been assigned to the newcomers (the slot is the smallest unit by which the cluster distributes data), so the new nodes hold no data yet.
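For context, a key maps to one of the 16384 slots via CRC16(key) mod 16384, honoring {hash tags}, per the Redis cluster spec; a Python sketch of the mapping:

# Key -> slot mapping from the Redis cluster spec:
# CRC16 (XModem variant, poly 0x1021, init 0) of the key, mod 16384.
def crc16(data):
    crc = 0
    for ch in data:
        crc ^= ord(ch) << 8
        for _ in xrange(8):
            if crc & 0x8000:
                crc = ((crc << 1) ^ 0x1021) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc

def keyslot(key):
    start = key.find("{")
    if start != -1:
        end = key.find("}", start + 1)
        if end > start + 1:          # non-empty {tag}: only the tag is hashed
            key = key[start + 1:end]
    return crc16(key) % 16384

print keyslot("key_0")   # the slot one of our test keys lives in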
4.1 Adding the master node
[root@salt-master conf]# /usr/local/redis-server/bin/redis-trib.rb add-node 192.168.11.3:6386 192.168.11.3:6380
>>> Adding node 192.168.11.3:6386 to cluster 192.168.11.3:6380
>>> Performing Cluster Check (using node 192.168.11.3:6380)
M: ffb86e1168215b138c5b7a81ad92e44ca7095b54 192.168.11.3:6380
   slots:0-5460 (5461 slots) master
   1 additional replica(s)
M: 88e0cdfb2794816cb9a1ca39b7ad640656d5ef85 192.168.11.3:6382
   slots:10923-16383 (5461 slots) master
   1 additional replica(s)
S: a7a1020c36b8b0277c41ac7fc649ed9e81fa1448 192.168.11.3:6384
   slots: (0 slots) slave
   replicates 8b20dd24f0aa2ba05754c4db016e0a29299df24e
S: 8de9d0da9dfd0db0553c68386cbccdcb58365123 192.168.11.3:6383
   slots: (0 slots) slave
   replicates ffb86e1168215b138c5b7a81ad92e44ca7095b54
S: e8e6d0e32e0f2ee918795e3a232b9c768b671f39 192.168.11.3:6385
   slots: (0 slots) slave
   replicates 88e0cdfb2794816cb9a1ca39b7ad640656d5ef85
M: 8b20dd24f0aa2ba05754c4db016e0a29299df24e 192.168.11.3:6381
   slots:5461-10922 (5462 slots) master
   1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
>>> Send CLUSTER MEET to node 192.168.11.3:6386 to make it join the cluster.
[OK] New node added correctly.
Explanation:
redis-trib.rb add-node 192.168.11.3:6386 192.168.11.3:6380
reads as: the tool, the add-node action, the new node's IP:port, and the IP:port of any node already in the cluster (used for cluster discovery).
A slave is added the same way, to give the freshly added master a replica.
First look up the new node's ID:
[root@salt-master conf]# redis-cli -c -p 6380 CLUSTER nodes | grep 6386
301b60cdb455b9ae27b7b562524c0d039e640815 192.168.11.3:6386 master - 0 1487342302506 0 connected
4.2 Adding the slave node:
redis-trib.rb add-node --slave --master-id 301b60cdb455b9ae27b7b562524c0d039e640815 192.168.11.3:6387 192.168.11.3:6380
Check the state of the whole cluster:
[root@salt-master conf]# redis-cli -c -p 6380 CLUSTER nodes
301b60cdb455b9ae27b7b562524c0d039e640815 192.168.11.3:6386 master - 0 1487342439807 0 connected
ffb86e1168215b138c5b7a81ad92e44ca7095b54 192.168.11.3:6380 myself,master - 0 0 1 connected 0-5460
b34e53b4b82fb11043f73819179524d49ce75ead 192.168.11.3:6387 slave 301b60cdb455b9ae27b7b562524c0d039e640815 0 1487342438797 0 connected
88e0cdfb2794816cb9a1ca39b7ad640656d5ef85 192.168.11.3:6382 master - 0 1487342441826 3 connected 10923-16383
a7a1020c36b8b0277c41ac7fc649ed9e81fa1448 192.168.11.3:6384 slave 8b20dd24f0aa2ba05754c4db016e0a29299df24e 0 1487342434759 5 connected
8de9d0da9dfd0db0553c68386cbccdcb58365123 192.168.11.3:6383 slave ffb86e1168215b138c5b7a81ad92e44ca7095b54 0 1487342440816 4 connected
e8e6d0e32e0f2ee918795e3a232b9c768b671f39 192.168.11.3:6385 slave 88e0cdfb2794816cb9a1ca39b7ad640656d5ef85 0 1487342443843 6 connected
8b20dd24f0aa2ba05754c4db016e0a29299df24e 192.168.11.3:6381 master - 0 1487342444851 2 connected 5461-10922
The two new nodes have joined the cluster, but they hold no slots, so for now no keys will be routed to them and they serve no data.
5 Migrating slots and testing reads/writes
Next we redistribute the cluster's slots so that all masters carry a balanced share, while simultaneously simulating a business workload reading and writing against the cluster, to observe whether reads or writes fail during the migration and to count any keys that do.
5.1 Simulating the business workload
Logic: while the slots are being migrated, keep operating on the cluster: change each key's value, then immediately read the key back. If the value read matches the value we set, the data is consistent and no error is recorded; if it differs (the set was not applied, or the get returned a wrong result), count it as an error.
Don't start this program yet; run it once the slot migration below is underway.
#!/usr/bin/env python
# -*- coding: utf-8 -*-
from rediscluster import StrictRedisCluster

# Requires at least one node for cluster discovery. Multiple nodes is recommended.
startup_nodes = [{"host": "192.168.11.3", "port": "6380"},
                 {"host": "192.168.11.3", "port": "6381"},
                 {"host": "192.168.11.3", "port": "6382"},
                 {"host": "192.168.11.3", "port": "6383"},
                 {"host": "192.168.11.3", "port": "6384"},
                 {"host": "192.168.11.3", "port": "6385"}]

rc = StrictRedisCluster(startup_nodes=startup_nodes, decode_responses=True)

count = 0
for i in xrange(100000):
    key = "key_%s" % i
    value = "_value_%s" % i   # deliberately differs from the seeded values
    rc.set(key, value)
    result = rc.get(key)      # read the key straight back
    if result == value:
        pass
    else:
        count += 1            # set lost or wrong value returned
print count
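One caveat with this checker: if a request fails outright during the migration (a timeout or connection error rather than a wrong value), the uncaught exception would kill the script instead of being counted. A slightly more defensive loop body, as a sketch:

    try:
        rc.set(key, value)
        result = rc.get(key)
        if result != value:
            count += 1        # wrong value read back
    except Exception:         # deliberately broad for this test
        count += 1            # the request itself failed mid-migration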
5.2 Migrating slots
Now assign slots to the new node, i.e., migrate slots currently placed on the other nodes over to it. A commenter's analogy describes this vividly: it's like drawing cards in poker. Moving a slot from A to B is like drawing a card from A; migrating N slots from all the other nodes is like reshuffling and redealing the whole deck.
[root@salt-master bin]# redis-trib.rb reshard 192.168.11.3:6380   # redistribute slots
>>> Performing Cluster Check (using node 192.168.11.3:6380)
M: ffb86e1168215b138c5b7a81ad92e44ca7095b54 192.168.11.3:6380
   slots:2683-5460 (2778 slots) master
   1 additional replica(s)
S: 8617133eb1d2ef07f87dd6b108a4a0ec53ccdf99 192.168.11.3:6391
   slots: (0 slots) slave
   replicates cdbcbd49b78684188fe321eec90e625ed394e0b7
(partial output omitted......)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
How many slots do you want to move (from 1 to 16384)? 6384   # number of slots to migrate
What is the receiving node ID? cdbcbd49b78684188fe321eec90e625ed394e0b7
Please enter all the source node IDs.
  Type 'all' to use all the nodes as source nodes for the hash slots.
  Type 'done' once you entered all the source nodes IDs.
Source node #1:all   # slot source nodes; 'all' means every other node
Moving slot 6260 from 301b60cdb455b9ae27b7b562524c0d039e640815
Moving slot 6261 from 301b60cdb455b9ae27b7b562524c0d039e640815
Moving slot 6262 from 301b60cdb455b9ae27b7b562524c0d039e640815
Moving slot 6263 from 301b60cdb455b9ae27b7b562524c0d039e640815
Do you want to proceed with the proposed reshard plan (yes/no)? yes   # final confirmation
Moving slot 0 from 192.168.11.3:6388 to 192.168.11.3:6390: .....
.......
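Under the hood, redis-trib moves each slot with the importing/migrating handshake from the cluster spec. Roughly, as a Python sketch (this is not redis-trib's actual code, and the addresses are illustrative):

# Rough sketch of the per-slot migration protocol from the cluster spec.
import redis

src = redis.StrictRedis(host="192.168.11.3", port=6380)   # current slot owner
dst = redis.StrictRedis(host="192.168.11.3", port=6386)   # new owner
src_id = src.execute_command("CLUSTER", "MYID")
dst_id = dst.execute_command("CLUSTER", "MYID")

def move_slot(slot):
    dst.execute_command("CLUSTER", "SETSLOT", slot, "IMPORTING", src_id)
    src.execute_command("CLUSTER", "SETSLOT", slot, "MIGRATING", dst_id)
    while True:
        keys = src.execute_command("CLUSTER", "GETKEYSINSLOT", slot, 10)
        if not keys:
            break
        for key in keys:
            # MIGRATE atomically copies the key to the target and deletes it
            # locally; clients hitting the source meanwhile get ASK redirects.
            src.execute_command("MIGRATE", "192.168.11.3", 6386, key, 0, 60000)
    for node in (src, dst):   # finally, announce the slot's new owner
        node.execute_command("CLUSTER", "SETSLOT", slot, "NODE", dst_id)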
The number of slots to migrate can differ when node specs differ; if all nodes have the same spec, the slots can simply be split evenly: n = 16384 / number of masters (with 4 masters, for example, 16384 / 4 = 4096 slots each).
Also note that when resharding a cluster carrying live business traffic, the more data there is, the longer the migration takes.
6 Conclusions from this test
In this test, redis-cluster expansion (slot migration) did not affect data reads or writes. This is what the design promises: a cluster-aware client transparently follows the MOVED and ASK redirects that nodes return while a slot is in flight, so the migration stays invisible to the application.
This article comes from the blog "linux飞龙在天"; please contact the author before republishing.