Redis Cluster Introduction
Redis Cluster was officially introduced in Redis 3.0. It is a distributed, fault-tolerant in-memory K/V service; a cluster provides only a subset of the functionality of ordinary standalone Redis. In particular, Redis Cluster does not support commands that operate on multiple keys in different slots, because those would require moving data between nodes, which would fall short of standalone Redis performance and, under heavy load, could lead to unpredictable errors.
Key characteristics of Redis Cluster:
(1). Sharding: the key space is split into 16384 slots, and each node is responsible for a portion of them.
(2). A degree of availability: the cluster can keep processing commands when a node goes down or becomes unreachable.
(3). There is no central node or proxy node; one of the main design goals of the cluster is linear scalability.
A cluster client can send commands to any Redis instance in the cluster. When an instance receives a request for a slot it is not responsible for, it returns the address of the instance that owns the slot of the requested key, and the client automatically resends the original request to that address; all of this is transparent to the caller. Which slot a key belongs to is decided by crc16(key) % 16384.
1. Redis Data Sharding
The Redis Cluster key space is divided into 16384 (2^14) slots, which is also the theoretical maximum number of nodes (the recommended maximum is 1000); each master node can therefore be responsible for anywhere between 1 and 16384 slots.
When every one of the 16384 slots has a master responsible for it, the cluster enters the "stable" online state and can start processing data commands. When the cluster is not in the stable state, a reconfiguration operation can be run so that every hash slot ends up handled by exactly one node.
Reconfiguration means moving one or more slots from one node to another. A master can have any number of slaves, and these slaves step in to replace the master when it is partitioned from the network or fails.
The cluster computes which slot a key belongs to with the formula HASH_SLOT = CRC16(key) mod 16384; the CRC16 output is 16 bits long.
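You don't have to compute this by hand: any cluster-enabled node will do it for you via the CLUSTER KEYSLOT command. A quick check against the cluster built later in this article (the slot numbers match the redirections seen in Part 3):
$ redis-cli -p 6300 CLUSTER KEYSLOT name
(integer) 5798
$ redis-cli -p 6300 CLUSTER KEYSLOT age
(integer) 741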
2. Redis Cluster Nodes
Nodes in a Redis Cluster not only store key/value mappings, they also record cluster state, including the mapping of keys to the correct nodes. A node can also discover other nodes automatically, identify nodes that are not working properly, and, when necessary, elect a new master from among a failed master's slaves.
To carry out these tasks, every node in the cluster maintains a "cluster bus" connection to every other node: a TCP connection speaking a binary protocol.
Nodes use a Gossip protocol over this bus to:
a). propagate information about the cluster, and thereby discover new nodes
b). send PING packets to other nodes, to check that the target node is operating normally
c). send cluster messages when specific events occur
Beyond that, the cluster bus is also used to propagate Pub/Sub messages across the cluster.
Cluster nodes do not proxy command requests, so when a node returns a -MOVED or -ASK redirection error, the client must forward the request to the appropriate node itself.
A client is free to send command requests to any node in the cluster and, when necessary, use the information carried by the redirection error to forward the command to the correct node, so in theory a client need not hold any cluster state. A client that caches the key-to-node mapping, however, can noticeably cut down on redirections and thereby execute commands more efficiently.
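What such a redirection looks like on the wire is easy to see with a plain (non-cluster-aware) redis-cli session; a sketch against the cluster built later in this article, where the slot of key name lives on 172.31.24.140:
$ redis-cli -h 172.31.24.139 -p 6300 get name
(error) MOVED 5798 172.31.24.140:6300
Without the -c flag redis-cli does not follow the redirection; it simply surfaces the error, exactly as a minimal client library would see it.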
Each node is identified by a unique ID within the cluster: a 160-bit random number in hexadecimal, generated from /dev/urandom when the node first starts. The node saves its ID to its configuration file and keeps using it for as long as that file is not deleted. A node can change its IP address and port without changing its ID; the cluster detects the IP/port change automatically and broadcasts it to the other nodes via Gossip.
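The ID, the flags and the rest of the per-node state described next are all visible in the output of CLUSTER NODES; on the cluster built below, a node's own line looks like this (the myself flag marks the node you are connected to):
$ redis-cli -h 172.31.24.139 -p 6300 CLUSTER NODES | grep myself
42050de2234507bf2e8d930f8d6b0813f432f321 172.31.24.139:6300 myself,master - 0 0 1 connected 0-5460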
The following information is associated with every node, and nodes send it to each other:
a). the IP address and TCP port the node uses
b). the node's flags
c). the hash slots the node is responsible for
d). the time the node last sent a PING packet over the cluster bus
e). the time the node last received a PONG packet in reply
f). the time at which the cluster marked the node as down
g). the number of slaves the node has
If the node is a slave, it also records the node ID of its master.
3. Failure Detection Between Cluster Nodes
Redis Cluster uses a quorum-plus-heartbeat mechanism. From a single node's point of view, it periodically sends PING to every other node; if no reply arrives within cluster-node-timeout (configurable, in milliseconds), it unilaterally considers the peer down and marks it PFAIL. When the information nodes exchange shows that a quorum of nodes consider a node PFAIL, that node is marked FAIL and the verdict is sent to all other nodes, which immediately treat it as down. It follows that after a master dies, reads and writes for the slots it served are unavailable for at least cluster-node-timeout.
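You can watch this mechanism work on the cluster built below by stopping one master and polling CLUSTER NODES from another node; a sketch (the timestamps are illustrative, and after failover a slave of the dead master is promoted):
$ redis-cli -h 172.31.24.140 -p 6300 SHUTDOWN NOSAVE
$ redis-cli -h 172.31.24.139 -p 6300 CLUSTER NODES | grep fail
b233ca19c537ae80bdbde10e62ca231d74b00e8e 172.31.24.140:6300 master,fail - 1481875710000 1481875705000 2 disconnected
The flag goes through pfail first, then flips to fail once a quorum agrees.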
============================================================================
Having covered the basic characteristics of Redis Cluster, let's now build one.
Part 1: Install and configure Redis
Prepare three servers, redis01-jp / redis02-jp / redis03-jp, and on each of them run two redis-server processes, listening on TCP ports 6300 and 6301.
Initialize each server as follows:
yum -y install gcc zlib-devel jemalloc-devel ruby rubygems tcl
wget http://download.redis.io/releases/redis-stable.tar.gz
tar xfz redis-stable.tar.gz
cd redis-stable
make
cp src/redis-trib.rb /bin
cp src/redis-server /bin
cp src/redis-cli /bin
gem install redis
echo never > /sys/kernel/mm/transparent_hugepage/enabled
echo "vm.overcommit_memory = 1" >> /etc/sysctl.conf
sysctl -p
echo 1024 > /proc/sys/net/core/somaxconn
mkdir /data/redis/{6300,6301} -p
[root@redis01-jp redis-stable]# grep -Pv "^[#]|^ *$" redis.conf >> /data/redis/redis_6300.conf
[root@redis01-jp redis-stable]# cd
[root@redis01-jp ~]# tail -5 /etc/rc.local
touch /var/lock/subsys/local
echo never > /sys/kernel/mm/transparent_hugepage/enabled
echo 512 > /proc/sys/net/core/somaxconn
redis-server /data/redis/redis_6300.conf
redis-server /data/redis/redis_6301.conf
[root@redis01-jp ~]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost6 localhost6.localdomain6
172.31.24.139 redis01-jp
172.31.24.140 redis02-jp
172.31.24.141 redis03-jp
[root@redis01-jp ~]# vim /data/redis/redis_6300.conf
[root@redis01-jp ~]# cat /data/redis/redis_6300.conf
bind 0.0.0.0
protected-mode no #disable protected mode, allowing access from hosts other than localhost
port 6300
tcp-backlog 511
timeout 0
tcp-keepalive 300
daemonize yes
supervised no
pidfile /var/run/redis_6300.pid
loglevel notice
logfile "/data/redis/6300/redis_6300.log"
databases 16
save 900 1
save 300 10
save 60 10000
stop-writes-on-bgsave-error yes
rdbcompression yes
rdbchecksum yes
dbfilename dump.rdb
dir /data/redis/6300/
slave-serve-stale-data yes
slave-read-only yes #slaves are read-only
repl-diskless-sync no
repl-diskless-sync-delay 5
repl-disable-tcp-nodelay no
slave-priority 100
appendonly yes #enable AOF persistence (mandatory here)
appendfilename "appendonly.aof"
appendfsync everysec #fsync the AOF once per second
no-appendfsync-on-rewrite yes #don't fsync new writes while an AOF rewrite is in progress
auto-aof-rewrite-percentage 100 #takes a single integer; for redis instances on the same machine, stagger this value (e.g. somewhere in 80-100) so they don't all fork for a rewrite at the same moment and consume a large amount of memory
auto-aof-rewrite-min-size 64mb
aof-load-truncated yes
lua-time-limit 5000
slowlog-log-slower-than 10000 #queries taking longer than this (in microseconds) are logged as slow queries
slowlog-max-len 128 #keep at most 128 slow-query entries
latency-monitor-threshold 100 #operations with internal latency above 100 ms are recorded by the latency monitor
notify-keyspace-events ""
hash-max-ziplist-entries 512
hash-max-ziplist-value 64
list-max-ziplist-size -2
list-compress-depth 0
set-max-intset-entries 512
zset-max-ziplist-entries 128
zset-max-ziplist-value 64
hll-sparse-max-bytes 3000
activerehashing yes
client-output-buffer-limit normal 0 0 0
client-output-buffer-limit slave 256mb 64mb 60
client-output-buffer-limit pubsub 32mb 8mb 60
hz 10
aof-rewrite-incremental-fsync yes
cluster-enabled yes #enable redis cluster mode
cluster-config-file /data/redis/6300/nodes-6300.conf #path of the node configuration file; once the cluster is up, all cluster state is written to it automatically
cluster-node-timeout 15000 #node interconnection timeout threshold (in milliseconds)
cluster-migration-barrier 1 #the number of healthy slaves a master must keep before it may give one away to a master that has none (or whose slaves have failed)
cluster-require-full-coverage no #with 'no' the cluster keeps serving even when part of the key space is not covered by any node (most commonly because a node went down); with 'yes' it would stop accepting writes in that situation
[root@redis01-jp ~]# scp /data/redis/redis_6300.conf redis02-jp:/data/redis/
redis_6300.conf 100% 1377 1.3KB/s 00:00
[root@redis01-jp ~]# scp /data/redis/redis_6300.conf redis03-jp:/data/redis/
redis_6300.conf 100% 1377 1.3KB/s 00:00
[root@redis01-jp ~]# cp /data/redis/redis_630{0,1}.conf
[root@redis01-jp ~]# vim /data/redis/redis_6301.conf (edit the 6301 config file; only the port-derived lines differ, content omitted)
[root@redis01-jp ~]# scp /data/redis/redis_6301.conf redis03-jp:/data/redis/
redis_6301.conf 100% 1377 1.3KB/s 00:00
[root@redis01-jp ~]# scp /data/redis/redis_6301.conf redis02-jp:/data/redis/
redis_6301.conf
[root@redis01-jp ~]# cat /data/redis/redis_6301.conf
[root@redis01-jp ~]# diff /data/redis/redis_630{0,1}.conf
3c3
< port 6300
---
> port 6301
9c9
< pidfile /var/run/redis_6300.pid
---
> pidfile /var/run/redis_6301.pid
11c11
< logfile "/data/redis/6300/redis_6300.log"
---
> logfile "/data/redis/6301/redis_6301.log"
20c20
< dir /data/redis/6300/
---
> dir /data/redis/6301/
55c55
< cluster-config-file /data/redis/6300/nodes-6300.conf
---
> cluster-config-file /data/redis/6301/nodes-6301.conf
[root@redis01-jp ~]# redis-server /data/redis/redis_6300.conf
[root@redis01-jp ~]# redis-server /data/redis/redis_6301.conf
[root@redis01-jp ~]# lsof -i:6300
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
redis-ser 5771 root 4u IPv4 13089 0t0 TCP *:bmc-grx (LISTEN)
[root@redis01-jp ~]# lsof -i:6301
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
redis-ser 5775 root 4u IPv4 13095 0t0 TCP *:bmc_ctd_ldap (LISTEN)
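Before wiring the nodes together, it is worth confirming each instance really started in cluster mode; INFO reports redis_mode:cluster when cluster-enabled took effect:
[root@redis01-jp ~]# redis-cli -p 6300 info server | grep redis_mode
redis_mode:cluster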
Now start the 6300 and 6301 redis-server processes on redis02-jp and redis03-jp in the same way.
Part 2: Create the cluster
The create command takes an optional --replicas argument that says how many slaves each master should have. The simplest invocation is:
$ ruby redis-trib.rb create 10.1.1.1:6379 10.1.1.2:6379 10.1.1.3:6379
To give each master one slave:
$ ruby redis-trib.rb create --replicas 1 10.1.1.1:6379 10.1.1.2:6379 10.1.1.3:6379 10.1.1.4:6379 10.1.1.5:6379 10.1.1.6:6379
Note: this command handles domain names and hostnames poorly, so pass IP:port pairs to it; otherwise you will get an error like: ...`call': ERR Invalid node address specified: pc1:6300 (Redis::CommandError)
The creation flow is as follows:
1. First a ClusterNode object is created for every node, which includes connecting to it. Each node is checked to be a standalone node with an empty db, then the load_info method is run to read in its node information.
2. Check that at least 3 master nodes were passed in; fewer than 3 cannot form a cluster.
3. Compute how many slots each master should get, and assign slaves to the masters. The algorithm is roughly:
   Group the nodes by host first, so that masters are spread across as many hosts as possible.
   Repeatedly walk the host lists, popping one node from each into an interleaved array, until every node has been popped.
   The master list is the first (number of masters) entries of interleaved, saved in the masters array.
   The number of slots per master, saved in slots_per_node, is simply the slot total divided by the number of masters, rounded down.
   Walk the masters array giving each master slots_per_node slots; the last master takes whatever remains, so that all 16384 slots end up assigned.
   Next, slaves are assigned to the masters; the algorithm tries to keep a master and its slaves on different hosts. Nodes left over after every master has its requested number of slaves are also matched up with masters. The algorithm walks the masters array twice.
   On the first pass, each master is given replicas slaves from the remaining node list. Each slave is the first node whose host differs from the master's; if no such node exists, the first node of the remaining list is taken.
   The second pass handles the leftovers that exist when the node count is not an exact multiple of replicas. It walks the same way as the first pass, except that instead of assigning replicas slaves per master at once, it assigns one at a time until the remaining nodes are all given out.
4. Print the planned allocation and ask the user to type "yes" to confirm creating the cluster that way.
5. Once "yes" is entered, flush_nodes_config runs: it applies the plan, assigning slots to the masters and making the slaves replicate their masters. For nodes that have not yet completed the handshake (cluster meet), the replication step cannot succeed; that is fine, since flush_nodes_config returns quickly on error and is executed again after the handshake.
6. Assign a config epoch to every node; each node's epoch is 1 greater than the previous node's.
7. The nodes start shaking hands: every other node in the list meets the first node.
8. Then, once per second, check whether the nodes have converged, using ClusterNode's get_config_signature method: fetch every node's cluster nodes output, sort it, and assemble a string of the form node_id1:slots|node_id2:slot2|... ; when every node yields the same string, the handshake is considered successful.
9. flush_nodes_config is then executed once more, this time chiefly to complete the slave replication step.
10. Finally check_cluster performs a full health check of the cluster: the same signature check used during the handshake, a check that no slots are migrating, and a check that every slot has been assigned.
11. The flow is complete and returns [OK] All 16384 slots covered
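What redis-trib automates here can also be done by hand with the raw CLUSTER commands; a minimal sketch of the equivalent steps for the three-master layout used in this article (slot ranges as redis-trib would assign them):
# step 5: assign each master its slot range, locally, before the handshake
redis-cli -h 172.31.24.139 -p 6300 CLUSTER ADDSLOTS $(seq 0 5460)
redis-cli -h 172.31.24.140 -p 6300 CLUSTER ADDSLOTS $(seq 5461 10922)
redis-cli -h 172.31.24.141 -p 6300 CLUSTER ADDSLOTS $(seq 10923 16383)
# step 6: give every node a distinct config epoch
redis-cli -h 172.31.24.139 -p 6300 CLUSTER SET-CONFIG-EPOCH 1
redis-cli -h 172.31.24.140 -p 6300 CLUSTER SET-CONFIG-EPOCH 2
redis-cli -h 172.31.24.141 -p 6300 CLUSTER SET-CONFIG-EPOCH 3
# step 7: the other nodes meet the first one
redis-cli -h 172.31.24.140 -p 6300 CLUSTER MEET 172.31.24.139 6300
redis-cli -h 172.31.24.141 -p 6300 CLUSTER MEET 172.31.24.139 6300
# a slave would be attached with CLUSTER REPLICATE <master-id> once it has joined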
Example run:
1. Create the cluster, adding three masters
[root@redis01-jp ~]# redis-trib.rb create 172.31.24.139:6300 172.31.24.140:6300 172.31.24.141:6300
>>> Creating cluster
>>> Performing hash slots allocation on 3 nodes...
Using 3 masters:
172.31.24.139:6300
172.31.24.140:6300
172.31.24.141:6300
M: 42050de2234507bf2e8d930f8d6b0813f432f321 172.31.24.139:6300
slots:0-5460 (5461 slots) master
M: b233ca19c537ae80bdbde10e62ca231d74b00e8e 172.31.24.140:6300
slots:5461-10922 (5462 slots) master
M: 9681bbeb1eccdccc3ee132e33295f7b8b3bd230c 172.31.24.141:6300
slots:10923-16383 (5461 slots) master
Can I set the above configuration? (type 'yes' to accept): yes
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join.
>>> Performing Cluster Check (using node 172.31.24.139:6300)
M: 42050de2234507bf2e8d930f8d6b0813f432f321 172.31.24.139:6300
slots:0-5460 (5461 slots) master
0 additional replica(s)
M: b233ca19c537ae80bdbde10e62ca231d74b00e8e 172.31.24.140:6300
slots:5461-10922 (5462 slots) master
0 additional replica(s)
M: 9681bbeb1eccdccc3ee132e33295f7b8b3bd230c 172.31.24.141:6300
slots:10923-16383 (5461 slots) master
0 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
2. Check the cluster's state
[root@redis01-jp ~]# redis-trib.rb check 172.31.24.139:6300
>>> Performing Cluster Check (using node 172.31.24.139:6300)
M: 42050de2234507bf2e8d930f8d6b0813f432f321 172.31.24.139:6300
slots:0-5460 (5461 slots) master
0 additional replica(s)
M: b233ca19c537ae80bdbde10e62ca231d74b00e8e 172.31.24.140:6300
slots:5461-10922 (5462 slots) master
0 additional replica(s)
M: 9681bbeb1eccdccc3ee132e33295f7b8b3bd230c 172.31.24.141:6300
slots:10923-16383 (5461 slots) master
0 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
3. Add one slave to each master
[root@redis01-jp ~]# redis-trib.rb add-node --slave --master-id b233ca19c537ae80bdbde10e62ca231d74b00e8e 172.31.24.139:6301 172.31.24.140:6300
>>> Adding node 172.31.24.139:6301 to cluster 172.31.24.140:6300
>>> Performing Cluster Check (using node 172.31.24.140:6300)
M: b233ca19c537ae80bdbde10e62ca231d74b00e8e 172.31.24.140:6300
slots:5461-10922 (5462 slots) master
0 additional replica(s)
M: 9681bbeb1eccdccc3ee132e33295f7b8b3bd230c 172.31.24.141:6300
slots:10923-16383 (5461 slots) master
0 additional replica(s)
M: 42050de2234507bf2e8d930f8d6b0813f432f321 172.31.24.139:6300
slots:0-5460 (5461 slots) master
0 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
>>> Send CLUSTER MEET to node 172.31.24.139:6301 to make it join the cluster.
Waiting for the cluster to join.
>>> Configure node as replica of 172.31.24.140:6300.
[OK] New node added correctly.
[root@redis01-jp ~]# redis-trib.rb add-node --slave --master-id 9681bbeb1eccdccc3ee132e33295f7b8b3bd230c 172.31.24.140:6301 172.31.24.141:6300
>>> Adding node 172.31.24.140:6301 to cluster 172.31.24.141:6300
>>> Performing Cluster Check (using node 172.31.24.141:6300)
M: 9681bbeb1eccdccc3ee132e33295f7b8b3bd230c 172.31.24.141:6300
slots:10923-16383 (5461 slots) master
0 additional replica(s)
M: b233ca19c537ae80bdbde10e62ca231d74b00e8e 172.31.24.140:6300
slots:5461-10922 (5462 slots) master
1 additional replica(s)
M: 42050de2234507bf2e8d930f8d6b0813f432f321 172.31.24.139:6300
slots:0-5460 (5461 slots) master
0 additional replica(s)
S: 17a155fadb540d0d4fe76365e16f767d07b6adc2 172.31.24.139:6301
slots: (0 slots) slave
replicates b233ca19c537ae80bdbde10e62ca231d74b00e8e
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
>>> Send CLUSTER MEET to node 172.31.24.140:6301 to make it join the cluster.
Waiting for the cluster to join.
>>> Configure node as replica of 172.31.24.141:6300.
[OK] New node added correctly.
[root@redis01-jp ~]# redis-trib.rb add-node --slave --master-id 42050de2234507bf2e8d930f8d6b0813f432f321 172.31.24.141:6301 172.31.24.139:6300
>>> Adding node 172.31.24.141:6301 to cluster 172.31.24.139:6300
>>> Performing Cluster Check (using node 172.31.24.139:6300)
M: 42050de2234507bf2e8d930f8d6b0813f432f321 172.31.24.139:6300
slots:0-5460 (5461 slots) master
0 additional replica(s)
M: b233ca19c537ae80bdbde10e62ca231d74b00e8e 172.31.24.140:6300
slots:5461-10922 (5462 slots) master
1 additional replica(s)
S: c89dee58171151173e54f5a6442c885a927debca 172.31.24.140:6301
slots: (0 slots) slave
replicates 9681bbeb1eccdccc3ee132e33295f7b8b3bd230c
S: 17a155fadb540d0d4fe76365e16f767d07b6adc2 172.31.24.139:6301
slots: (0 slots) slave
replicates b233ca19c537ae80bdbde10e62ca231d74b00e8e
M: 9681bbeb1eccdccc3ee132e33295f7b8b3bd230c 172.31.24.141:6300
slots:10923-16383 (5461 slots) master
1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
>>> Send CLUSTER MEET to node 172.31.24.141:6301 to make it join the cluster.
Waiting for the cluster to join.
>>> Configure node as replica of 172.31.24.139:6300.
[OK] New node added correctly.
[root@redis01-jp ~]# redis-trib.rb check 172.31.24.139:6300
>>> Performing Cluster Check (using node 172.31.24.139:6300)
M: 42050de2234507bf2e8d930f8d6b0813f432f321 172.31.24.139:6300
slots:0-5460 (5461 slots) master
1 additional replica(s)
M: b233ca19c537ae80bdbde10e62ca231d74b00e8e 172.31.24.140:6300
slots:5461-10922 (5462 slots) master
1 additional replica(s)
S: c89dee58171151173e54f5a6442c885a927debca 172.31.24.140:6301
slots: (0 slots) slave
replicates 9681bbeb1eccdccc3ee132e33295f7b8b3bd230c
S: 17a155fadb540d0d4fe76365e16f767d07b6adc2 172.31.24.139:6301
slots: (0 slots) slave
replicates b233ca19c537ae80bdbde10e62ca231d74b00e8e
S: 8750f7512a1a757b1b9e2a931137b135a7ebdc8b 172.31.24.141:6301
slots: (0 slots) slave
replicates 42050de2234507bf2e8d930f8d6b0813f432f321
M: 9681bbeb1eccdccc3ee132e33295f7b8b3bd230c 172.31.24.141:6300
slots:10923-16383 (5461 slots) master
1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
[root@redis01-jp ~]# redis-trib.rb info 172.31.24.139:6300
172.31.24.139:6300 (42050de2...) -> 0 keys | 5461 slots | 1 slaves.
172.31.24.140:6300 (b233ca19...) -> 0 keys | 5462 slots | 1 slaves.
172.31.24.141:6300 (9681bbeb...) -> 0 keys | 5461 slots | 1 slaves.
[OK] 0 keys in 3 masters.
0.00 keys per slot on average.
Nodes in a Redis Cluster come in two kinds, masters and slaves. A master can have several slaves; data on the master is synchronized to its slaves asynchronously, and when a master leaves the cluster for whatever reason, Redis Cluster automatically picks one of that master's slaves as the new master. For this purpose the create subcommand of redis-trib.rb provides the --replicas argument to specify how many slaves each master in the cluster gets: building a cluster from six redis instances with --replicas 1 yields three masters and three slaves, one slave per master.
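Failover can also be exercised by hand: CLUSTER FAILOVER, issued on a slave, promotes it in coordination with its master instead of waiting for a failure. A sketch against the cluster above:
[root@redis01-jp ~]# redis-cli -h 172.31.24.139 -p 6301 CLUSTER FAILOVER
OK
Once it completes, CLUSTER NODES shows 172.31.24.139:6301 as the master for slots 5461-10922 and the former master rejoins as its slave.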
Part 3: Connect with a client
[root@redis01-jp ~]# redis-cli -h redis02-jp -p 6300
redis02-jp:6300> CLUSTER INFO
cluster_state:ok
cluster_slots_assigned:16384
cluster_slots_ok:16384
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:6
cluster_size:3
cluster_current_epoch:3
cluster_my_epoch:2
cluster_stats_messages_sent:625
cluster_stats_messages_received:625
redis02-jp:6300> CLUSTER NODES
b233ca19c537ae80bdbde10e62ca231d74b00e8e 172.31.24.140:6300 myself,master - 0 0 2 connected 5461-10922
9681bbeb1eccdccc3ee132e33295f7b8b3bd230c 172.31.24.141:6300 master - 0 1481875703148 3 connected 10923-16383
8750f7512a1a757b1b9e2a931137b135a7ebdc8b 172.31.24.141:6301 slave 42050de2234507bf2e8d930f8d6b0813f432f321 0 1481875704150 1 connected
c89dee58171151173e54f5a6442c885a927debca 172.31.24.140:6301 slave 9681bbeb1eccdccc3ee132e33295f7b8b3bd230c 0 1481875700643 3 connected
42050de2234507bf2e8d930f8d6b0813f432f321 172.31.24.139:6300 master - 0 1481875701144 1 connected 0-5460
17a155fadb540d0d4fe76365e16f767d07b6adc2 172.31.24.139:6301 slave b233ca19c537ae80bdbde10e62ca231d74b00e8e 0 1481875702146 2 connected
redis02-jp:6300> set name "Thompson"
OK
redis02-jp:6300> exit
[root@redis01-jp ~]#
[root@redis01-jp ~]# redis-cli -h redis01-jp -p 6300
redis01-jp:6300> set age 42
OK
redis01-jp:6300> get age
"42"
[root@redis01-jp ~]# redis-cli -c -h redis01-jp -p 6300
redis01-jp:6300> get name
-> Redirected to slot [5798] located at 172.31.24.140:6300
"Thompson"
172.31.24.140:6300> get age
-> Redirected to slot [741] located at 172.31.24.139:6300
"42"
172.31.24.139:6300>
Adding the -c flag at startup puts redis-cli into cluster mode: as the reads and writes above show, it prints the hash slot each key belongs to and the node storing it, and follows the redirections automatically.
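Recall from the introduction that multi-key commands only work when every key hashes to the same slot. Hash tags make that possible: when a key name contains a {...} section, only that section is hashed. A sketch (key names are examples; redirection lines omitted):
redis01-jp:6300> mset {user:1000}.name Thompson {user:1000}.age 42
OK
redis01-jp:6300> mget {user:1000}.name {user:1000}.age
1) "Thompson"
2) "42"
redis01-jp:6300> mset a 1 b 2
(error) CROSSSLOT Keys in request don't hash to the same slot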
Part 4: Add a new node
[root@redis01-jp ~]# mkdir /data/redis/6302
[root@redis01-jp ~]# cp /data/redis/redis_630{0,2}.conf
[root@redis01-jp ~]# vim /data/redis/redis_6302.conf
[root@redis01-jp ~]# redis-server /data/redis/redis_6302.conf
[root@redis01-jp ~]# lsof -i:6302
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
redis-ser 5834 root 4u IPv4 13565 0t0 TCP *:6302 (LISTEN)
[root@redis01-jp ~]# scp /data/redis/redis_6302.conf redis02-jp:/data/redis/redis_6302.conf
redis_6302.conf 100% 1397 1.4KB/s 00:00
[root@redis01-jp ~]# ssh redis02-jp 'mkdir /data/redis/6302'
[root@redis01-jp ~]# ssh redis02-jp 'redis-server /data/redis/redis_6302.conf'
[root@redis01-jp ~]# redis-trib.rb add-node 172.31.24.139:6302 172.31.24.139:6300
>>> Adding node 172.31.24.139:6302 to cluster 172.31.24.139:6300
>>> Performing Cluster Check (using node 172.31.24.139:6300)
M: 42050de2234507bf2e8d930f8d6b0813f432f321 172.31.24.139:6300
slots:0-5460 (5461 slots) master
1 additional replica(s)
M: b233ca19c537ae80bdbde10e62ca231d74b00e8e 172.31.24.140:6300
slots:5461-10922 (5462 slots) master
1 additional replica(s)
S: c89dee58171151173e54f5a6442c885a927debca 172.31.24.140:6301
slots: (0 slots) slave
replicates 9681bbeb1eccdccc3ee132e33295f7b8b3bd230c
S: 17a155fadb540d0d4fe76365e16f767d07b6adc2 172.31.24.139:6301
slots: (0 slots) slave
replicates b233ca19c537ae80bdbde10e62ca231d74b00e8e
S: 8750f7512a1a757b1b9e2a931137b135a7ebdc8b 172.31.24.141:6301
slots: (0 slots) slave
replicates 42050de2234507bf2e8d930f8d6b0813f432f321
M: 9681bbeb1eccdccc3ee132e33295f7b8b3bd230c 172.31.24.141:6300
slots:10923-16383 (5461 slots) master
1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
>>> Send CLUSTER MEET to node 172.31.24.139:6302 to make it join the cluster.
[OK] New node added correctly.
[root@redis01-jp ~]# redis-trib.rb add-node --slave --master-id 0d49e2f3440ecb0bcc079e3a632b148df049b32b 172.31.24.140:6302 172.31.24.139:6302
>>> Adding node 172.31.24.140:6302 to cluster 172.31.24.139:6302
>>> Performing Cluster Check (using node 172.31.24.139:6302)
M: 0d49e2f3440ecb0bcc079e3a632b148df049b32b 172.31.24.139:6302
slots: (0 slots) master
0 additional replica(s)
S: 17a155fadb540d0d4fe76365e16f767d07b6adc2 172.31.24.139:6301
slots: (0 slots) slave
replicates b233ca19c537ae80bdbde10e62ca231d74b00e8e
M: 9681bbeb1eccdccc3ee132e33295f7b8b3bd230c 172.31.24.141:6300
slots:10923-16383 (5461 slots) master
1 additional replica(s)
S: 8750f7512a1a757b1b9e2a931137b135a7ebdc8b 172.31.24.141:6301
slots: (0 slots) slave
replicates 42050de2234507bf2e8d930f8d6b0813f432f321
M: b233ca19c537ae80bdbde10e62ca231d74b00e8e 172.31.24.140:6300
slots:5461-10922 (5462 slots) master
1 additional replica(s)
M: 42050de2234507bf2e8d930f8d6b0813f432f321 172.31.24.139:6300
slots:0-5460 (5461 slots) master
1 additional replica(s)
S: c89dee58171151173e54f5a6442c885a927debca 172.31.24.140:6301
slots: (0 slots) slave
replicates 9681bbeb1eccdccc3ee132e33295f7b8b3bd230c
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
>>> Send CLUSTER MEET to node 172.31.24.140:6302 to make it join the cluster.
Waiting for the cluster to join.
>>> Configure node as replica of 172.31.24.139:6302.
[OK] New node added correctly.
[root@redis01-jp ~]# redis-trib.rb info 172.31.24.139:6300
172.31.24.139:6300 (42050de2...) -> 1 keys | 5461 slots | 1 slaves.
172.31.24.140:6300 (b233ca19...) -> 1 keys | 5462 slots | 1 slaves.
172.31.24.139:6302 (0d49e2f3...) -> 0 keys | 0 slots | 1 slaves.
172.31.24.141:6300 (9681bbeb...) -> 0 keys | 5461 slots | 1 slaves.
[OK] 2 keys in 4 masters.
0.00 keys per slot on average.
[root@redis01-jp ~]# redis-trib.rb reshard 172.31.24.139:6300 (manually rebalance/migrate slots)
......
[OK] All 16384 slots covered.
How many slots do you want to move (from 1 to 16384)? 3000
What is the receiving node ID? 0d49e2f3440ecb0bcc079e3a632b148df049b32b
Please enter all the source node IDs.
Type 'all' to use all the nodes as source nodes for the hash slots.
Type 'done' once you entered all the source nodes IDs.
Source node #1: all
......
[root@redis01-jp ~]# redis-trib.rb info 172.31.24.139:6300
172.31.24.139:6300 (42050de2...) -> 0 keys | 4462 slots | 1 slaves.
172.31.24.140:6300 (b233ca19...) -> 0 keys | 4461 slots | 1 slaves.
172.31.24.139:6302 (0d49e2f3...) -> 2 keys | 2999 slots | 1 slaves.
172.31.24.141:6300 (9681bbeb...) -> 0 keys | 4462 slots | 1 slaves.
[OK] 2 keys in 4 masters.
0.00 keys per slot on average.
[root@redis01-jp ~]#
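Note that resharding moves keys together with their slots: the two keys written in Part 3 now live on the new master, yet remain reachable through any node. A quick check might look like:
[root@redis01-jp ~]# redis-cli -c -h redis01-jp -p 6300 get name
-> Redirected to slot [5798] located at 172.31.24.139:6302
"Thompson"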
Part 5: Take nodes offline
1) First migrate the slots held by the master being decommissioned to other nodes:
[root@redis01-jp ~]# redis-trib.rb reshard 172.31.24.139:6302
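The prompts are the same as above, answered so that all 2999 of this node's slots flow back to one of the remaining masters, for example:
How many slots do you want to move (from 1 to 16384)? 2999
What is the receiving node ID? 42050de2234507bf2e8d930f8d6b0813f432f321
Please enter all the source node IDs.
Type 'all' to use all the nodes as source nodes for the hash slots.
Type 'done' once you entered all the source nodes IDs.
Source node #1: 0d49e2f3440ecb0bcc079e3a632b148df049b32b
Source node #2: done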
2) Then verify that the master being decommissioned now holds 0 slots:
[root@redis01-jp ~]# redis-trib.rb info 172.31.24.139:6300
172.31.24.139:6300 (42050de2...) -> 2 keys | 7461 slots | 2 slaves.
172.31.24.140:6300 (b233ca19...) -> 0 keys | 4461 slots | 1 slaves.
172.31.24.139:6302 (0d49e2f3...) -> 0 keys | 0 slots | 0 slaves.
172.31.24.141:6300 (9681bbeb...) -> 0 keys | 4462 slots | 1 slaves.
[OK] 2 keys in 4 masters.
0.00 keys per slot on average.
3) Next delete the slave of the master being decommissioned:
[root@redis01-jp ~]# redis-trib.rb del-node 172.31.24.139:6300 d812a49c622618b6687909df97748b6097f754b4
>>> Removing node d812a49c622618b6687909df97748b6097f754b4 from cluster 172.31.24.139:6300
>>> Sending CLUSTER FORGET messages to the cluster...
>>> SHUTDOWN the node.
4) Finally take the designated master itself offline:
[root@redis01-jp ~]# redis-trib.rb del-node 172.31.24.139:6300 0d49e2f3440ecb0bcc079e3a632b148df049b32b
>>> Removing node 0d49e2f3440ecb0bcc079e3a632b148df049b32b from cluster 172.31.24.139:6300
>>> Sending CLUSTER FORGET messages to the cluster...
>>> SHUTDOWN the node.
5) Check the cluster's nodes once more:
[root@redis01-jp ~]# redis-trib.rb check 172.31.24.139:6300
>>> Performing Cluster Check (using node 172.31.24.139:6300)
M: 42050de2234507bf2e8d930f8d6b0813f432f321 172.31.24.139:6300
slots:0-6461,10923-11921 (7461 slots) master
1 additional replica(s)
M: b233ca19c537ae80bdbde10e62ca231d74b00e8e 172.31.24.140:6300
slots:6462-10922 (4461 slots) master
1 additional replica(s)
S: c89dee58171151173e54f5a6442c885a927debca 172.31.24.140:6301
slots: (0 slots) slave
replicates 9681bbeb1eccdccc3ee132e33295f7b8b3bd230c
S: 17a155fadb540d0d4fe76365e16f767d07b6adc2 172.31.24.139:6301
slots: (0 slots) slave
replicates b233ca19c537ae80bdbde10e62ca231d74b00e8e
S: 8750f7512a1a757b1b9e2a931137b135a7ebdc8b 172.31.24.141:6301
slots: (0 slots) slave
replicates 42050de2234507bf2e8d930f8d6b0813f432f321
M: 9681bbeb1eccdccc3ee132e33295f7b8b3bd230c 172.31.24.141:6300
slots:11922-16383 (4462 slots) master
1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
[root@redis01-jp ~]#
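As a last sanity check, CLUSTER INFO should report a healthy cluster that is back to six known nodes:
[root@redis01-jp ~]# redis-cli -h redis01-jp -p 6300 CLUSTER INFO | grep -E 'cluster_state|cluster_known_nodes|cluster_size'
cluster_state:ok
cluster_known_nodes:6
cluster_size:3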