Lab environment

Host/IP | Role
--- | ---
server01/192.168.1.201 | master
server02/192.168.1.202 | master
server03/192.168.1.203 | master
server04/192.168.1.204 | replica
server05/192.168.1.205 | replica
server06/192.168.1.206 | replica
server07/192.168.1.207 | node added after the cluster is created
All nodes
Edit the configuration file
vim /etc/redis/redis.conf
# Enable cluster mode
cluster-enabled yes
# Cluster configuration file (maintained by Redis itself)
cluster-config-file nodes.conf
# Node communication timeout (milliseconds)
cluster-node-timeout 5000
# Enable AOF persistence
appendonly yes
systemctl restart redis
server01
Initialize the cluster
redis-cli --cluster create \
192.168.1.201:6379 \
192.168.1.202:6379 \
192.168.1.203:6379 \
192.168.1.204:6379 \
192.168.1.205:6379 \
192.168.1.206:6379 \
--cluster-replicas 1
--cluster-replicas 1 tells redis-cli to assign one replica to each master.
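During creation, redis-cli spreads the 16384 hash slots evenly across the three masters. A simplified sketch of that split (the exact placement of the remainder slots by redis-cli may differ; `split_slots` is a hypothetical helper, not part of redis-cli):

```python
def split_slots(n_masters, total=16384):
    """Evenly divide the hash-slot space into contiguous ranges, one per master.

    Simplified sketch: real redis-cli may place the remainder slots differently.
    """
    base, rem = divmod(total, n_masters)
    ranges, start = [], 0
    for i in range(n_masters):
        size = base + (1 if i < rem else 0)  # spread the remainder over the first nodes
        ranges.append((start, start + size - 1))
        start += size
    return ranges

print(split_slots(3))  # three contiguous ranges that together cover slots 0-16383
```

Each replica then serves the same slot range as its master, so the whole 0-16383 space stays covered.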
View the cluster nodes
redis-cli cluster nodes
Testing
redis-cli -c
The -c flag enables cluster mode in the client, which automatically follows redirections between nodes.
set hello world
set foo bar
get hello
get foo
When reading and writing, the redirect messages in the output show which node each key is stored on.
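The node shown in those redirects is determined by the key's hash slot: slot = CRC16(key) mod 16384 (hashing only the text inside a `{...}` hash tag when one is present), and a non-cluster client receives the raw `MOVED <slot> <host:port>` error instead of being redirected. A minimal Python sketch of both pieces (CRC-16/XMODEM is the variant named in the Redis Cluster specification; the sample MOVED string below is illustrative):

```python
def crc16(data):
    """CRC-16/XMODEM (poly 0x1021, init 0), as used by Redis Cluster key hashing."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) & 0xFFFF if crc & 0x8000 else (crc << 1) & 0xFFFF
    return crc

def keyslot(key):
    """Hash slot for a key; only the first non-empty {...} hash tag is hashed, if any."""
    k = key.encode()
    s = k.find(b"{")
    if s != -1:
        e = k.find(b"}", s + 1)
        if e > s + 1:  # hash tag must be non-empty
            k = k[s + 1:e]
    return crc16(k) % 16384

def parse_moved(err):
    """Parse a MOVED/ASK redirection error string into (slot, host, port)."""
    kind, slot, addr = err.split()
    assert kind in ("MOVED", "ASK")
    host, port = addr.rsplit(":", 1)
    return int(slot), host, int(port)

print(keyslot("foo"))    # 12182
print(keyslot("hello"))  # 866
print(parse_moved("MOVED 12182 192.168.1.203:6379"))  # (12182, '192.168.1.203', 6379)
```

Keys sharing a hash tag, e.g. `{user1000}.following` and `{user1000}.followers`, land in the same slot, which is what makes multi-key operations on them possible in a cluster.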
If a master is stopped, one of its replicas is automatically promoted to master; once the failed master recovers, it automatically rejoins the cluster as a replica.
Adding a new node to the cluster
Edit the configuration file
vim /etc/redis/redis.conf
# Enable cluster mode
cluster-enabled yes
# Cluster configuration file (maintained by Redis itself)
cluster-config-file nodes.conf
# Node communication timeout (milliseconds)
cluster-node-timeout 5000
# Enable AOF persistence
appendonly yes
systemctl restart redis
redis-cli --cluster add-node <new-node-IP>:<port> <existing-cluster-node-IP>:<port>
Reshard the cluster to assign some hash slots to the new node
redis-cli --cluster reshard 192.168.1.201:6379
When prompted, enter the number of slots to move (1000), the ID of the .207 node as the receiving node, and "all" as the source nodes.
If the new node is to serve as a replica instead, use:
redis-cli --cluster add-node <new-node-host>:<port> <existing-node-host>:<port> --cluster-slave --cluster-master-id <master-node-ID>
View cluster information
redis-cli --cluster check 192.168.1.207:6379
192.168.1.203:6379 (905d404c...) -> 1 keys | 4660 slots | 1 slaves.
192.168.1.202:6379 (2effc715...) -> 0 keys | 4659 slots | 1 slaves.
192.168.1.207:6379 (38742141...) -> 0 keys | 999 slots | 0 slaves.
192.168.1.205:6379 (0afa0969...) -> 1 keys | 6066 slots | 1 slaves.
[OK] 2 keys in 4 masters.
0.00 keys per slot on average.
>>> Performing Cluster Check (using node 192.168.1.201:6379)
S: 6a29ebd36e73dfb988e68e9e28a0c7978940970c 192.168.1.201:6379
slots: (0 slots) slave
replicates 0afa09695a030192573c3ff3060e9c730637c3aa
M: 905d404cb2d9cfd7210193955907a314e3449dd5 192.168.1.203:6379
slots:[11724-16383] (4660 slots) master
1 additional replica(s)
M: 2effc715175b2545de212f38043ecb9dcbb77749 192.168.1.202:6379
slots:[6264-10922] (4659 slots) master
1 additional replica(s)
S: 42f405341bee314f25d653fc51ff5563a166513a 192.168.1.204:6379
slots: (0 slots) slave
replicates 905d404cb2d9cfd7210193955907a314e3449dd5
S: 75ae3a209cef3e225ff4f023bdb685e9c5f0543b 192.168.1.206:6379
slots: (0 slots) slave
replicates 2effc715175b2545de212f38043ecb9dcbb77749
M: 38742141744bc269ea6deda89b39d555b10eb60d 192.168.1.207:6379
slots:[0-394],[5962-6263],[11422-11723] (999 slots) master
M: 0afa09695a030192573c3ff3060e9c730637c3aa 192.168.1.205:6379
slots:[395-5961],[10923-11421] (6066 slots) master
1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
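As a sanity check on the numbers above, the per-master slot ranges from the check output can be tallied; the new node ended up with 999 slots rather than the requested 1000, presumably due to rounding when drawing slots from each source master:

```python
import re

# Master slot ranges copied from the `redis-cli --cluster check` output above
masters = {
    "192.168.1.203": "[11724-16383]",
    "192.168.1.202": "[6264-10922]",
    "192.168.1.207": "[0-394],[5962-6263],[11422-11723]",
    "192.168.1.205": "[395-5961],[10923-11421]",
}

def slot_count(ranges):
    """Sum the sizes of the inclusive [start-end] slot ranges in a slots string."""
    return sum(int(b) - int(a) + 1 for a, b in re.findall(r"\[(\d+)-(\d+)\]", ranges))

counts = {ip: slot_count(r) for ip, r in masters.items()}
print(counts)                # matches the per-master "slots" figures in the check output
print(sum(counts.values()))  # 16384 -> the whole slot space is covered
```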
Removing a node
# Flush the data
redis-cli -h 192.168.1.207 -p 6379 flushall
# Empty its slots
# When prompted, move the 999 slots held by the .207 node: enter 999 as the count, another node's ID as the destination, the .207 node's ID as the source, then type done
redis-cli --cluster reshard 192.168.1.201:6379
redis-cli --cluster del-node 192.168.1.207:6379 38742141744bc269ea6deda89b39d555b10eb60d