Setting Up Redis Master-Slave Replication

Preface
I've been using Redis recently, so I spent some time studying it, going through material on data types, master-slave replication, persistence mechanisms, and so on, and doing some hands-on practice. Redis is fairly simple to configure and there is plenty of material online, so I'm writing down the steps here for future reference.

System Environment

hadoop-master 192.168.186.128   # master node
hadoop-slave  192.168.186.129   # slave node
[root@hadoop-slave ~]# cat /etc/issue
CentOS release 6.4 (Final)
Kernel \r on an \m

How Master-Slave Replication Works

1. When a slave starts up, whether it is connecting to the master for the first time or reconnecting, it sends a SYNC command.
2. When the master receives SYNC, it runs BGSAVE (a background save) to dump the dataset to disk as an RDB snapshot file, while collecting every newly received write command (non-query commands that modify the dataset) into a buffer.
3. Once the background save finishes, the master sends the entire database file to the slave.
4. On receiving the database file, the slave flushes its memory and loads the file, completing one full synchronization.
5. The master then sends the commands collected in the buffer, along with any new write commands, to the slave in order.
6. The slave executes these write commands locally, bringing the data fully in sync.
7. From then on, master and slave continuously stream commands asynchronously, keeping the data in sync in near real time.
8. If the connection between master and slave drops, the slave reconnects to the master automatically.

How synchronization resumes after a disconnect depends on the Redis version:

Before 2.8: after a successful reconnect, a full resynchronization is performed automatically
2.8 and later: after a successful reconnect, a partial resynchronization is attempted
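The full-sync flow above can be sketched as a toy simulation in Python. This is purely an illustrative model of the sequence of steps, not Redis's actual implementation; all class and method names here are hypothetical:

```python
import copy

class ToyMaster:
    """Toy model of a Redis master during a full sync (illustrative only)."""
    def __init__(self):
        self.db = {}
        self.repl_buffer = []  # write commands collected while the sync is in progress

    def set(self, key, value):
        self.db[key] = value
        self.repl_buffer.append(("SET", key, value))  # every write also goes to the buffer

    def bgsave_snapshot(self):
        # Step 2: start collecting subsequent writes, then dump the dataset
        # (the deep copy stands in for the RDB snapshot file)
        self.repl_buffer = []
        return copy.deepcopy(self.db)

class ToySlave:
    def __init__(self):
        self.db = {"stale": "old-data"}  # pre-existing data that a full sync discards

    def full_sync(self, master):
        snapshot = master.bgsave_snapshot()  # steps 2-3: master dumps and sends the dataset
        master.set("k2", "v2")               # a write arriving mid-sync lands in the buffer
        self.db.clear()                      # step 4: slave flushes its old data...
        self.db.update(snapshot)             # ...and loads the snapshot
        for cmd, key, value in master.repl_buffer:  # steps 5-6: replay buffered writes
            if cmd == "SET":
                self.db[key] = value

master = ToyMaster()
master.set("k1", "v1")
slave = ToySlave()
slave.full_sync(master)
print(slave.db)  # prints {'k1': 'v1', 'k2': 'v2'} -- stale data gone, mid-sync write applied
```

Note how the write that arrives while the snapshot is in flight still reaches the slave via the buffer replay; that is the point of step 5.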

Installing Redis

The master and slave nodes are set up the same way.

[root@hadoop-slave ~]# wget https://github.com/antirez/redis/archive/2.8.20.tar.gz
[root@hadoop-slave ~]# tar -zxf 2.8.20.tar.gz
[root@hadoop-slave ~]# mv redis-2.8.20/ /usr/local/src/
[root@hadoop-slave src]# cd redis-2.8.20/
[root@hadoop-slave src]# make

When make finishes, the executables (redis-server, redis-cli, etc.) are generated in the src subdirectory.
Next, create a home for Redis under /usr/local/, along with directories for data storage, config files, and so on:

[root@hadoop-slave local]# mkdir /usr/local/redis/{conf,run,db} -pv
[root@hadoop-slave local]# cd /usr/local/src/redis-2.8.20/
[root@hadoop-slave redis-2.8.20]# cp redis.conf /usr/local/redis/conf/
[root@hadoop-slave redis-2.8.20]# cd src/
[root@hadoop-slave src]# cp redis-benchmark redis-check-aof redis-check-dump redis-cli redis-server mkreleasehdr.sh /usr/local/redis/

That completes the Redis installation.
Let's try starting it and check whether the port is listening:

[root@hadoop-slave src]# /usr/local/redis/redis-server /usr/local/redis/conf/redis.conf   & # run it in the background
[root@hadoop-slave redis]# netstat -antulp | grep 6379
tcp 0 0 0.0.0.0:6379 0.0.0.0:* LISTEN 72669/redis-server
tcp 0 0 :::6379 :::* LISTEN 72669/redis-server

The server starts without issues, OK!
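The same listening-port check can be scripted with a plain TCP connect attempt, which is handy for monitoring. A minimal sketch using only the Python standard library (the function name is my own):

```python
import socket

def is_listening(host, port, timeout=1.0):
    """Return True if a TCP connect to host:port succeeds, i.e. something is listening."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # connection refused, timed out, unreachable, etc.
        return False

# e.g. is_listening("127.0.0.1", 6379) should return True once redis-server is up
```

This only proves the port accepts connections; for a real health check you would also send a PING and expect +PONG back.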

Configuring Redis

In this experiment, hadoop-master acts as the master node and hadoop-slave as the slave node.

Only the slave node needs a change: add a slaveof directive to its redis.conf.

[root@hadoop-slave conf]# vi redis.conf 
# slaveof <masterip> <masterport>
slaveof 192.168.186.128 6379

Start Redis on the slave node:

[root@hadoop-slave conf]# /usr/local/redis/redis-server /usr/local/redis/conf/redis.conf &
……
[3097] 25 Aug 02:40:31.481 * The server is now ready to accept connections on port 6379
[3097] 25 Aug 02:40:32.482 * Connecting to MASTER 192.168.186.128:6379
[3097] 25 Aug 02:40:32.483 * MASTER <-> SLAVE sync started
[3097] 25 Aug 02:40:32.484 * Non blocking connect for SYNC fired the event.
[3097] 25 Aug 02:40:32.485 * Master replied to PING, replication can continue...
[3097] 25 Aug 02:40:32.487 * Partial resynchronization not possible (no cached master)
[3097] 25 Aug 02:40:32.488 * Full resync from master: 3a10ba424548ecdbc5e9df756e5cfbabf36de7d3:1
[3097] 25 Aug 02:40:32.588 * MASTER <-> SLAVE sync: receiving 18 bytes from master
[3097] 25 Aug 02:40:32.588 * MASTER <-> SLAVE sync: Flushing old data
[3097] 25 Aug 02:40:32.588 * MASTER <-> SLAVE sync: Loading DB in memory
[3097] 25 Aug 02:40:32.588 * MASTER <-> SLAVE sync: Finished with success

Messages like these show that the sync has started.
At the same time, the master node logs the following:

[root@hadoop-master redis]# [5288] 25 Aug 02:40:32.094 * Slave 192.168.186.129:6379 asks for synchronization
[5288] 25 Aug 02:40:32.094 * Full resync requested by slave 192.168.186.129:6379
[5288] 25 Aug 02:40:32.094 * Starting BGSAVE for SYNC with target: disk
[5288] 25 Aug 02:40:32.113 * Background saving started by pid 5320
[5320] 25 Aug 02:40:32.138 * DB saved on disk
[5320] 25 Aug 02:40:32.138 * RDB: 6 MB of memory used by copy-on-write
[5288] 25 Aug 02:40:32.194 * Background saving terminated with success
[5288] 25 Aug 02:40:32.195 * Synchronization with slave 192.168.186

Testing Replication

On the master:

[root@hadoop-master redis]# ./redis-cli 
127.0.0.1:6379> info replication
# Replication
role:master
connected_slaves:1
slave0:ip=192.168.186.129,port=6379,state=online,offset=71,lag=0
master_repl_offset:71
repl_backlog_active:1
repl_backlog_size:1048576
repl_backlog_first_byte_offset:2
repl_backlog_histlen:70
127.0.0.1:6379> set name "liuyan"
OK
127.0.0.1:6379> get name
"liuyan"
127.0.0.1:6379>
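The `info replication` output is plain `key:value` text, so it is easy to inspect from a script rather than by eye. A small parser sketch (illustrative; the sample is trimmed from the master output above):

```python
def parse_info(text):
    """Parse Redis INFO output ('key:value' lines, '#' section headers) into a dict."""
    info = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and section headers like '# Replication'
        key, _, value = line.partition(":")
        info[key] = value
    return info

sample = """\
# Replication
role:master
connected_slaves:1
slave0:ip=192.168.186.129,port=6379,state=online,offset=71,lag=0
master_repl_offset:71
"""

info = parse_info(sample)
print(info["role"], info["connected_slaves"])  # prints: master 1
```

A monitoring script could feed this the real output of `redis-cli info replication` and alert when `connected_slaves` drops or a slave's `state` is no longer `online`.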

On the slave:

[root@hadoop-slave redis]# ./redis-cli 
127.0.0.1:6379> info replication
# Replication
role:slave
master_host:192.168.186.128
master_port:6379
master_link_status:up
master_last_io_seconds_ago:6
master_sync_in_progress:0
slave_repl_offset:113
slave_priority:100
slave_read_only:1
connected_slaves:0
master_repl_offset:0
repl_backlog_active:0
repl_backlog_size:1048576
repl_backlog_first_byte_offset:0
repl_backlog_histlen:0
127.0.0.1:6379> get name
"liuyan"
127.0.0.1:6379>
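One practical use of these fields: comparing the master's `master_repl_offset` with the slave's `slave_repl_offset` gives the replication lag in bytes of unreplicated command stream. A trivial sketch, with the offsets assumed to be polled from each node separately:

```python
def replication_lag_bytes(master_repl_offset, slave_repl_offset):
    """Bytes of the replication stream the slave has not yet processed.

    0 means the slave is fully caught up with the master.
    """
    return master_repl_offset - slave_repl_offset

# e.g. with offsets polled from both nodes at roughly the same moment:
lag = replication_lag_bytes(113, 113)
print("in sync" if lag == 0 else "slave is %d bytes behind" % lag)  # prints: in sync
```

(In the slave output above, note that `master_repl_offset:0` on the slave refers to the slave's own role as a potential master for sub-slaves; the field to compare against the master is `slave_repl_offset`.)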
