Master-slave mode
Purpose
Read/write separation keeps performance from degrading under high concurrency: the master handles writes while the slaves serve reads
Master-slave mode diagram
As shown in the figure below, only the master accepts writes; all Redis instances, master and slaves alike, can serve reads.
Deployment steps
Create a conf file for each instance
6379.conf
bind 127.0.0.1
port 6379
logfile "6379.log"
dbfilename "dump-6379.rdb"
6380.conf
bind 127.0.0.1
port 6380
logfile "6380.log"
dbfilename "dump-6380.rdb"
slaveof 127.0.0.1 6379
6381.conf
bind 127.0.0.1
port 6381
logfile "6381.log"
dbfilename "dump-6381.rdb"
slaveof 127.0.0.1 6379
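The three files differ only in the port and in whether they declare a master, so they can be generated with a short script. A sketch (file names and ports follow this example):

```python
def make_conf(port, master_port=None):
    """Build the contents of one instance's conf file."""
    lines = [
        "bind 127.0.0.1",
        f"port {port}",
        f'logfile "{port}.log"',
        f'dbfilename "dump-{port}.rdb"',
    ]
    if master_port is not None:
        # Slaves point at the master; newer Redis also accepts "replicaof".
        lines.append(f"slaveof 127.0.0.1 {master_port}")
    return "\n".join(lines) + "\n"

# 6379 is the master; 6380 and 6381 replicate from it.
for port in (6379, 6380, 6381):
    with open(f"{port}.conf", "w") as f:
        f.write(make_conf(port, None if port == 6379 else 6379))
```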
Startup
/usr/local/bin/redis-server 6379.conf &
/usr/local/bin/redis-server 6380.conf &
/usr/local/bin/redis-server 6381.conf &
Verify that the slaves are read-only
As shown below, a write attempted on a slave is rejected:
set aa "1"
(error) READONLY You can't write against a read only replica.
Write to the master and check whether the slaves synchronize the data
Write on the master
set aa "123"
View on a slave
get aa
"123"
Sentinel mode
Purpose
Master-slave replication keeps performance up under high concurrency, but the master can go down at any time. Once it does, write operations become impossible and the two slaves alone cannot help. So we add sentinels on top of master-slave replication: when the master goes down, the sentinels elect a new master from among the slaves.
Diagram
As shown in the figure below, this example uses three sentinels. As long as more than half of the sentinels agree that the master is down, a new master is elected from among the slaves. Even if the original master comes back online, it can only rejoin the cluster as a slave.
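The voting rule can be sketched as a small function. This is a simplification: real Sentinel distinguishes the configured quorum needed to mark the master objectively down from the strict majority a sentinel needs to be authorized to lead the failover.

```python
def can_failover(votes_down, total_sentinels, quorum):
    """True when enough sentinels agree the master is down.

    Marking the master objectively down needs `quorum` votes; the sentinel
    leading the failover additionally needs a strict majority of all sentinels.
    """
    majority = total_sentinels // 2 + 1
    return votes_down >= quorum and votes_down >= majority
```

With three sentinels and quorum 2 (the values used in the configs below), two "down" votes are enough to trigger a failover.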
Deployment steps
Create sentinel-26379.conf, sentinel-26380.conf and sentinel-26381.conf
sentinel-26379.conf
port 26379
daemonize yes
logfile "26379.log"
dir /opt/soft/redis/data
sentinel monitor mymaster 127.0.0.1 6379 2
sentinel down-after-milliseconds mymaster 30000
sentinel parallel-syncs mymaster 1
sentinel failover-timeout mymaster 180000
sentinel myid mm55d2d712b1f3f312b637f9b546f00cdcedc787
sentinel-26380.conf
port 26380
daemonize yes
logfile "26380.log"
dir /opt/soft/redis/data
sentinel monitor mymaster 127.0.0.1 6379 2
sentinel down-after-milliseconds mymaster 30000
sentinel parallel-syncs mymaster 1
sentinel failover-timeout mymaster 180000
sentinel myid mm55d2d712b1f3f312b637f9b546f00cdcedc788
sentinel-26381.conf
port 26381
daemonize yes
logfile "26381.log"
dir /opt/soft/redis/data
sentinel monitor mymaster 127.0.0.1 6379 2
sentinel down-after-milliseconds mymaster 30000
sentinel parallel-syncs mymaster 1
sentinel failover-timeout mymaster 180000
sentinel myid mm55d2d712b1f3f312b637f9b546f00cdcedc789
Startup
/usr/local/bin/redis-sentinel sentinel-26379.conf
/usr/local/bin/redis-sentinel sentinel-26380.conf
/usr/local/bin/redis-sentinel sentinel-26381.conf
Check whether the sentinels are set up successfully
As shown below, connect with the client and check the sentinel status
/usr/local/bin/redis-cli -p 26379
127.0.0.1:26379> info sentinel
If the output shows sentinels=3, the sentinel deployment succeeded
# Sentinel
sentinel_masters:1
sentinel_tilt:0
sentinel_running_scripts:0
sentinel_scripts_queue_length:0
sentinel_simulate_failure_flags:0
master0:name=mymaster,status=odown,address=127.0.0.1:6379,slaves=0,sentinels=3
Check that the current master is 6379
[root@VM-0-13-centos myredis]# /usr/local/bin/redis-cli -p 6379
127.0.0.1:6379> info replication
# Replication
role:master
connected_slaves:2
slave0:ip=127.0.0.1,port=6380,state=online,offset=700,lag=1
slave1:ip=127.0.0.1,port=6381,state=online,offset=700,lag=1
master_failover_state:no-failover
master_replid:bdedf5e127208fafedf2fd67939884de2325c089
master_replid2:0000000000000000000000000000000000000000
master_repl_offset:700
second_repl_offset:-1
repl_backlog_active:1
repl_backlog_size:1048576
repl_backlog_first_byte_offset:1
repl_backlog_histlen:700
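INFO output is just `key:value` lines, so a monitoring script can check the role programmatically instead of reading it by eye. A minimal parser sketch (the sample string is trimmed from the output above):

```python
def parse_info(text):
    """Parse Redis INFO output: one key:value per line, '#' starts a section header."""
    info = {}
    for line in text.splitlines():
        line = line.strip()
        if line and not line.startswith("#") and ":" in line:
            key, value = line.split(":", 1)
            info[key] = value
    return info

sample = """# Replication
role:master
connected_slaves:2
slave0:ip=127.0.0.1,port=6380,state=online,offset=700,lag=1
"""
info = parse_info(sample)
```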
Kill the master to see whether the sentinels elect a new one
As shown below, shut down the instance that currently holds the master role; checking from another client afterwards shows that a new master has been elected
127.0.0.1:6381> SHUTDOWN
Cluster mode
Purpose
The next problem to solve is storing large data volumes under high concurrency. The principle of clustering is simple: add many Redis instances and spread the data evenly across them by evenly allocating hash slots
Diagram
As shown in the figure below, three master-slave groups form a cluster, which preserves robustness while solving the problem of storing large volumes of written data
Deployment steps
Create 6 config files
# Specify the Redis node port
port 6370
# Specify the pid file for this process
pidfile /var/run/redis_6370.pid
# RDB persistence file for each node
dbfilename dump6370.rdb
# Important: enable cluster mode
cluster-enabled yes
# Important: specify the cluster configuration file for each node
cluster-config-file nodes-6370.conf
Startup
/usr/local/bin/redis-server cluster-6370.conf &
/usr/local/bin/redis-server cluster-6371.conf &
/usr/local/bin/redis-server cluster-6380.conf &
/usr/local/bin/redis-server cluster-6381.conf &
/usr/local/bin/redis-server cluster-6390.conf &
/usr/local/bin/redis-server cluster-6391.conf &
Create cluster
/usr/local/bin/redis-cli --cluster create --cluster-replicas 1 127.0.0.1:6370 127.0.0.1:6380 127.0.0.1:6390 127.0.0.1:6371 127.0.0.1:6381 127.0.0.1:6391
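With --cluster-replicas 1 and six nodes, redis-cli picks three masters and assigns each a contiguous range of the 16384 hash slots. The arithmetic of an even split can be sketched as follows (illustrative only; redis-cli's exact boundary placement may differ by a slot):

```python
def split_slots(n_masters, total_slots=16384):
    """Divide the slot space into contiguous, nearly equal ranges."""
    base, extra = divmod(total_slots, n_masters)
    ranges, start = [], 0
    for i in range(n_masters):
        size = base + (1 if i < extra else 0)  # first `extra` masters get one more slot
        ranges.append((start, start + size - 1))
        start += size
    return ranges

# Three masters -> each owns 5461 or 5462 of the 16384 slots.
ranges = split_slots(3)
```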
Connect to any node with the client and run the following command to check whether the cluster is up
cluster info
If the output contains cluster_state:ok, the cluster is running
Store data
As shown below, since this is a cluster you need to add -c, so that when a key's slot does not belong to the current node, the client is redirected to the node that owns it
/usr/local/bin/redis-cli -c -p 6370
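The redirection works because every key maps to one of 16384 slots via CRC16(key) mod 16384, where an optional {hash tag} restricts hashing to the braced part so related keys can land on the same node. A minimal sketch of the slot computation (CRC16-CCITT/XMODEM, the variant Redis uses):

```python
def crc16(data):
    """CRC16-CCITT (XMODEM): polynomial 0x1021, initial value 0."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = (crc << 1) ^ 0x1021 if crc & 0x8000 else crc << 1
            crc &= 0xFFFF
    return crc

def key_slot(key):
    """Mimic CLUSTER KEYSLOT: hash only the {tag} part when a non-empty tag exists."""
    start = key.find("{")
    if start != -1:
        end = key.find("}", start + 1)
        if end > start + 1:  # at least one character between the braces
            key = key[start + 1:end]
    return crc16(key.encode()) % 16384
```

For example, key_slot("foo") matches what CLUSTER KEYSLOT foo returns on a real cluster, and any key tagged {foo} hashes to that same slot.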
Kill a master to see the effect; a new master is elected in about 15 seconds
References
Redis single instance, master-slave mode, sentinel and cluster configuration methods and their advantages and disadvantages
Redis cluster is easy to build