Distributed technology - Redis&FastDFS&RabbitMQ

Phase VII module II

Redis

1. General

1.1 evolution of Internet Architecture

Stage 1: data access volume is small; a simple architecture is enough!

[Figure: Redis architecture, stage 1 (image missing)]

Stage 2: the data access volume is large, and the cache technology is used to alleviate the pressure of the database.

Different businesses access different databases

[Figure: Redis architecture, stage 2 (image missing)]

Phase 3:

Master slave read-write separation.

The previous cache can indeed relieve the pressure on the database, but the write and read are concentrated on one database, and the pressure comes again.

One database handles writes and another handles reads, each with its own job. Much better!

Let the master (master database) respond to transactional (add, delete, modify) operations, and the slave (slave database) respond to non transactional (query) operations, and then use master-slave replication to synchronize the transactional operations on the master to the slave database

MySQL master/slave is the standard setup for websites!

[Figure: Redis architecture, stage 3 (image missing)]

Phase 4:

On the basis of master-slave replication and read-write separation of mysql, the main database of mysql began to have a bottleneck

Because MyISAM uses table locks, concurrency performance is particularly poor

Sharding (splitting databases and tables) became popular. MySQL also introduced table partitioning; although unstable, it showed promise.

Moving on: the MySQL cluster.

[Figure: Redis architecture, stage 4 (image missing)]

1.2 introduction to redis

1. The "three highs" demanded of Internet systems

High concurrency, high scalability, high performance

2. Redis is a NoSQL (Not Only SQL) database: it is fast, handles high concurrency well, and runs in memory

3. Advantages of NoSQL database compared with traditional database

NoSQL database can store customized data formats at any time without establishing fields for the data to be stored in advance.

In relational database, adding and deleting fields is a very troublesome thing. For a table with a very large amount of data, adding fields is a nightmare

4. Common usage scenarios of redis

Caching: undoubtedly the best-known Redis use case, and very effective for improving server performance. For frequently accessed data, querying a relational database every time is costly; keeping it in Redis, which lives in memory, makes access very efficient.

Leaderboards and rankings: doing this with a traditional relational database (MySQL, Oracle, etc.) is cumbersome, but Redis's sorted set (zset) data structure handles it easily;

Counter / rate limiter: using Redis's atomic increment, we can count things such as user likes and page views; doing this in MySQL means frequent reads and writes and considerable pressure. The typical rate-limiter scenario is limiting how often a user may call an API, e.g. during a flash sale, to prevent unnecessary pressure from users clicking frantically (see the sketch after this list);

Friend relationships: set commands such as intersection, union and difference make features like "mutual friends" and "shared interests" easy to implement;

Simple message queue: besides Redis's own publish/subscribe mode, a list can implement a queue for things like arrival notifications and e-mail sending, which do not need high reliability but would otherwise put heavy pressure on the DB; a list gives you asynchronous decoupling;

Session sharing: taking JSP as an example, the session is saved in server files by default; in a cluster the same user may land on different machines and be forced to log in repeatedly. With sessions stored in Redis, the user gets the same session information no matter which machine serves the request.
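As a small illustration of the counter / rate-limiter idea above, here is a minimal redis-cli sketch; the key name rate:user:1001 and the limit of 10 calls per 60 seconds are made-up values:

127.0.0.1:6379> incr rate:user:1001          # count this API call; returns the running total
(integer) 1
127.0.0.1:6379> expire rate:user:1001 60     # on the first call, start a 60-second window
(integer) 1
127.0.0.1:6379> incr rate:user:1001          # later calls in the same window keep incrementing
(integer) 2

If the value returned by incr ever exceeds 10 within the window, the application rejects the request.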

1.3 Redis/Memcache/MongoDB comparison

1.3.1 Redis and Memcache

Redis and Memcache are both in memory databases. However, Memcache can also be used to cache other things, such as pictures, videos, and so on.

Memcache supports only a single key-value data structure; Redis is richer, also providing list, set, hash and other data structures, which helps reduce the number of network I/O round trips.

Virtual memory: when Redis runs out of physical memory, it can swap values that have not been used for a long time out to disk.

Data safety: if Memcache crashes, the data is gone (no persistence mechanism); Redis can periodically save to disk (persistence).

Disaster recovery: after Memcache crashes, data cannot be recovered; lost Redis data can be recovered from RDB or AOF.

1.3.2 Redis and MongoDB

redis and mongodb are not competitive, but more cooperative and coexisting.

mongodb is essentially a hard disk database. It will still consume a lot of resources when dealing with complex queries, and it is still inevitable to make multiple queries when dealing with complex logic.

In this case, an in memory database such as redis or Memcache is needed as an intermediate layer for caching and acceleration.

For example, in some complex page scenarios, if the contents of the whole page are queried from mongodb, it may take dozens of query statements and take a long time. If the requirements allow, the objects of the whole page can be cached in redis and updated regularly. In this way, mongodb and redis can cooperate well
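A tiny redis-cli sketch of that page-caching idea; the key name page:home and the 300-second lifetime are assumed values for illustration:

127.0.0.1:6379> setex page:home 300 "<assembled page content>"   # cache the whole page object for 5 minutes
OK
127.0.0.1:6379> get page:home        # later requests read the cached copy instead of hitting mongodb
"<assembled page content>"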

1.4 distributed database CAP principle

1.4.1 introduction to cap

1. Traditional relational database transactions have ACID:

A: atomicity

C: consistency

I: isolation

D: durability

2. CAP of distributed database:

C (Consistency): strong Consistency

"all nodes see the same data at the same time", that is, after the update operation is successful and returns to the client, the data of all nodes at the same time is completely consistent, which is distributed consistency. The problem of consistency is inevitable in concurrent systems. For the client, consistency refers to how to obtain the updated data during concurrent access. From the server side, it is how to copy and distribute the updates to the whole system to ensure the final consistency of data.

A (Availability): high Availability

Availability means "reads and writes always succeed", that is, the service is always available and responds in normal time. Good availability mainly means the system serves users well, without failed operations, access timeouts or other bad experiences.

P (Partition tolerance): Partition tolerance

That is, when a distributed system encounters a node or network partition failure, it can still provide services that meet consistency or availability.

Partition fault tolerance requires that although the application is a distributed system, it looks as if it is a functioning whole. For example, in the current distributed system, one or several machines are down, and the other remaining machines can operate normally to meet the system requirements, which has no impact on the user experience.

1.4.2 CAP theory

CAP theory was proposed for distributed database environments, so partitions must be tolerated: the P property is mandatory.

Because P is necessary, we need to choose A and C.

As we all know, in the distributed environment, in order to ensure the system availability, replication is usually adopted to avoid the damage of a node, resulting in the unavailability of the system. Then, there are many copies of data on each node, and it takes time and requires smooth network to copy data from one node to another. Therefore, when P occurs, that is, when data cannot be copied to a node, you have two choices:

Select availability A. at this time, the disconnected node can still provide services to the system, but its data cannot be guaranteed to be synchronized (losing the C attribute).

Select consistency C. in order to ensure the consistency of the database, we must wait for the lost node to recover. In this process, that node is not allowed to provide external services. At this time, the system is unavailable (losing the A attribute).

The most common example is the separation of reading and writing. A node is responsible for writing data and then synchronizing the data to other nodes. Other nodes provide reading services. When there is a communication problem between the two nodes, you are faced with selecting a (continue to provide services, but the data is not guaranteed to be accurate) and C (the user is in a waiting state until the data synchronization is completed).

1.4.3 CAP summary

Partitions are normal and unavoidable, so the three properties cannot all be satisfied at once

Availability and consistency are a pair of enemies

High consistency and low availability

Low consistency and high availability

Therefore, according to the CAP principle, NoSQL database is divided into three categories: meeting the CA principle, meeting the CP principle and meeting the AP principle:

CA - single point cluster, a system that meets consistency and availability, is usually not very powerful in scalability.

CP - systems that meet consistency and partition tolerance, usually have low performance.

AP - systems that meet availability and partition tolerance may generally have lower requirements for consistency.

2. Download and install

2.1 download

redis: http://www.redis.net.cn/

Graphics tools: https://redisdesktop.com/download

2.2 installation

Although it can be installed on the windows operating system, it is not recommended officially, so we always install it on linux

1. Upload the tar.gz package and unzip it

tar -zxvf redis-5.0.4.tar.gz

2. Install gcc (there must be a network)

yum -y install gcc

If you forget whether it has been installed, you can use the gcc -v command to view the gcc version. If it has not been installed, you will be prompted that the command does not exist

3. Enter the redis directory and compile

make

4. After compiling, start the installation

make install

2.3 operation after installation

2.3.1 background operation mode

1. By default, redis does not run in the background. If you need it to, modify the configuration file and set daemonize yes; when started as a background service, it writes its process id to a pid file.

vim /opt/redis-5.0.4/redis.conf
daemonize yes

2. Start as configuration file

cd /usr/local/bin
redis-server /opt/redis-5.0.4/redis.conf
2.3.2 closing the database

1. Single instance closing

redis-cli shutdown

2. Close multiple instances

redis-cli -p 6379 shutdown
2.3.3 common operations

1. Check whether port 6379 is listening

netstat -lntp | grep 6379

Why is the port 6379?

6379 is the number corresponding to MERZ on the mobile phone button, and MERZ is taken from the name of Italian singer Alessia Merz. MERZ has long been synonymous with stupidity by antirez (author of redis) and his friends.

2. Check whether the background process exists

ps -ef|grep redis
2.3.4 connect redis and test
redis-cli
ping
2.3.5 HelloWorld
set k1 china	# Save data
get k1			# get data
2.3.6 test performance

1. Press ctrl+c to exit the redis client

redis-benchmark

2. After executing the command, the command will not stop automatically. We need to manually ctrl+c to stop the test

[root@localhost bin]# redis-benchmark
    ====== PING_INLINE ======
    100000 requests completed in 1.80 seconds # 100000 requests were processed in 1.8 seconds. The performance depends on the configuration of the notebook
    50 parallel clients
    3 bytes payload
    keep alive: 1

87.69% <= 1 milliseconds
99.15% <= 2 milliseconds
99.65% <= 3 milliseconds
99.86% <= 4 milliseconds
99.92% <= 5 milliseconds
99.94% <= 6 milliseconds
99.97% <= 7 milliseconds
100.00% <= 7 milliseconds
55524.71 requests per second # Number of requests processed per second
2.3.7 default 16 databases
vim /opt/redis-5.0.4/redis.conf

127.0.0.1:6379> get k1 					# Query k1
"china"
127.0.0.1:6379> select 16 				# Switch to database 16
(error) ERR DB index is out of range 	# The index of the database is out of range
127.0.0.1:6379> select 15 				# Switch to database 15
OK
127.0.0.1:6379[15]> get k1 				# Query k1
(nil)
127.0.0.1:6379[15]> select 0 			# Switch database 0
OK
127.0.0.1:6379> get k1 					# Query k1
"china"
2.3.8 number of database keys
dbsize

redis supports command completion (tab) in linux

2.3.9 clearing the database

1. Empty the current library

flushdb

2. Empty all (16) libraries and use with caution!!

flushall
2.3.10 fuzzy query (key)

The fuzzy query keys command has three wildcards:

1. *: wildcard any number of characters

Query all keys

keys *

Fuzzy query starts with k, followed by any number of characters

keys k*

Fuzzy query: keys ending in e, with any number of characters before it

keys *e

Double * pattern, matching any number of characters: query the key containing k

keys *k*

2.?: Wildcard single character

Fuzzy query k prefix, and match a character

keys k?

You only remember that the first letter is k and its length is 3

keys k??

3. []: wildcard a character in parentheses

Remember the other letters. The second letter may be a or e

keys r[ae]dis
2.3.11 key

1.exists key: determines whether a key exists

127.0.0.1:6379> exists k1
(integer) 1 # existence
127.0.0.1:6379> exists y1
(integer) 0 # non-existent

2.move key db: move (cut and paste) a key to the database with the given number

127.0.0.1:6379> move x1 8 	# Move x1 to library 8
(integer) 1 # Move succeeded
127.0.0.1:6379> exists x1 	# Check whether x1 exists in the current library
(integer) 0 # Does not exist (because it has been removed)
127.0.0.1:6379> select 8 	# Switch to library 8
OK
127.0.0.1:6379[8]> keys * 	# View all keys in the current library
1) "x1"

3.ttl key: check how long until the key expires (-1: never expires, -2: already expired or the key does not exist)

time to live

127.0.0.1:6379[8]> ttl x1
(integer) -1 # Never expire 

4.expire key seconds: set the expiration time (life countdown) for the key

127.0.0.1:6379[8]> set k1 v1 		# Save k1
OK
127.0.0.1:6379[8]> ttl k1 			# View the expiration time of k1
(integer) -1 # Never expire 
127.0.0.1:6379[8]> expire k1 10 	# Set the expiration time of k1 to 10 seconds (automatically destroy after 10 seconds)
(integer) 1 # Set successfully
127.0.0.1:6379[8]> get k1 			# Get k1
"v1"
127.0.0.1:6379[8]> ttl k1 			# View the expiration time of k1
(integer) 2 # Two seconds to expire
127.0.0.1:6379[8]> get k1
(nil)
127.0.0.1:6379[8]> keys * 			# Destroyed from memory
(empty list or set)

5.type key: view the data type of the key

127.0.0.1:6379[8]> type k1
string # The data type of k1 is string

3. Use Redis

3.1 five data types

Operation document: http://redisdoc.com/

3.1.1 String

1.set/get/del/append/strlen

127.0.0.1:6379> set k1 v1 # Save data
OK
127.0.0.1:6379> set k2 v2 # Save data
OK
127.0.0.1:6379> keys *
1) "k1"
2) "k2"
127.0.0.1:6379> del k2 		# Delete data k2
(integer) 1
127.0.0.1:6379> keys *
1) "k1"
127.0.0.1:6379> get k1 		# Get data k1
"v1"
127.0.0.1:6379> append k1 abc # Append data abc to the value of k1
(integer) 5 # Length of return value (number of characters)
127.0.0.1:6379> get k1
"v1abc"
127.0.0.1:6379> strlen k1 # Returns the length (number of characters) of the k1 value
(integer) 5

2.incr/decr/incrby/decrby: addition and subtraction. The operation must be numeric

incr: increase

decr: decrease

127.0.0.1:6379> set k1 1 	# The value of initialization k1 is 1
OK
127.0.0.1:6379> incr k1 	# k1 increases by 1 (equivalent to + +)
(integer) 2
127.0.0.1:6379> incr k1
(integer) 3
127.0.0.1:6379> get k1
"3"
127.0.0.1:6379> decr k1 	# k1 minus 1 (equivalent to --)
(integer) 2
127.0.0.1:6379> decr k1
(integer) 1
127.0.0.1:6379> get k1
"1"
127.0.0.1:6379> incrby k1 3 # k1 increases by 3 (equivalent to + = 3)
(integer) 4
127.0.0.1:6379> get k1
"4"
127.0.0.1:6379> decrby k1 2 # k1 minus 2 (equivalent to - = 2)
(integer) 2
127.0.0.1:6379> get k1
"2"

3.getrange/setrange: similar to between... and

Range: range

127.0.0.1:6379> set k1 abcdef 		# The value of initialization k1 is abcdef
OK
127.0.0.1:6379> get k1
"abcdef"
127.0.0.1:6379> getrange k1 0 -1 	# Query k1 all values
"abcdef"
127.0.0.1:6379> getrange k1 0 3 	# Query the value of k1 from subscript 0 to subscript 3 (including 0 and 3, a total of 4 characters are returned)
"abcd"
127.0.0.1:6379> setrange k1 1 xxx 	# Replace the value of k1 with xxx starting from subscript 1
(integer) 6
127.0.0.1:6379> get k1
"axxxef"

4.setex/setnx

setex (set with expire): set the lifetime while adding the data

127.0.0.1:6379> setex k1 5 v1 	# While adding k1 v1, set a 5-second lifetime
OK
127.0.0.1:6379> get k1
"v1"
127.0.0.1:6379> get k1
(nil) 							# Expired, the value v1 of k1 is automatically destroyed

setnx (set if not exists): when adding data, first check whether the key already exists, so existing data is not overwritten

127.0.0.1:6379> setnx k1 wei
(integer) 0 		# Adding failed because k1 already exists
127.0.0.1:6379> get k1
"weiwei"
127.0.0.1:6379> setnx k2 wei
(integer) 1 		# k2 does not exist, so it was added successfully

5.mset/mget/msetnx

m: multiple (operate on several keys at once)

127.0.0.1:6379> set k1 v1 k2 v2 		# set does not support adding multiple pieces of data at a time
(error) ERR syntax error
127.0.0.1:6379> mset k1 v1 k2 v2 k3 v3 	# mset can add multiple pieces of data at a time
OK
127.0.0.1:6379> keys *
1) "k1"
2) "k2"
3) "k3"
127.0.0.1:6379> mget k2 k3 			# Get multiple pieces of data at a time
1) "v2"
2) "v3"
127.0.0.1:6379> msetnx k3 v3 k4 v4 	# When adding multiple pieces of data at a time, if there are already existing ones in the added data, it will fail
(integer) 0
127.0.0.1:6379> msetnx k4 v4 k5 v5 	# When adding multiple pieces of data at a time, if none of the added data exists, it is successful
(integer) 1

6.getset: get the old value first, then set the new one

127.0.0.1:6379> getset k6 v6
(nil) 					# Because there is no k6, get is null, and then add the value of k6 v6 to the database
127.0.0.1:6379> keys *
1) "k4"
2) "k1"
3) "k2"
4) "k3"
5) "k5"
6) "k6"
127.0.0.1:6379> get k6
"v6"
127.0.0.1:6379> getset k6 vv6 # First obtain the value of k6, and then modify the value of k6 to vv6
"v6"
127.0.0.1:6379> get k6
"vv6"
3.1.2 List

Push and pop, like the magazine of an AK-47: push loads a round, pop fires one

1.lpush/rpush/lrange

l: left; lpush pushes each new element onto the left (top) end

r: right; rpush appends each new element to the right (bottom) end

127.0.0.1:6379> lpush list01 1 2 3 4 5 	# Add from top to bottom
(integer) 5
127.0.0.1:6379> keys *
1) "list01"
127.0.0.1:6379> lrange list01 0 -1 		# Query all data in list01. 0 indicates the beginning and - 1 indicates the end
1) "5"
2) "4"
3) "3"
4) "2"
5) "1"
127.0.0.1:6379> rpush list02 1 2 3 4 5 	# Add from bottom to top
(integer) 5
127.0.0.1:6379> lrange list02 0 -1
1) "1"
2) "2"
3) "3"
4) "4"
5) "5"

2.lpop/rpop: remove the first element from the left (top) or from the right (bottom)

127.0.0.1:6379> lpop list02 # Remove the first element from the left (top) edge
"1"
127.0.0.1:6379> rpop list02 # Remove the first element from the right (bottom) edge
"5"

3.lindex: query elements according to subscripts (from left to right, from top to bottom)

127.0.0.1:6379> lrange list01 0 -1
1) "5"
2) "4"
3) "3"
4) "2"
5) "1"
127.0.0.1:6379> lindex list01 2 # A number from top to bottom with a subscript of 2
"3"
127.0.0.1:6379> lindex list01 1 # A number from top to bottom with a subscript of 1
"4"

4.llen: returns the list length

127.0.0.1:6379> llen list01
(integer) 5

5.lrem: delete n occurrences of a value

127.0.0.1:6379> lpush list01 1 2 2 3 3 3 4 4 4 4
(integer) 10
127.0.0.1:6379> lrem list01 2 3 	# Remove two occurrences of the value 3 from list01
(integer) 2
127.0.0.1:6379> lrange list01 0 -1
1) "4"
2) "4"
3) "4"
4) "4"
5) "3"
6) "2"
7) "2"
8) "1"

6.ltrim: keep only the values in the specified index range and discard everything else

ltrim key startindex endindex

127.0.0.1:6379> lpush list01 1 2 3 4 5 6 7 8 9
(integer) 9
127.0.0.1:6379> lrange list01 0 -1
1) "9" # Subscript 0
2) "8" # Subscript 1
3) "7" # Subscript 2
4) "6" # Subscript 3
5) "5" # Subscript 4
6) "4" # Subscript 5
7) "3" # Subscript 6
8) "2" # Subscript 7
9) "1" # Subscript 8
127.0.0.1:6379> ltrim list01 3 6 # Intercept the values of subscripts 3 ~ 6 and throw away everything else
OK
127.0.0.1:6379> lrange list01 0 -1
1) "6"
2) "5"
3) "4"
4) "3"

7. rpoplpush: transfer an element from one list to another (pop from the right of the first, push onto the left of the second)

127.0.0.1:6379> rpush list01 1 2 3 4 5
(integer) 5
127.0.0.1:6379> lrange list01 0 -1
1) "1"
2) "2"
3) "3"
4) "4"
5) "5"
127.0.0.1:6379> rpush list02 1 2 3 4 5
(integer) 5
127.0.0.1:6379> lrange list02 0 -1
1) "1"
2) "2"
3) "3"
4) "4"
5) "5"
127.0.0.1:6379> rpoplpush list01 list02 # One on the right of list01, and enter the first position of list02 from the left
"5"
127.0.0.1:6379> lrange list01 0 -1
1) "1"
2) "2"
3) "3"
4) "4"
127.0.0.1:6379> lrange list02 0 -1
1) "5"
2) "1"
3) "2"
4) "3"
5) "4"
6) "5"

8.lset: change a value of a subscript

​ lset key index value

127.0.0.1:6379> lrange list02 0 -1
1) "5"
2) "1"
3) "2"
4) "3"
5) "4"
6) "5"
127.0.0.1:6379> lset list02 0 x 	# Change the element with subscript 0 in list02 to x
OK
127.0.0.1:6379> lrange list02 0 -1
1) "x"
2) "1"
3) "2"
4) "3"
5) "4"
6) "5"

9.linsert: insert element (before / after specifying an element)

​ linsert key before/after oldvalue newvalue

127.0.0.1:6379> lrange list02 0 -1
1) "x"
2) "1"
3) "2"
4) "3"
5) "4"
6) "5"
127.0.0.1:6379> linsert list02 before 2 java 	# Enter from the left and insert java before the 2 element in list02
(integer) 7
127.0.0.1:6379> lrange list02 0 -1
1) "x"
2) "1"
3) "java"
4) "2"
5) "3"
6) "4"
7) "5"
127.0.0.1:6379> linsert list02 after 2 redis 	# Enter from the left and insert redis after the 2 element in list02
(integer) 8
127.0.0.1:6379> lrange list02 0 -1
1) "x"
2) "1"
3) "java"
4) "2"
5) "redis"
6) "3"
7) "4"
8) "5"

10. Performance summary: like adding carriages to a train, operations at the head and tail are efficient, while operations in the middle are slow;

3.1.3 Set

Similar to Java's Set: duplicate elements are not allowed

1. sadd/smembers/sismember: add / view / check whether an element exists

127.0.0.1:6379> sadd set01 1 2 2 3 3 3 	# Add elements (automatically exclude duplicate elements)
(integer) 3
127.0.0.1:6379> smembers set01 			# Query set01 set
1) "1"
2) "2"
3) "3"
127.0.0.1:6379> sismember set01 2
(integer) 1 # existence
127.0.0.1:6379> sismember set01 5
(integer) 0 # non-existent

Note: 1 and 0 are not subscripts, but Booleans. 1: true exists, 0: false does not exist

2. scard: get the number of elements in the set

127.0.0.1:6379> scard set01
(integer) 3 # There are 3 elements in the collection

3.srem: delete elements in the set

​ srem key value

127.0.0.1:6379> srem set01 2 	# Remove element 2 from set01
(integer) 1 # 1 indicates successful removal

4.srandmember: get several elements randomly from the set

srandmember key count (how many random elements to return)

127.0.0.1:6379> sadd set01 1 2 3 4 5 6 7 8 9
(integer) 9
127.0.0.1:6379> smembers set01
1) "1"
2) "2"
3) "3"
4) "4"
5) "5"
6) "6"
7) "7"
8) "8"
9) "9"
127.0.0.1:6379> srandmember set01 3 # Get 3 elements randomly from set01
1) "8"
2) "2"
3) "3"
127.0.0.1:6379> srandmember set01 5 # Get 5 elements randomly from set01
1) "5"
2) "8"
3) "7"
4) "4"
5) "6"

5.spop: randomly pop (remove) an element from the set

127.0.0.1:6379> smembers set01
1) "1"
2) "2"
3) "3"
4) "4"
5) "5"
6) "6"
7) "7"
8) "8"
9) "9"
127.0.0.1:6379> spop set01 # Remove an element at random
"8"
127.0.0.1:6379> spop set01 # Remove an element at random
"7"

6.smove: move an element from one set (key1) to another (key2)

127.0.0.1:6379> sadd set01 1 2 3 4 5
(integer) 5
127.0.0.1:6379> sadd set02 x y z
(integer) 3
127.0.0.1:6379> smove set01 set02 3 # Move element 3 in set01 to set02
(integer) 1 # Move succeeded

7. Mathematical set

Intersection: sinter

Union: sunion

Difference sets: sdiff

127.0.0.1:6379> sadd set01 1 2 3 4 5
(integer) 5
127.0.0.1:6379> sadd set02 2 a 1 b 3
(integer) 5
127.0.0.1:6379> sinter set01 set02 # set01 and set02 co existing elements
1) "1"
2) "2"
3) "3"
127.0.0.1:6379> sunion set01 set02 # Merge all elements in set01 and set02 (exclude duplicate)
1) "5"
2) "4"
3) "3"
4) "2"
5) "b"
6) "a"
7) "1"
127.0.0.1:6379> sdiff set01 set02 # It exists in set01 and does not exist in set02
1) "4"
2) "5"
127.0.0.1:6379> sdiff set02 set01 # It exists in set02 and does not exist in set01
1) "b"
2) "a"
3.1.4 Hash

Similar to the Map in java

KV mode remains unchanged, but V is a key value pair

1.hset/hget/hmset/hmget/hgetall/hdel: Add / get / add more / get more / get all / delete attributes

127.0.0.1:6379> hset user id 1001 		# Add user with id=1001
(integer) 1
127.0.0.1:6379> hget user
(error) ERR wrong number of arguments for 'hget' command
127.0.0.1:6379> hget user id 			# To query user, you must specify a specific field
"1001"
127.0.0.1:6379> hmset student id 101 name tom age 22 # Add a pile of student attributes
OK
127.0.0.1:6379> hget student name 		# Get student name
"tom"
127.0.0.1:6379> hmget student name age 	# Get student name and age
1) "tom"
2) "22"
127.0.0.1:6379> hgetall student 		# Get all student information
1) "id"
2) "101"
3) "name"
4) "tom"
5) "age"
6) "22"
127.0.0.1:6379> hdel student age 		# Delete student age attribute
(integer) 1 # Delete succeeded
127.0.0.1:6379> hgetall student
1) "id"
2) "101"
3) "name"
4) "tom"

2.hlen: returns the number of attributes of an element

127.0.0.1:6379> hgetall student
1) "id"
2) "101"
3) "name"
4) "tom"
127.0.0.1:6379> hlen student
(integer) 2 	# The number of student attributes, id and name, are two attributes in total

3.hexists: judge whether an element has an attribute

127.0.0.1:6379> hexists student name 	# Does the name attribute exist in the student
(integer) 1 # existence
127.0.0.1:6379> hexists student age 	# Does the age attribute exist in the student
(integer) 0 # non-existent

4.hkeys/hvals: get all keys of the attribute / get all value s of the attribute

127.0.0.1:6379> hkeys student # Get all property names of student
1) "id"
2) "name"
127.0.0.1:6379> hvals student # Get the values (contents) of all student attributes
1) "101"
2) "tom"

5.hincrby/hincrbyfloat: Auto increment (integer) / Auto increment (decimal)

127.0.0.1:6379> hmset student id 101 name tom age 22
OK
127.0.0.1:6379> hincrby student age 2 		# Self increasing integer 2
(integer) 24
127.0.0.1:6379> hget student age
"24"
127.0.0.1:6379> hmset user id 1001 money 1000
OK
127.0.0.1:6379> hincrbyfloat user money 5.5 # Self incrementing decimal 5.5
"1005.5"
127.0.0.1:6379> hget user money
"1005.5"

6.hsetnx: when adding, first judge whether it exists

127.0.0.1:6379> hsetnx student age 18 	# When adding, judge whether the age exists
(integer) 0 # Adding failed because age already exists
127.0.0.1:6379> hsetnx student sex 男 	# When adding, judge whether sex exists (the value here is Chinese)
(integer) 1 # Adding succeeded because sex does not exist
127.0.0.1:6379> hgetall student
1) "id"
2) "101"
3) "name"
4) "tom"
5) "age"
6) "24"
7) "sex"
8) "\xe7\x94\xb7" 	# Chinese can be added, but it is displayed as garbled code (later resolution)
3.1.5 ordered set Zset

Real demand:

Charge 10 yuan to enjoy vip1;

Charge 20 yuan to enjoy vip2;

Charge 30 yuan to enjoy vip3;

And so on

1.zadd/zrange (withscores): add / query

127.0.0.1:6379> zadd zset01 10 vip1 20 vip2 30 vip3 40 vip4 50 vip5
(integer) 5
127.0.0.1:6379> zrange zset01 0 -1 				# Query data
1) "vip1"
2) "vip2"
3) "vip3"
4) "vip4"
5) "vip5"
127.0.0.1:6379> zrange zset01 0 -1 withscores 	# Query data with scores
1) "vip1"
2) "10"
3) "vip2"
4) "20"
5) "vip3"
6) "30"
7) "vip4"
8) "40"
9) "vip5"
10) "50"

2.zrangebyscore: query by score range

( before a boundary value means that value is excluded

limit offset count: skip the first offset results, then take count of them

127.0.0.1:6379> zrangebyscore zset01 20 40 				# 20 <= score <= 40
1) "vip2"
2) "vip3"
3) "vip4"
127.0.0.1:6379> zrangebyscore zset01 20 (40 			# 20 <= score < 40
1) "vip2"
2) "vip3"
127.0.0.1:6379> zrangebyscore zset01 (20 (40 			# 20 < score < 40
1) "vip3"
127.0.0.1:6379> zrangebyscore zset01 10 40 limit 2 2 	# 10 <= score <= 40 returns four elements; skip the first two and take two
1) "vip3"
2) "vip4"
127.0.0.1:6379> zrangebyscore zset01 10 40 limit 2 1 	# 10 <= score <= 40 returns four elements; skip the first two and take one
1) "vip3"

3.zrem: delete element

127.0.0.1:6379> zrem zset01 vip5 # Remove vip5
(integer) 1

4.zcard/zcount/zrank/zscore: set size / number of elements in a score range / index of an element / score of an element

127.0.0.1:6379> zcard zset01 			# Number of elements in the collection
(integer) 4
127.0.0.1:6379> zcount zset01 20 30 	# The score is between 20 and 30. There are several elements in total
(integer) 2
127.0.0.1:6379> zrank zset01 vip3 		# Subscript of vip3 in the set (top-down)
(integer) 2
127.0.0.1:6379> zscore zset01 vip2 		# Get the corresponding score through the element
"20"

5.zrevrank: find subscripts in reverse order (from bottom to top)

127.0.0.1:6379> zrevrank zset01 vip3
(integer) 1

6.zrevrange: reverse order query

127.0.0.1:6379> zrange zset01 0 -1 		# Sequential query
1) "vip1"
2) "vip2"
3) "vip3"
4) "vip4"
127.0.0.1:6379> zrevrange zset01 0 -1 	# Reverse order query
1) "vip4"
2) "vip3"
3) "vip2"
4) "vip1"

7.zrevrangebyscore: reverse order range search

127.0.0.1:6379> zrevrangebyscore zset01 30 20 # Query scores between 30 and 20 in reverse order (Note: write the large value first and then the small value)
1) "vip3"
2) "vip2"
127.0.0.1:6379> zrevrangebyscore zset01 20 30 # If the small value comes first, the result is null
(empty list or set)

3.2 persistence

3.2.1 RDB

Redis DataBase

Write the snapshot of the data set in memory to the disk within the specified time interval;

It is saved in / usr/local/bin by default, and the file name is dump.rdb;

3.2.1.1 automatic backup

Redis is a memory database. Every time we run out of redis and turn off linux, the memory will be released and the data in redis will disappear

Why did yesterday's data still exist when we started redis again?

This is precisely because redis will automatically back up the data to a file: / usr/local/bin/dump.rdb each time it shuts down

Next, let's have a comprehensive understanding of the automatic backup mechanism

1. The default automatic backup policy is not conducive to our test, so modify the automatic backup policy in redis.conf file

vim redis.conf
/SNAP # search

save 900 1 		# back up automatically if at least 1 change is made within 900 seconds
save 120 10 	# back up automatically if at least 10 changes are made within 120 seconds
save 60 10000 	# back up automatically if at least 10000 changes are made within 60 seconds

Of course, if you only use Redis's caching function and do not need persistence, you can comment out all save lines to disable the saving function. You can directly use an empty string to deactivate: save ""

2. Use shutdown to simulate shutdown. Before and after shutdown, compare the update time of dump.rdb file

Note: when we use the shutdown command, redis will automatically back up the database, so the creation time of dump.rdb file is updated

3. Start redis, save 10 pieces of data within 120 seconds, and then check the update time of dump.rdb file (open two terminal windows for easy viewing)

4. The action of saving 10 pieces of data within 120 seconds triggers the backup instruction. At present, 10 pieces of data are saved in dump.rdb file. Copy dump.rdb to dump10.rdb. At this time, 10 pieces of data are saved in both files

5. Now that the data has been backed up, we wantonly delete all the data, flush all, and shut down again

6. Start redis again and find that the data has really disappeared and the contents of dump.rdb file have not been restored to redis as we thought. Why?

Because when we save more than 10 pieces of data, the data is backed up;

Then delete the database and back up the data in the file;

The problem is shutdown: as soon as that command runs, a backup is taken immediately, so the now-empty database is written to the backup file and the backup containing 10 records is overwritten. That is why automatic recovery failed.

How to solve this problem? To back up the backup file again

7. Delete the dump.rdb file and rename dump10.rdb to dump.rdb (see the shell sketch after these steps)

8. Start the redis service, log in to redis, 10 pieces of data, all restored!
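A small shell sketch of the file operations from steps 4 and 7, assuming the default backup directory /usr/local/bin used earlier:

cd /usr/local/bin
cp dump.rdb dump10.rdb       # step 4: keep a copy of the backup that still contains the 10 records
# ... flushall, shutdown, restart: the data is gone and dump.rdb now holds the empty database ...
rm -f dump.rdb               # step 7: discard the overwritten (empty) backup
mv dump10.rdb dump.rdb       # put the good copy back, then start redis again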

3.2.1.2 manual backup

Before automatic backup, a lot of data must be changed. For example, we changed more than ten pieces of data before automatic backup;

Now, I only save one piece of data and want to back up immediately. What should I do?

Every time the operation is completed, execute the command save to immediately back up
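A quick redis-cli illustration of a manual backup; save and bgsave are standard commands (bgsave forks a child process and snapshots in the background), and the lastsave timestamp shown is made up:

127.0.0.1:6379> set k1 v1
OK
127.0.0.1:6379> save         # blocking snapshot: dump.rdb is rewritten immediately
OK
127.0.0.1:6379> bgsave       # alternative: snapshot in a background child process
Background saving started
127.0.0.1:6379> lastsave     # unix timestamp of the last successful save
(integer) 1633057451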

3.2.1.3 RDB related configuration

1. stop-writes-on-bgsave-error: think of an inlet pipe and an outlet pipe; if the outlet (the background save) fails, should the inlet (writes) be shut off? (A consolidated excerpt of these options follows after this list.)

yes: when the background backup reports an error, the foreground stops accepting writes

no: keep accepting writes no matter what

2. rdbcompression: whether snapshots stored on disk are compressed with the LZF algorithm. It is usually enabled; the CPU cost is tiny and not worth worrying about.

yes: start

no: do not start (you can turn it off if you do not want to consume CPU resources)

3.rdbchecksum: whether to start the CRC64 algorithm for data verification after storing the snapshot;

After startup, the CPU consumption will be increased by about 10%;

If you want to get the maximum performance improvement, you can choose to close;

4.dbfilename: name of snapshot backup file

5.dir: the directory where the snapshot backup file is saved. The default is the current directory
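Putting the five options above together, a redis.conf excerpt for reference (the values shown are the defaults mentioned in this section; adjust as needed):

save 900 1
save 300 10
save 60 10000
stop-writes-on-bgsave-error yes   # stop foreground writes if the background save fails
rdbcompression yes                # LZF-compress the snapshot
rdbchecksum yes                   # CRC64 check (costs roughly 10% CPU)
dbfilename dump.rdb               # snapshot file name
dir ./                            # directory where the snapshot is written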

Advantages and disadvantages

Pros: suitable for large-scale data recovery when the requirements for data integrity and consistency are not strict;

Cons: backups happen at fixed intervals; if the server goes down unexpectedly, you lose every change made since the last snapshot

3.2.2 AOF

Append Only File records each write operation in the form of log;

Record all the write instructions executed by redis (the read operation is not recorded);

The file is append-only; existing content is never rewritten in place;

redis will read the file at the beginning of startup and execute it from beginning to end to rebuild the data;

3.2.2.1 start AOF

1. In order to avoid mistakes, it is best to back up the redis.conf general configuration file, and then modify the contents as follows:

appendonly yes
appendfilename appendonly.aof

2. Restart redis to start with a new configuration file

redis-server /opt/redis-5.0.4/redis.conf

3. Connect redis, add data, delete database and exit

4. View one more aof file in the current folder, and see the contents in the file. All saved are write operations

The last write recorded in the file is the flush that emptied the database; it must be deleted, otherwise the data cannot be recovered (a sketch of this tail follows after these steps)

Edit the file with vim and save it with :wq! (forced write)

5. Only need to reconnect, and the data recovery is successful
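For reference, the AOF stores commands in the Redis protocol (RESP) format. A rough sketch of the tail of appendonly.aof after step 3, assuming flushall was the command that emptied the database; it is this final block that must be deleted before restarting:

*3
$3
set
$2
k1
$2
v1
*1
$8
flushall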

3.2.2.2 coexistence? Who comes first?

Let's check the redis.conf file. AOF and RDB backup strategies can be started at the same time. How will the system choose?

1. Try it, edit appendonly.aof, mess with code, save and exit

2. Redis fails to start, which shows that AOF is loaded first to recover the data, because AOF is more complete than RDB!

3. Fix the AOF file and kill all the codes that do not comply with the redis syntax specification

redis-check-aof --fix appendonly.aof
3.2.2.3 AOF related configuration

1.appendonly: enable aof mode

2.appendfilename: the file name of aof. You'd better not change it!

3.appendfsync: write back strategy

always: every time the data changes, it will be recorded to the disk immediately. The performance is poor, but the data integrity is good

everysec: the default setting, asynchronous operation, records per second. If the machine goes down within one second, data will be lost

no: never fsync; leave it to the operating system to decide when to flush

4. no-appendfsync-on-rewrite: whether to skip appendfsync while a rewrite is in progress; keep the default no to ensure data safety.

AOF works by appending to a file, so the file keeps growing. To solve this, a rewrite mechanism was added: Redis records the size of the AOF file at the last rewrite, and when the file reaches the configured threshold, Redis rewrites it, compressing the content and keeping only the minimal set of commands needed to rebuild the data

5. auto-aof-rewrite-percentage: if the AOF file has grown by more than 100% (i.e. doubled) since the last rewrite, trigger a rewrite and compress it

6. auto-aof-rewrite-min-size: only rewrite once the AOF file exceeds 64 MB (a consolidated excerpt of these options follows below)
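The matching redis.conf excerpt, with the defaults discussed in items 1-6 above:

appendonly yes                      # enable AOF
appendfilename "appendonly.aof"     # AOF file name (better left unchanged)
appendfsync everysec                # always / everysec / no
no-appendfsync-on-rewrite no        # keep fsyncing during rewrites (safer)
auto-aof-rewrite-percentage 100     # rewrite when the file has doubled in size...
auto-aof-rewrite-min-size 64mb      # ...and is at least 64 MB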

3.2.3 summary (how to choose?)

RDB: it is only used for backup. It is recommended to back up once every 15 minutes

AOF:

In the worst case, only less than 2 seconds of data is lost. The data integrity is relatively high, but the cost is too high, which will bring continuous IO

The requirements for the size of hard disk are also high. The default 64mb is too small, and the enterprise level is at least 5G or more;

The master/slave setup covered later is what Sina Weibo chose!!

3.3 transactions

Multiple commands can be executed at once as a command group. Within a transaction, all commands are serialized (queued) and nothing can jump the queue;

In a queue, a series of commands are executed at one time, sequentially and exclusively

Three characteristics

Isolation: all commands will be executed in order. During the execution of transactions, they will not be interrupted by commands sent by other clients

There is no isolation level: the commands in the queue will not be actually executed until they are submitted. There is no headache that "the query in the transaction should see the updates in the transaction, but the query outside the transaction cannot"

Atomicity is not guaranteed: if a command fails, other commands may be executed successfully without rollback

Three steps

multi: open the transaction

Queue the commands (each reply is QUEUED)

exec: execute them all

Relational database transactions

multi: can be understood as begin in relational transactions

exec: it can be understood as a commit in a relational transaction

discard: it can be understood as a rollback in a relational transaction

3.3.1 birth together

Start the transaction, join the queue, execute together, and succeed

127.0.0.1:6379> multi 		# Open transaction
OK
127.0.0.1:6379> set k1 v1
QUEUED 				# Join queue
127.0.0.1:6379> set k2 v2
QUEUED 				# Join queue
127.0.0.1:6379> get k2
QUEUED 				# Join queue
127.0.0.1:6379> set k3 v3
QUEUED 				# Join queue
127.0.0.1:6379> exec 		# Execution, success together!
1) OK
2) OK
3) "v2"
4) OK
3.3.2 death together

Discard the previous operation and return to the original value

127.0.0.1:6379> multi 		# Open transaction
OK
127.0.0.1:6379> set k1 v1111
QUEUED
127.0.0.1:6379> set k2 v2222
QUEUED
127.0.0.1:6379> discard 	# abort operation
OK
127.0.0.1:6379> get k1
"v1" 						# Or the original value
3.3.3 a grain of mouse excrement spoils a pot of soup

An error is reported, all are cancelled and restored to the original value

127.0.0.1:6379> multi
OK
127.0.0.1:6379> set k4 v4
QUEUED
127.0.0.1:6379> setlalala 	# An error is reported
(error) ERR unknown command `setlalala`, with args beginning with:
127.0.0.1:6379> set k5 v5
QUEUED
127.0.0.1:6379> exec 		# Cancel all commands in the queue
(error) EXECABORT Transaction discarded because of previous errors.
127.0.0.1:6379> keys * 		# Or the original value
1) "k2"
2) "k3"
3) "k1"
3.3.4 every grievance has its perpetrator, every debt its debtor

Accountability: the command that caused the error is the only one that fails; the others still run

127.0.0.1:6379> multi
OK
127.0.0.1:6379> incr k1 	# v1 cannot be incremented, but queuing the command raises no error (like Java code that compiles but fails at runtime)
QUEUED
127.0.0.1:6379> set k4 v4
QUEUED
127.0.0.1:6379> set k5 v5
QUEUED
127.0.0.1:6379> exec
1) (error) ERR value is not an integer or out of range 	# An error is reported when it is actually executed
2) OK # success
3) OK # success
127.0.0.1:6379> keys *
1) "k5"
2) "k1"
3) "k3"
4) "k2"
5) "k4"
3.3.5 watch monitoring

Test: simulate revenue and expenditure

Under normal conditions:

127.0.0.1:6379> set in 100 		# Income 100 yuan
OK
127.0.0.1:6379> set out 0 		# Expenditure 0 yuan
OK
127.0.0.1:6379> multi
OK
127.0.0.1:6379> decrby in 20 	# Revenue - 20
QUEUED
127.0.0.1:6379> incrby out 20 	# Expenditure + 20
QUEUED
127.0.0.1:6379> exec
1) (integer) 80
2) (integer) 20 # As a result, no problem!

Under special circumstances:

127.0.0.1:6379> watch in 	# Monitor revenue in
OK
127.0.0.1:6379> multi
OK
127.0.0.1:6379> decrby in 20
QUEUED
127.0.0.1:6379> incrby out 20
QUEUED
127.0.0.1:6379> exec
(nil) # Before exec, I opened another window (thread) and modified the monitored in, so this transaction will be interrupted (invalidated), similar to "optimistic lock"

unwatch: cancels the watch on all keys

Once the exec command is executed, all the previous monitoring will automatically become invalid!

3.4 publish and subscribe to redis

A mode of message communication between processes: the sender (pub) sends messages and the subscriber (sub) receives messages. For example: wechat subscription number

Subscribe to one or more channels

127.0.0.1:6379> subscribe cctv1 cctv5 cctv6 	# 1. Subscribe to three channels
Reading messages... (press Ctrl-C to quit)
1) "subscribe"
2) "cctv1"
3) (integer) 1
1) "subscribe"
2) "cctv5"
3) (integer) 2
1) "subscribe"
2) "cctv6"
3) (integer) 3
1) "message"		# 3. CCTV 5 receives the pushed information
2) "cctv5"
3) "NBA"
127.0.0.1:6379> publish cctv5 NBA 	# 2. Send message to CCTV 5
(integer) 1

3.5 master slave replication

This is the strategy of redis cluster

Configure on the slave (library), not on the master: the little brother chooses who his big brother is, but the big brother has no say in who follows him

Read-write separation: the master writes, the slaves read

3.5.1 one master, two slaves

1. Prepare three servers and modify redis.conf

bind 0.0.0.0

2. Start the three redis instances and check each machine's role: they are all masters

info replication

3. Test start

3.1) first, empty all three machines and add value to the first one

mset k1 v1 k2 v2

3.2) copy the other two machines (find the eldest brother)

slaveof 192.168.44.129 6379

3.3) add value for the first set

set k3 v3

Question 1: can the slave read k1 and k2, which were written before slaveof was run?

Yes. Once it follows the master, the earlier data is synchronized immediately

Question 2: can the slave read k3, which was written after slaveof?

Yes. New writes on the master are synchronized to the slave immediately

Question 3: what happens if we add k4 on every machine at the same time?

Adding succeeds on the master (129), but fails on the slaves (130 and 131). A slave only reads data and may not write: this is "read-write separation"

Question 4: the master shuts down; what happens to the slaves?

130 and 131 are still slaves and report that their master is offline

Question 5: the master comes back up; what then?

130 and 131 are still slaves and report that their master is online

Question 6: a slave dies and then comes back; what about the master?

Does the slave's role change after it comes back? The master is unchanged, it simply has one slave fewer

However, the restarted machine comes back as a master and is no longer part of the original cluster; it has to run slaveof again to rejoin

3.5.2 passing it on (chained replication)

In theory one master can have many slaves, but then the master gets very tired

Just like transitive inheritance in Java, replication can be chained to take the burden off the master

This forms a three-generation chain of grandfather, father and son:

127.0.0.1:6379> slaveof 192.168.44.129 6379 # 130 follow 129
OK
127.0.0.1:6379> slaveof 192.168.44.130 6379 # 131 follows 130
OK
3.5.3 seizing power (promoting a new master)

One master, two slaves: when the master dies, a new master must be chosen from the two slaves

The country cannot live without a king, and the army cannot live without a commander

Manually select the boss

Simulation: 1 is the master, 2 and 3 are slaves. When 1 dies, 2 seizes power as the new master and 3 follows 2

slaveof no one 				# No one can make me surrender, then I'm the boss
slaveof 192.168.44.130 6379 # 3 follow No. 2

Question: what happens when machine 1 comes back?

Machines 2 and 3 have formed a new cluster that has nothing to do with 1, so 1 is now a commander with no troops

3.5.4 replication principle

[Figure: Redis master-slave replication principle (image missing)]

After completing the above steps, all operations of data initialization from the server are completed. At this time, the server can receive read requests from users

Full copy: in the slave initialization phase, the slave needs to copy a copy of all data on the Master. After the slave receives the data file, save it and load it into memory; (step 1234)

Incremental replication: after Slave initialization, the write operations of the master server are synchronized to the Slave server when it starts normal operation; (step 56)

However, as long as the master is reconnected, one-time (full replication) synchronization will be performed automatically;

Redis master-slave synchronization policy:

When the master and slave are just connected, full synchronization is performed;

After full synchronization, perform incremental synchronization. Of course, the slave can initiate full synchronization at any time if necessary.

The redis strategy is that, in any case, incremental synchronization will be attempted first. If unsuccessful, full synchronization will be required from the slave.

3.5.5 sentinel mode

Automatic version of power usurpation!

A sentinel keeps patrolling and suddenly notices that the boss has died! The little brothers then vote automatically and elect a new boss from among themselves

Sentinel is Redis's high availability solution:

The Sentinel system composed of one or more Sentinel instances can monitor any number of master servers and all slave servers. When the monitored master server enters the offline state, it will automatically upgrade a slave server under the offline master server to a new master server, and then the new master server will continue to process command requests instead of the offline master server

Simulation test

  1. Machine 1 is the master, 2 and 3 are slaves
  2. Create a configuration file named sentinel.conf on each server (the name must be exact) and edit it (a fuller sketch follows after this list)
# sentinel monitor monitored host name (custom) ip port votes
sentinel monitor redis129 192.168.44.129 6379 1
  3. Start order: master redis, then slave redis, then sentinel 1/2/3
redis-sentinel sentinel.conf
  4. Kill boss 1; in the background a vote is automatically held to elect a new boss
127.0.0.1:6379> shutdown
not connected> exit
  5. Check who ended up in charge

    3 becomes the new boss, 2 is still a little brother

  6. What if the former boss comes back?

    Machine 1 comes back up as a master at first, seemingly on an equal footing with machine 3

    A few seconds later the sentinel detects that machine 1 is back. It cannot just play on its own and must rejoin the group, but a new boss already exists, so it can only rejoin as a little brother!
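A slightly fuller sentinel.conf sketch for the list above; the sentinel monitor line comes from step 2, while the other two directives are standard sentinel options shown with assumed values:

# monitored-master-name ip port quorum (how many sentinels must agree the master is down)
sentinel monitor redis129 192.168.44.129 6379 1
sentinel down-after-milliseconds redis129 5000   # consider the master down after 5 seconds of silence
sentinel failover-timeout redis129 60000         # allow a failover at most 60 seconds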

3.5.6 disadvantages

Because all write operations are completed on the master;

Then it is synchronized to the slave, so the communication between the two machines will be delayed;

When the system is very busy, the delay problem will increase;

As the number of slave machines increases, the problem will also increase

3.6 general configuration redis.conf details

# Redis profile example
# Note: when the memory size needs to be configured, you may need to specify common formats such as 1k,5GB,4M, etc
#
# 1k => 1000 bytes
# 1kb => 1024 bytes
# 1m => 1000000 bytes
# 1mb => 1024*1024 bytes
# 1g => 1000000000 bytes
# 1gb => 1024*1024*1024 bytes
#
# Units are case insensitive: 1GB, 1Gb and 1gB all mean the same thing.

################################## INCLUDES Include file related###################################

# You can include one or more other configuration files here. This is useful if you have a standard configuration template
# that applies to all Redis servers but each server also needs some customized settings. Included configuration files can
# themselves include other configuration files,
# so this feature must be used carefully.
#
# Note that the "include" option cannot be overridden by the "CONFIG REWRITE" command of admin or Redis sentinel.
# Because Redis always uses the last resolved configuration line as the value of the configuration instruction, you'd better configure includes at the beginning of this file
# Avoid it overriding the configuration at run time.
# On the contrary, if you want to overwrite the original configuration with the configuration of includes, you'd better use include at the end of the file
#
# include /path/to/local.conf
# include /path/to/other.conf

################################ GENERAL Comprehensive configuration#####################################

# By default, Redis does not run as a daemon. Configure 'yes' if necessary
# Note that after being configured as a daemon, Redis will write the process number to the file / var/run/redis.pid
daemonize no

# When running as a daemon, Redis will write the process ID to / var/run/redis.pid by default. You can modify the path here.
pidfile /var/run/redis.pid

# The specific port that accepts the connection. The default is 6379
# If the port is set to 0, Redis will not listen to TCP sockets.
port 6379

# TCP listen() backlog.
# The size of the SYN queue when the server establishes a tcp connection with the client
# In a high concurrency environment, you need a high backlog value to avoid slow client connection problems. Note that the Linux kernel silently decreases this value
# To the value of / proc/sys/net/core/somaxconn, so confirm to increase somaxconn and tcp_max_syn_backlog
# Two values to achieve the desired effect.
tcp-backlog 511

# By default, Redis monitors the connections of all available network interfaces on the server. It can be implemented with "bind" configuration instruction and one or more ip addresses
# Listen to one or more network interfaces
#
# Example:
#
# bind 192.168.1.100 10.0.0.1
# bind 127.0.0.1

# Specifies the path used to listen to Unix sockets. There is no default value, so Redis will not listen to Unix sockets if it is not specified
#
# unixsocket /tmp/redis.sock
# unixsocketperm 755

# How many seconds after a client is idle, close the connection. (0 means disabled and never closed)
timeout 0

# TCP keepalive.
#
# If non-zero, the SO_KEEPALIVE option is used to send ACKs to idle clients, which is useful for two reasons:
#
# 1) Ability to detect unresponsive peers
# 2) Let the network device in the middle of the connection know that the connection is still alive
#
# On Linux, the specified value (in seconds) is the time interval for sending ACK.
# Note: it takes twice the time value to close this connection.
# On other kernels, this interval is determined by the kernel configuration
#
# A reasonable value for this option is 60 seconds
tcp-keepalive 0

# Specify server debug level
# Possible values:
# debug (a lot of information, useful for development / testing)
# verbose (a lot of useful information, but not as much as the debug level)
# notice (appropriate amount of information, basically what you need in your production environment)
# warning (only very important / serious information will be recorded)
loglevel notice

# Indicates the log file name. You can also use "stdout" to force Redis to write log information to standard output.
# Note: if Redis runs as a daemon and the log is set to be displayed in standard output, the log will be sent to / dev/null
logfile ""
# To use the system logger, just set "syslog enabled" to "yes".
# Then set other syslog parameters as needed.
# syslog-enabled no
# Indicate syslog identity
# syslog-ident redis
# Indicates the device of the syslog. Must be one of user or LOCAL0 ~ LOCAL7.
# syslog-facility local0
# Set the number of databases. The default database is DB 0,
# Each connection can use a different database with SELECT <dbid>, where 0 <= dbid <= 'databases' - 1.
databases 16

############################# SNAPSHOTTING Snapshot, persistent operation configuration #############################
#
# Save the database to disk:
#
# save <seconds> <changes>
#
# The database will be written to disk after specifying the number of seconds and the number of data changes.
#
# The following example will write data to disk:
# After 900 seconds (15 minutes) and at least one change
# After 300 seconds (5 minutes) and at least 10 changes
# After 60 seconds and at least 10000 changes
#
# Note: if you don't want to write to the disk, just comment out all the "save" settings.
#
# All previously configured save instructions can also be removed by adding a save instruction with an empty string parameter
# Like the following example:
# save ""

save 900 1
save 300 10
save 60 10000

# By default, if the RDB snapshot (at least one save instruction) is enabled and the latest background save fails, Redis will stop accepting write operations
# This will let users know that the data is not correctly persisted to the hard disk, otherwise no one may notice and cause some disasters.
#
# If the background saving process can restart, Redis will automatically allow write operations
#
# However, if you already have proper monitoring of the Redis server and its persistence, you may want to turn this off,
# so that Redis keeps working as usual even when there are disk or permission problems.
stop-writes-on-bgsave-error yes

# Is LZF used to compress string objects when exporting to an. rdb database?
# The default setting is "yes" because it is good in almost any case.
# If you want to save CPU, you can set this to "no", but if you have compressible key and value,
# Then the data file will be larger.
rdbcompression yes

# Because version 5 RDB has a checksum of CRC64 algorithm at the end of the file. This will make the file format more reliable, but in
# There is a performance cost (about 10%) when producing and loading RDB files, so you can turn it off to get the best performance.
#
# The generated RDB file with verification turned off has a checksum of 0, which will tell the loading code to skip the check
rdbchecksum yes

# The file name of the persistent database
dbfilename dump.rdb

# working directory
#
# The database will be written to this directory, and the file name is the value of "dbfilename" above.
#
# The accumulation file is also put here.
#
# Note that what you specify here must be a directory, not a file name.
dir ./

############################### REPLICATION Configuration of master-slave replication ###############################
# Master slave synchronization. The backup of Redis instances is realized through the slaveof instruction.
# Note that data is copied locally and remotely. In other words, there can be different database files, different IP bindings, and monitoring
# Different ports.
#
# slaveof <masterip> <masterport>

# If the master is password protected (configured through the "requirepass" option), the slave must be protected before starting synchronization
# Authenticate, or its synchronization request will be rejected.
#
# masterauth <master-password>

# When a slave loses its connection to the master, or while synchronization is still in progress, the slave can behave in two ways:
#
# 1) If slave-serve-stale-data is set to "yes" (the default), the slave keeps responding to client requests,
#    possibly with out-of-date data, or with empty values if this is the first synchronization.
# 2) If slave-serve-stale-data is set to "no", the slave replies with the error "SYNC with master in progress"
#    to all requests except the INFO and SLAVEOF commands.
#
slave-serve-stale-data yes

# You can configure whether a slave instance accepts write operations. Writable slaves may be useful for storing temporary data
# (since data written to a slave is removed after resynchronization with the master), but clients writing to a slave
# because of a misconfiguration can cause problems.
#
# From redis 2.6, all slave are read-only by default
#
# Note: a read-only slave is not designed to be exposed to untrusted clients on the Internet. It is just a protective layer against instance misuse.
# A read-only slave still accepts all administrative commands, such as CONFIG, DEBUG, etc. To restrict this, you can use 'rename-command'
# to hide all administrative and dangerous commands and improve the security of read-only slaves.
slave-read-only yes

# The slave sends PINGs to the master at the specified interval.
# The interval can be set with repl-ping-slave-period.
# The default is 10 seconds.
#
# repl-ping-slave-period 10

# The following options set the timeout for synchronization
#
# 1) A large amount of data is transmitted between slave and master SYNC, resulting in timeout
# 2) From the slave perspective, the master times out, including data and ping
# 3) From the master point of view, the slave timeout occurs when the master sends REPLCONF ACK pings
#
# Make sure this value is greater than repl-ping-slave-period, otherwise a timeout will be detected whenever traffic between master and slave is low.
#
# repl-timeout 60

# Disable TCP_NODELAY on the slave socket after SYNC?
#
# If you select "yes", Redis will use less TCP packets and bandwidth to send data to slaves. However, this will cause the data to be transferred to the slave
# There is a delay, and the default configuration of the Linux kernel will reach 40 milliseconds
#
# If you select "no", the delay of data transmission to save will be reduced, but more bandwidth will be used
#
# By default, we will optimize for low latency, but set this option to "yes" in case of high traffic or too many hops between master and slave
# It's a good choice.
repl-disable-tcp-nodelay no

# Set the replication backlog size. The backlog is a buffer that accumulates slave data while slaves are disconnected for some time,
# so that when a slave reconnects, a full resynchronization is not needed; a partial (incremental) resynchronization is enough,
# transferring only the portion of data the slave missed while disconnected.
#
# The larger the backlog, the longer a slave can stay disconnected and still be able to perform a partial resynchronization.
#
# The backlog is only allocated once at least one slave has connected.
#
# repl-backlog-size 1mb

# When the master has no connected slaves for some time, the backlog is freed. The following option configures how many seconds
# must elapse, counted from the moment the last slave disconnected, before the backlog buffer is released.
#
# 0 means never release the backlog
#
# repl-backlog-ttl 3600

# The slave priority is an integer shown in the INFO output of Redis. If the master is no longer working properly,
# Sentinel uses it to select a slave to promote to master.
#
# A slave with a lower priority number is promoted first: for example, with three slaves of priority 10, 100 and 25,
# Sentinel will pick the one with the lowest priority, 10.
#
# 0 is a special priority meaning the slave can never become a master, so a slave with priority 0 will never be
# selected by Sentinel for promotion.
#
# The default priority is 100
slave-priority 100

# The master can stop accepting writes if there are fewer than N connected slaves with a lag of no more than M seconds.
#
# The N slaves need to be in the "online" state.
#
# The lag, in seconds, must be less than or equal to the configured value, and is counted from the last ping received from the
# slave (usually sent once per second).
#
# This option does not GUARANTEE that N replicas will accept the write, but
# will limit the window of exposure for lost writes in case not enough slaves
# are available, to the specified number of seconds.
#
# For example, to require at least 3 slaves with a lag of no more than 10 seconds, use:
#
# min-slaves-to-write 3
# min-slaves-max-lag 10
#
# Setting either to 0 disables this feature.
#
# The default min-slaves-to-write value is 0 (feature disabled) and the default min-slaves-max-lag value is 10.

################################# SECURITY Safety related configuration ##################################

# Require clients to authenticate with a password (AUTH) before processing any other command.
# This is useful in environments where the redis server can be reached by clients you do not trust.
#

# For backward compatibility this should be left commented out, and most people do not need authentication
# (for example, they run Redis on their own servers).
#
# Warning: since Redis is very fast, an outside attacker can try up to 150k passwords per second,
# so you need a very strong password, otherwise it will be too easy to crack.
#
# requirepass foobared
# Command rename
#
# In a shared environment, you can change the name for dangerous commands. For example, you can change CONFIG to another name that is not easy to guess,
# In this way, internal tools can still be used, but ordinary clients will not.
#
# For example:
#
# rename-command CONFIG b840fc02d524045429941cc15f59e41cb7be6c52
#
# You can also disable a command completely by renaming it to an empty string
#
# rename-command CONFIG ""
#
# Please note: renaming commands that are written to the AOF file or replicated to slaves may cause problems.

################################## LIMITS Range configuration ###################################

# Set the maximum number of simultaneously connected clients. The default limit is 10000 clients. However, if the Redis server
# cannot configure the process file-descriptor limit to meet the specified value, the maximum number of clients is set to the
# current file limit minus 32 (because some file descriptors are reserved for the Redis server's internal use).
#
# Once this limit is reached, Redis will close all new connections and send the error 'max number of clients reached'
#
# maxclients 10000
# Do not use more memory than the configured limit. Once memory usage reaches the limit, Redis removes keys according to the
# selected eviction policy (see maxmemory-policy).
#
# If Redis cannot remove keys according to the policy, or if the policy is set to "noeviction", Redis starts replying with an
# error to commands that would use more memory, such as SET, LPUSH, etc., while continuing to serve read-only commands such as GET.
#
# This option is usually useful when Redis is used as an LRU cache, or when a hard memory limit must be enforced for the
# instance (using the "noeviction" policy).
#
# Warning: when one or more slaves are connected to an instance at its maxmemory limit, the output buffers needed to feed the
# slaves are not counted in the used memory. That way an eviction of keys caused by network problems / resync events does not
# fill the slave output buffers with DEL commands for the evicted keys, which would in turn trigger the eviction of more keys,
# and so on until the database is completely emptied.
#
# In short... if you have slaves attached, it is suggested to set a slightly lower maxmemory limit, so there is some free memory
# on the system for the slave output buffers (this is not needed if the policy is "noeviction").
#
# maxmemory <bytes>
# Maximum memory policy: how Redis chooses which keys to remove when the memory limit is reached. You can choose among the following behaviors:
#
# volatile-lru    -> remove keys that have an expire set, using an LRU algorithm.
# allkeys-lru     -> remove any key, using an LRU algorithm.
# volatile-random -> randomly remove keys that have an expire set.
# allkeys-random  -> randomly remove any key.
# volatile-ttl    -> remove the keys with the nearest expire time (smallest TTL).
# noeviction      -> don't remove anything; return an error on write operations.
#
# Note: for all policies, if Redis cannot find a suitable key to delete, an error will be returned during the write operation.
#
# Commands involved so far: set setnx setex append
# incr decr rpush lpush rpushx lpushx linsert lset rpoplpush sadd
# sinter sinterstore sunion sunionstore sdiff sdiffstore zadd zincrby
# zunionstore zinterstore hset hsetnx hmset hincrby incrby decrby
# getset mset msetnx exec sort
#
# The default values are as follows:
#
# maxmemory-policy volatile-lru
# The LRU and minimal-TTL algorithms are not exact, but close approximations (to save memory), and work on a sample of keys.
# For example, by default Redis checks three keys and picks the least recently used one. You can change the sample size with the directive below.
#
# maxmemory-samples 3

############################## APPEND ONLY MODE AOF Mode configuration ###############################

# By default, Redis dumps data to disk asynchronously. This mode is good enough for many applications, but a problem with the
# Redis process or a power outage may result in the loss of the writes of a period of time (depending on the configured save points).
#
# AOF is an alternative persistence mode that provides much better durability. With the default fsync policy (see below),
# Redis loses at most one second of writes in an event such as a server power outage, or a single write if the Redis process
# itself crashes while the operating system is still running correctly.
#
# AOF and RDB persistence can be started simultaneously without problems.
# If AOF is enabled, Redis will load AOF files at startup, which can better ensure the reliability of data.
#
# Please check http://redis.io/topics/persistence  For more information

appendonly no

# Append-only file name (default: "appendonly.aof")

appendfilename "appendonly.aof"

# The fsync() system call tells the operating system to actually write data to disk instead of waiting for more data in the output buffer.
# Some operating systems really flush the data to disk immediately; others will try to do so as soon as possible.
#
# Redis supports three different modes:
#
# no: don't fsync, just let the operating system flush the data when it wants. Faster.
# always: fsync after every write to the append-only file. Slow, but safest.
# everysec: fsync once per second. A compromise.
#
# The default "everysec" usually achieves a good balance between speed and data safety. It is up to you: you can relax this to
# "no" for better performance (and if you can tolerate some data loss, consider simply using the default snapshot persistence
# instead), or on the contrary use "always", which is slower than everysec but even safer.
#
# Please check the article below for more details
# http://antirez.com/post/redis-persistence-demystified.html
#
# If you are not sure, use "everysec"
# appendfsync always

appendfsync everysec

# appendfsync no
# When the AOF fsync policy is set to "always" or "everysec", and a background saving process (a background save or an AOF
# rewrite) is performing a lot of disk I/O, under some Linux configurations Redis may block for too long on the fsync() call.
# Note that there is no fix for this yet, as even an fsync() in a different thread will block our synchronous write(2) call.
#
# To mitigate this problem, the following option prevents fsync() from being called in the main process while a BGSAVE or
# BGREWRITEAOF is in progress.
#
# This means that while a child process is saving, the durability of Redis is the same as "appendfsync no".
# In practical terms this means that in the worst case (with default Linux settings) up to 30 seconds of log data may be lost.
#
# If you have latency problems, set this to "yes"; otherwise leave it at "no", which is the safest choice from the point of view of durability.

no-appendfsync-on-rewrite no

# Auto overwrite AOF file
# If the AOF log file increases to the specified percentage, Redis can automatically rewrite the AOF log file through BGREWRITEAOF.
#
# How it works: Redis remembers the size of the AOF file after the last rewrite (if no rewrite has happened since the restart, the AOF size at startup is used).
#
# This benchmark size is compared with the current size. If the current size exceeds the specified scale, the overwrite operation is triggered. You also need to specify to be overridden
# The minimum size of the log, so as to avoid rewriting when the specified percentage is reached but the size is still very small.
#
# Specifying a percentage of 0 disables the AOF auto override feature.

auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb

################################ LUA SCRIPTING ###############################
# Maximum running time of a Lua script, in milliseconds. When a script exceeds this limit, Redis logs the event and starts returning errors to queries; at that point only SCRIPT KILL and SHUTDOWN NOSAVE are accepted. SCRIPT KILL can only stop a script that has not yet issued any write command; if the script has already written data, only SHUTDOWN NOSAVE can stop it.
lua-time-limit 5000

################################## SLOW LOG ###################################
# The slow log records the execution time of slow queries. Since the slowlog is kept only in memory, it is very cheap and you
# don't need to worry about it affecting Redis performance.
# Only queries whose execution time exceeds slowlog-log-slower-than are considered slow and recorded in the slowlog.
# The unit is microseconds.
slowlog-log-slower-than 10000

# slowlog-max-len is the maximum number of entries kept in the slow log.
slowlog-max-len 128
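# Example (illustrative, not part of the default file): the slow log is read with commands rather than from a file:
#   SLOWLOG GET 10     returns the 10 most recent slow-query entries
#   SLOWLOG LEN        returns the number of entries currently stored
#   SLOWLOG RESET      clears the slow log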

############################ EVENT NOTIFICATION ##############################
# This feature lets clients learn about key changes and command execution in the database by subscribing to a given channel or pattern. It is disabled by default.
# The notify-keyspace-events parameter can be any combination of the following characters, which select the classes of events the server will publish:
# K  keyspace notifications, published with a __keyspace@<db>__ prefix
# E  keyevent notifications, published with a __keyevent@<db>__ prefix
# g  generic, type-independent commands such as DEL, EXPIRE, RENAME, ...
# $  string command notifications
# l  list command notifications
# s  set command notifications
# h  hash command notifications
# z  sorted set command notifications
# x  expired events: published whenever a key expires and is deleted
# e  evicted events: published whenever a key is evicted because of the maxmemory policy
# A  alias for "g$lshzxe"
# The parameter must contain at least K or E, otherwise no notification is published regardless of the other characters. For details see http://redis.io/topics/notifications

notify-keyspace-events ""

############################### ADVANCED CONFIG ###############################
# A hash uses the ziplist encoding when it has no more than hash-max-ziplist-entries entries, and the normal hash encoding above that.
hash-max-ziplist-entries 512
# A hash uses the ziplist encoding when every value is no larger than hash-max-ziplist-value bytes, and the normal hash encoding above that.
hash-max-ziplist-value 64

# A list uses the ziplist (compressed list) encoding when it has no more than list-max-ziplist-entries entries, and the normal list encoding above that.
list-max-ziplist-entries 512
# A list uses the ziplist encoding when every value is no larger than list-max-ziplist-value bytes, and the normal list encoding above that.
list-max-ziplist-value 64

# A set uses the intset encoding when it has no more than set-max-intset-entries entries, and the normal set encoding above that.
set-max-intset-entries 512

# A sorted set uses the ziplist encoding when it has no more than zset-max-ziplist-entries entries, and the normal zset encoding above that.
zset-max-ziplist-entries 128
# A sorted set uses the ziplist encoding when every value is no larger than zset-max-ziplist-value bytes, and the normal zset encoding above that.
zset-max-ziplist-value 64

# HyperLogLog is an algorithm for cardinality estimation: it needs only about 12 KB of memory to count the cardinality of close to 2^64 distinct elements.
# This option sets the byte limit for the sparse representation of a HyperLogLog. The value is usually between 0 and 15000; the default is 3000 and it should basically not exceed 16000. A HyperLogLog no larger than hll-sparse-max-bytes uses the sparse representation; above that it switches to the dense representation. Values above 16000 are almost useless; about 3000 is recommended. If CPU is not a concern but memory is, it can be raised to about 10000.
hll-sparse-max-bytes 3000

# Active rehashing. Redis uses 1 millisecond of CPU time every 100 milliseconds to rehash its main hash tables, which helps free memory. If you have very strict latency requirements and cannot accept an occasional 2-millisecond delay on requests, set this to no. Otherwise leave it at yes so that memory is released as soon as possible.
activerehashing yes

# The size of the Redis server's output (that is, the return value of the command) is usually uncontrollable. It is possible that a simple command can produce a large volume of return data. In addition, it is also possible that too many commands are executed, resulting in the rate of generating returned data exceeding the rate of sending to the client. This will also cause the server to accumulate a large number of messages, resulting in an increasing output buffer, occupying too much memory, and even causing the system to crash.
# Used to force the disconnection of a client that cannot read data from the server fast enough for some reason.
#For normal clients (including MONITOR clients). The first 0 disables the hard limit; the second and third 0 disable the soft limit. Normal clients have no limit by default, because they only receive data they ask for.
client-output-buffer-limit normal 0 0 0
#For slave clients: if the output buffer exceeds 256mb (hard limit), or stays above 64mb for 60 seconds (soft limit), the server immediately disconnects the client.
client-output-buffer-limit slave 256mb 64mb 60
#For pubsub clients: if the output buffer exceeds 32mb (hard limit), or stays above 8mb for 60 seconds (soft limit), the server immediately disconnects the client.
client-output-buffer-limit pubsub 32mb 8mb 60

# Frequency at which Redis performs background tasks (for example closing timed-out client connections and actively expiring keys)
hz 10

# Whether to use an incremental fsync policy during AOF rewrite. The default is "yes" and it should be left at yes.
# During rewriting, file synchronization is performed every 32M of data, which can reduce the number of "aof large file" writes to the disk
aof-rewrite-incremental-fsync yes

Usually, the default configuration is enough for you to solve the problem!

No special requirements, do not change the configuration!

3.7 Jedis

Jedis is the Java client API for working with Redis.

<dependency>
    <groupId>redis.clients</groupId>
    <artifactId>jedis</artifactId>
    <version>3.1.0</version>
</dependency>
3.7.1 connecting redis
/**
 * @auther wei
 * @date 2021/9/24 13:48
 * @description Test connection redis
 */
public class Test1 {

    public static void main(String[] args) {
        Jedis jedis = new Jedis("192.168.44.129",6379);
        String pong = jedis.ping();
        System.out.println("pong = " + pong);
    }
}

// Before operation:
// 1. Close the firewall systemctl stop firewalld.service
// 2. Modify redis.conf [bind 0.0.0.0] to allow any ip access, and start the redis service with this redis.conf (restart redis)
// 	redis-server /opt/redis5.0.4/redis.conf
3.7.2 common API
/**
 * @auther wei
 * @date 2021/9/24 13:59
 * @description Common API
 */
public class Test2_API {

    private void testString(){
        Jedis jedis = new Jedis("192.168.44.129",6379);
        // String
        jedis.set("k1","v1");
        jedis.set("k2","v2");
        jedis.set("k3","v3");

        Set<String> set = jedis.keys("*");
        Iterator<String> iterator = set.iterator();
        while (iterator.hasNext()){
            String k = iterator.next();
            System.out.println(k+"->"+jedis.get(k));
        }

        Boolean k2Exists = jedis.exists("k2");  // Check whether k2 exists
        System.out.println("k2Exists = " + k2Exists);
        System.out.println(jedis.ttl("k1"));    // View the expiration time of k1

        //jedis.mset("k4","v4","k5","v5");
        System.out.println(jedis.mget("k1","k2","k3","k4","k5"));
        System.out.println("-------------------------------------------------------");
    }

    private void testList(){
        Jedis jedis = new Jedis("192.168.44.129",6379);

        // list
        //jedis.lpush("list01","l1","l2","l3","l4","l5");
        List<String> list01 = jedis.lrange("list01", 0, -1);
        for (String s : list01) {
            System.out.println(s);
        }
        System.out.println("-------------------------------------------------------");
    }

    private void testSet(){
        Jedis jedis = new Jedis("192.168.44.129",6379);

        // set
        jedis.sadd("order","jd001");
        jedis.sadd("order","jd002");
        jedis.sadd("order","jd003");
        Set<String> order = jedis.smembers("order");
        Iterator<String> order_iterator = order.iterator();
        while(order_iterator.hasNext()){
            String s = order_iterator.next();
            System.out.println(s);
        }
        jedis.srem("order","jd002");
        System.out.println(jedis.smembers("order").size());
    }

    private void testHash(){
        Jedis jedis = new Jedis("192.168.44.129",6379);

        jedis.hset("user1","username","james");
        System.out.println(jedis.hget("user1","username"));

        HashMap<String, String> map = new HashMap<>();
        map.put("username","tom");
        map.put("gender","boy");
        map.put("address","beijing");
        map.put("phone","15152037019");

        jedis.hmset("user2",map);
        List<String> list = jedis.hmget("user2", "username", "phone");
        for (String s : list) {
            System.out.println(s);
        }
    }

    private void testZset(){
        Jedis jedis = new Jedis("192.168.44.129",6379);

        jedis.zadd("zset01",60d,"zs1");
        jedis.zadd("zset01",70d,"zs2");
        jedis.zadd("zset01",80d,"zs3");
        jedis.zadd("zset01",90d,"zs4");

        Set<String> zset01 = jedis.zrange("zset01", 0, -1);
        Iterator<String> iterator = zset01.iterator();
        while(iterator.hasNext()){
            String s = iterator.next();
            System.out.println(s);
        }
    }

    public static void main(String[] args) {
        Test2_API api = new Test2_API();
        //api.testString();   //  Test String
        //api.testList();     //  Test list
        //api.testSet();      //  Test set
        //api.testHash();     //  Test hash
        api.testZset();      // Test zset
    }
}
3.7.3 transactions

Initialize balances and expenses

set yue 100
set zhichu 0
/**
 * @auther wei
 * @date 2021/9/24 14:42
 * @description Test transaction
 */
public class TestTransaction {

    public static void main(String[] args) throws InterruptedException {
        Jedis jedis = new Jedis("192.168.44.129", 6379);

        int yue = Integer.parseInt(jedis.get("yue"));
        int zhichu = 10;

        jedis.watch("yue"); // Monitor balance
        Thread.sleep(5000);     // Analog network delay

        if (yue < zhichu){
            jedis.unwatch();    // Release monitoring
            System.out.println("Sorry, your credit is running low!");
        }else {
            Transaction transaction = jedis.multi();    // Open transaction
            transaction.decrBy("yue",zhichu);      // Decrease in balance
            transaction.incrBy("zhichu",zhichu);   // Cumulative consumption increase
            transaction.exec();
            System.out.println("Balance:" + jedis.get("yue"));
            System.out.println("Accumulated expenditure:" + jedis.get("zhichu"));
        }
    }
}

Simulating network delay: during the 5-second sleep, log into Linux and change the balance to 5. Then balance < expenditure, and the code enters the if branch.
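For reference, a minimal sketch (not part of the course code) of the same optimistic-locking pattern, with WATCH issued before the read and the exec() result checked; in Jedis 3.x, exec() returns null when the watched key was modified, so the operation can simply be retried:

import java.util.List;
import redis.clients.jedis.Jedis;
import redis.clients.jedis.Transaction;

public class TestTransactionRetry {

    public static void main(String[] args) {
        Jedis jedis = new Jedis("192.168.44.129", 6379);
        int zhichu = 10;

        while (true) {
            jedis.watch("yue");                         // watch BEFORE reading the value
            int yue = Integer.parseInt(jedis.get("yue"));
            if (yue < zhichu) {
                jedis.unwatch();                        // release the watch, nothing to do
                System.out.println("Sorry, your credit is running low!");
                break;
            }
            Transaction transaction = jedis.multi();    // open the transaction
            transaction.decrBy("yue", zhichu);
            transaction.incrBy("zhichu", zhichu);
            List<Object> result = transaction.exec();   // null -> the watched key changed
            if (result != null) {
                System.out.println("Balance:" + jedis.get("yue"));
                break;                                  // success
            }
            // another client modified "yue" during the transaction -> retry
        }
        jedis.close();
    }
}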

3.7.4 JedisPool

Connection pool technology of redis

Details: https://help.aliyun.com/document_detail/98726.html

<dependency>
    <groupId>commons-pool</groupId>
    <artifactId>commons-pool</artifactId>
    <version>1.6</version>
</dependency>

Optimize using singleton mode

/**
 * @auther wei
 * @date 2021/9/24 14:55
 * @description Optimizing jedis connection pool in singleton mode
 */
public class JdeisPoolUtil {

    private JdeisPoolUtil(){}

    private volatile static JedisPool jedisPool = null;
    private volatile static Jedis jedis = null;

    // Returns a connection pool
    private static JedisPool getInstance(){
        // Double-checked locking (very common in practice)
        if (jedisPool == null){ // First check, without locking
            synchronized (JdeisPoolUtil.class){ // Acquire the lock
                if (jedisPool == null){ // Second check, inside the lock
                    JedisPoolConfig config = new JedisPoolConfig();
                    config.setMaxTotal(1000);   // Maximum number of connections in the resource pool
                    config.setMaxIdle(30);      // Maximum number of free connections allowed for the resource pool
                    config.setMaxWaitMillis(60*1000);   // When the resource pool connection is exhausted, the maximum waiting time of the caller (milliseconds).
                    config.setTestOnBorrow(true);       // Whether to check the connection validity when borrowing a connection from the resource pool (it is recommended to set it to false when the traffic is large to reduce the overhead of a ping)
                    jedisPool = new JedisPool(config, "192.168.44.129", 6379);
                }
            }
        }
        return jedisPool;
    }

    // Return jedis object
    public static Jedis getJedis(){
        if (jedis == null){
            jedis = getInstance().getResource();
        }
        return jedis;
    }
}

Test class

/**
 * @auther wei
 * @date 2021/9/24 15:05
 * @description Test jedis connection pool
 */
public class Test_JedisPool {

    public static void main(String[] args) {
        Jedis jedis1 = JdeisPoolUtil.getJedis();
        Jedis jedis2 = JdeisPoolUtil.getJedis();

        System.out.println(jedis1 == jedis2);
    }
}
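A minimal usage sketch (illustrative, not the course utility class): in practice, each operation should borrow a connection from the pool and return it with close(), instead of caching a single Jedis object:

import redis.clients.jedis.Jedis;
import redis.clients.jedis.JedisPool;
import redis.clients.jedis.JedisPoolConfig;

public class Test_PoolUsage {

    public static void main(String[] args) {
        JedisPoolConfig config = new JedisPoolConfig();
        config.setMaxTotal(1000);   // maximum number of connections in the pool
        config.setMaxIdle(30);      // maximum number of idle connections
        JedisPool pool = new JedisPool(config, "192.168.44.129", 6379);

        // try-with-resources: close() returns the borrowed connection to the pool
        try (Jedis jedis = pool.getResource()) {
            jedis.set("k1", "v1");
            System.out.println(jedis.get("k1"));
        }

        pool.close();   // shut down the pool when the application exits
    }
}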

3.8 distributed locks under high concurrency

Classic cases: flash sales ("second kill"), grabbing coupons, etc.

3.8.1 build the project and test the single thread

pom.xml

<packaging>war</packaging>

<properties>
    <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
    <maven.compiler.source>11</maven.compiler.source>
    <maven.compiler.target>11</maven.compiler.target>
</properties>

<dependencies>
    <dependency>
        <groupId>org.springframework</groupId>
        <artifactId>spring-webmvc</artifactId>
        <version>5.2.7.RELEASE</version>
    </dependency>
    
    <!--Tool class for implementing distributed locks-->
    <dependency>
        <groupId>org.redisson</groupId>
        <artifactId>redisson</artifactId>
        <version>3.6.1</version>
    </dependency>
    
    <!-- Spring's tool classes for operating Redis -->
    <dependency>
        <groupId>org.springframework.data</groupId>
        <artifactId>spring-data-redis</artifactId>
        <version>2.3.2.RELEASE</version>
    </dependency>
    
    <!--redis client-->
    <dependency>
        <groupId>redis.clients</groupId>
        <artifactId>jedis</artifactId>
        <version>3.1.0</version>
    </dependency>
    
    <!--json Parsing tool-->
    <dependency>
        <groupId>com.fasterxml.jackson.core</groupId>
        <artifactId>jackson-databind</artifactId>
        <version>2.9.8</version>
    </dependency>
</dependencies>
<build>
    <plugins>
        <plugin>
            <groupId>org.apache.tomcat.maven</groupId>
            <artifactId>tomcat7-maven-plugin</artifactId>
            <configuration>
                <port>8001</port>
                <path>/</path>
            </configuration>
            <executions>
                <execution>
                    <!-- Run the service after packaging -->
                    <phase>package</phase>
                    <goals>
                        <goal>run</goal>
                    </goals>
                </execution>
            </executions>
        </plugin>
    </plugins>
</build>

web.xml

<?xml version="1.0" encoding="UTF-8"?>
<web-app xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xmlns="http://xmlns.jcp.org/xml/ns/javaee"
         xsi:schemaLocation="http://xmlns.jcp.org/xml/ns/javaee
                             http://xmlns.jcp.org/xml/ns/javaee/web-app_3_1.xsd"
         id="WebApp_ID" version="3.1">
    <servlet>
        <servlet-name>springmvc</servlet-name>
        <servlet-class>org.springframework.web.servlet.DispatcherServlet</servlet-class>
        <init-param>
            <param-name>contextConfigLocation</param-name>
            <param-value>classpath:spring/spring.xml</param-value>
        </init-param>
    </servlet>
    <servlet-mapping>
        <servlet-name>springmvc</servlet-name>
        <url-pattern>/</url-pattern>
    </servlet-mapping>
</web-app>

spring.xml

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:context="http://www.springframework.org/schema/context"
       xsi:schemaLocation="
       	http://www.springframework.org/schema/beans
        http://www.springframework.org/schema/beans/spring-beans.xsd
        http://www.springframework.org/schema/context
        http://www.springframework.org/schema/context/spring-context.xsd">

    <context:component-scan base-package="controller"/>

    <!-- Template tool class provided by Spring for connecting to Redis -->
    <bean id="stringRedisTemplate" class="org.springframework.data.redis.core.StringRedisTemplate">
        <property name="connectionFactory" ref="connectionFactory"></property>
    </bean>

    <bean id="connectionFactory" class="org.springframework.data.redis.connection.jedis.JedisConnectionFactory">
        <property name="hostName" value="192.168.44.129"></property>
        <property name="port" value="6379"/>
    </bean>
</beans>

Test class

/**
 * @auther wei
 * @date 2021/9/24 15:15
 * @description Test second kill
 */
@Controller
public class TestKill {

    @Autowired
    private StringRedisTemplate stringRedisTemplate;

    @RequestMapping("/kill")
    // This only solves concurrency within a single Tomcat: synchronized locks only threads inside one process. With multiple processes in a distributed deployment, this scheme fails!
    public @ResponseBody synchronized String kill(){
        // 1. Obtain the phone inventory from redis
        int phoneCount = Integer.parseInt(stringRedisTemplate.opsForValue().get("phone"));

        // 2. Judge whether the number of mobile phones is enough
        if (phoneCount > 0){
            phoneCount--;
            // After the inventory is reduced, return the inventory value to redis
            stringRedisTemplate.opsForValue().set("phone",phoneCount+"");
            System.out.println("stock-1,surplus:" + phoneCount);
        }else {
            System.out.println("Insufficient inventory!");
        }
        return "over";
    }

}
3.8.2 high concurrency test

1. Start two projects with port numbers 8001 and 8002 respectively

2. Use nginx for load balancing

upstream wei{
    server 192.168.44.1:8001;
    server 192.168.44.1:8002;
}
server {
    listen 80;
    server_name localhost;
    #charset koi8-r;
    #access_log logs/host.access.log main;
    location / {
        proxy_pass http://wei;
        root html;
        index index.html index.htm;
    }
}

/usr/local/nginx/sbin/nginx -c /usr/local/nginx/conf/nginx.conf

3. Use JMeter to simulate 100 http requests within 1 second, and you will find that the same commodity will be snapped up by two servers at the same time!

3.8.3 idea of realizing distributed lock

1. Because redis is single threaded, the command is atomic. Use the setnx command to implement the lock and save the k-v

If k does not exist, save (lock the current thread). After execution, delete k to release the lock

If k already exists, the thread execution is blocked, indicating that there is a lock

2. If the locking is successful, an exception occurs during the execution of the business code, resulting in the failure to delete k (failed to release the lock), it will cause a deadlock (all subsequent threads cannot execute)!

Set the expiration time. For example, redis will automatically delete after 10 seconds

3. Under high concurrency, because of load and other factors, each thread's execution time differs

The first thread takes 13 seconds; at the 10-second mark redis expires k automatically (the lock is released early)

The second thread takes 7 seconds; it acquires the lock, and 3 seconds into its execution the first thread finishes and actively deletes k, releasing a lock that no longer belongs to it

... the chain reaction is that the lock a thread has just acquired keeps being released by other threads, over and over, so the lock effectively fails permanently

4. Give each thread a unique identifier, a randomly generated UUID, and check that the identifier is its own before releasing the lock (see the sketch after this list)

5. The next problem: what should the expiration time be?

If 10 seconds is too short, the business code may not finish in time

Setting 60 seconds is too long and wasteful

You can start a watchdog (timer) thread: when the remaining time drops below 1/3 of the total expiration time, extend the expiration time (like taking an elixir to extend your life!)

It's too difficult to realize distributed lock by yourself!
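A minimal sketch of ideas 1-4 above (illustrative only; the key name and timings are made up): SET with NX + EX acquires the lock and sets the expiration atomically, a random UUID identifies the owner, and a small Lua script compares the value before deleting, so a thread can only release its own lock:

import java.util.Collections;
import java.util.UUID;
import redis.clients.jedis.Jedis;
import redis.clients.jedis.params.SetParams;

public class SimpleRedisLock {

    public static void main(String[] args) {
        Jedis jedis = new Jedis("192.168.44.129", 6379);
        String lockKey = "lock:phone";                      // illustrative lock key
        String token = UUID.randomUUID().toString();        // unique identifier of this thread

        // SET lockKey token NX EX 10 : acquire the lock only if the key does not exist, auto-expire after 10 seconds
        String ok = jedis.set(lockKey, token, SetParams.setParams().nx().ex(10));
        if ("OK".equals(ok)) {
            try {
                // ... business code (check stock, decrement, write back) ...
            } finally {
                // Release only our own lock: compare the token and delete it in one atomic Lua call
                String lua = "if redis.call('get', KEYS[1]) == ARGV[1] " +
                             "then return redis.call('del', KEYS[1]) else return 0 end";
                jedis.eval(lua, Collections.singletonList(lockKey),
                                Collections.singletonList(token));
            }
        } else {
            System.out.println("The lock is held by another thread/process");
        }
        jedis.close();
    }
}

Automatically extending the expiration time (idea 5) is the kind of work a library can take off our hands, which is why the next section switches to Redisson.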

3.8.4 Redisson

Redis is one of the most popular NoSQL database solutions, and Java is one of the most popular programming languages in the world.

Although the two seem to "work" together naturally, you should know that Redis does not provide native support for Java.

On the contrary, as Java developers, if we want to integrate Redis in the program, we must use Redis's third-party library.

Redisson is a library for operating Redis in Java programs, which makes it easy for us to use Redis in programs.

Based on the common interfaces in java.util, Redisson provides us with a series of tool classes with distributed characteristics.

/**
 * @auther wei
 * @date 2021/9/24 15:15
 * @description Test second kill
 */
@Controller
public class TestKill {

    @Autowired
    private Redisson redisson;

    @Autowired
    private StringRedisTemplate stringRedisTemplate;

    @RequestMapping("/kill")
    // This only solves concurrency within a single Tomcat: synchronized locks only threads inside one process. With multiple processes in a distributed deployment, this scheme fails!
    public @ResponseBody synchronized String kill(){

        // Define product id
        String productKey = "HUAWEI-P40";
        // Obtain the lock from redisson
        RLock rLock = redisson.getLock(productKey); // The underlying source code integrates setnx and expiration time operation

        // Lock (expiration time is 30 seconds)
        rLock.lock(30, TimeUnit.SECONDS);

        try {
            // 1. Obtain the phone inventory from redis
            int phoneCount = Integer.parseInt(stringRedisTemplate.opsForValue().get("phone"));

            // 2. Judge whether the number of mobile phones is enough
            if (phoneCount > 0){
                phoneCount--;
                // After the inventory is reduced, return the inventory value to redis
                stringRedisTemplate.opsForValue().set("phone",phoneCount+"");
                System.out.println("stock-1,surplus:" + phoneCount);
            }else {
                System.out.println("Insufficient inventory!");
            }
        } catch (Exception e) {
            e.printStackTrace();
        } finally {
            rLock.unlock();
        }

        return "over";
    }

    @Bean
    public Redisson redisson(){
        Config config = new Config();
        // Using a single redis server
        config.useSingleServer().setAddress("redis://192.168.44.129:6379").setDatabase(0);
        // Use cluster redis server
        //config.useClusterServers().setScanInterval(2000).addNodeAddress("redis://192.168.44.129:6379","redis://192.168.44.130:6379","redis://192.168.44.131:6379");
        return (Redisson) Redisson.create(config);
    }

}

In fact, there are many schemes to implement distributed locks. The zookeeper we used before is characterized by high reliability, and the redis we use now is characterized by high performance.

At present, Redis is still the most widely used distributed lock

Distributed file system FastDFS

1. Scenario overview

Tmall, Taobao and other shopping websites have too many pictures, videos and files. How to store them?

How to ensure the download speed when there is a large number of user visits? Distributed file system is to solve these problems!

1.1 what is a file system

How is file data stored??

[the external chain image transfer fails. The source station may have an anti-theft chain mechanism. It is recommended to save the image and upload it directly (img-xpy8ledh-163305745123) (E: \ markdown \ hook notes \ FastDFS what is a file system)]

1.2 distributed file system

A computer has limited storage capacity and concurrent throughput. How to improve performance?

One ton of goods needs to be transported to Turpan:

With 1 person, it is unthinkable;

With 50 people, it is still very hard;

With 500 people, everyone has an easy job.

Is this "distributed"?

Answer: "cluster" and "distributed" are two different concepts; don't confuse them. This is a classic interview question.

Distributed: different business modules are deployed on different servers, or the same business module splits multiple sub services and deploys them on different servers. Solve the problem of high concurrency;

Cluster: the same service is deployed on multiple servers to improve the high availability of the system

For example:

There was only one cook in the small restaurant, who cut vegetables, washed vegetables and prepared materials with one hand. With more and more guests, one chef can't be busy, so he can only hire another chef. Both chefs can cook, that is, the role of two chefs is the same. In this way, the relationship between two chefs is "cluster";

In order to let the chef concentrate on cooking and stir fry the dishes to the extreme, he invited the garnish teacher to be responsible for cutting and preparing materials. The relationship between chef and cook is "distributed";

A cook is too busy to provide two ingredients to two chefs, and another cook is invited. The relationship between the two cooks is "cluster".

1.3 mainstream distributed file system

1.3.1 HDFS

(Hadoop Distributed File System)Hadoop distributed file system;

High error tolerance system, suitable for deployment to cheap machines;

It can provide high-throughput data access, which is very suitable for large-scale data applications;

HDFS adopts master-slave structure. An HDFS is composed of a name node and N data nodes;

The name node stores metadata; a file is split into N parts that are stored on different data nodes.

1.3.2 GFS

Google File System

Scalable distributed file system for large, distributed applications that access a large amount of data;

It runs on cheap common hardware and can provide fault tolerance function;

It can provide services with high overall performance to a large number of users;

GFS adopts a master-slave structure. A GFS cluster is composed of a master and a large number of chunkservers;

A file is divided into several chunks that are stored on multiple chunkservers

1.3.3.FastDFS

Developed and open-sourced by Yu Qing, a senior architect at Taobao;

It is specially tailored for the Internet, fully considers redundant backup, load balancing, linear capacity expansion and other mechanisms, and pays attention to high availability, high performance and other indicators. It is easy to build a set of high-performance file server cluster using FastDFS to provide file upload, download and other services;

HDFS and GFS are general file systems. Their advantage is good development experience, but the complexity of the system is high and the performance is average;

In contrast, the dedicated distributed file system has poor experience, but low complexity and high performance. fastDFS is especially suitable for small files such as pictures and small videos. Because fastDFS does not divide files, there is no overhead of file merging;

It uses sockets directly for network communication, which is fast.

1.4 working principle

fastDFS includes Tracker Server and Storage Server;

The client requests the Tracker Server to upload and download files;

The Tracker Server schedules the Storage Server to finally complete the upload and download

[the external chain picture transfer fails. The source station may have an anti-theft chain mechanism. It is recommended to save the picture and upload it directly (img-qcz2vp8k-163305745125) (E: \ markdown \ hook notes \ fastDFS working principle)]

Tracker

Its function is load balancing and scheduling. It manages the Storage Server, which can be understood as "big housekeeper, tracker and dispatcher";

Tracker Servers can be clustered for high availability; requests are distributed among them in round-robin fashion.

Storage

The function is to store files. The files uploaded by the client are finally stored on the storage server;

storage clusters are grouped. Each server in the same group has an equal relationship and data synchronization. The purpose is to achieve data backup and high availability, while servers in different groups do not communicate;

If servers in the same group have different capacities, the group's usable capacity is limited by the smallest server, so the hardware and software of servers within a group should be identical.

The Storage Server will connect to all tracker servers in the cluster and regularly report their status to them, such as remaining space, file synchronization, file upload and download times, etc.

1.5 upload / download principle

[the external chain picture transfer fails. The source station may have an anti-theft chain mechanism. It is recommended to save the picture and upload it directly (img-e52xnrg9-163305745126) (E: \ markdown \ hook notes \ FastDFS upload principle)]

[the external chain picture transfer fails. The source station may have an anti-theft chain mechanism. It is recommended to save the picture and upload it directly (img-lb6jn0ph-163305745129) (E: \ markdown \ hook notes \ FastDFS download principle)]

After the client uploads the file, storage will return the file id to the client

group1/M00/02/11/aJxAeF21O5wAAAAAAAAGaEIOA12345.sh

Group name: the name of the storage group after the file is uploaded. After the file is uploaded successfully, it is returned by storage and needs to be saved by the client.

Virtual disk path:

The virtual path configured for storage, corresponding to the store_path option on disk:

store_path0 corresponds to M00,

store_path1 corresponds to M01,

Data two-level directory: the two-level directory created by storage under the virtual disk path to store files.

File name: different from the uploaded file name; it is generated by storage from information such as the storage server's IP, the creation timestamp, the file size, the suffix, etc.

2. Upload and download of fastdfs

2.1 installation

2.1.1 install gcc (required at compile time)
yum install -y gcc gcc-c++
2.1.2 installing libevent (runtime requirements)
yum -y install libevent
2.1.3 installing libfastcommon

libfastcommon is officially provided by FastDFS. libfastcommon contains some basic libraries required for the operation of FastDFS.

1. Upload libfastcommon-master.zip to / opt

Install the unzip tool: yum install -y unzip
 Unzip the package: unzip libfastcommon-master.zip
 Enter the directory: cd libfastcommon-master

2. Compilation

./make.sh

If: make.sh has insufficient permissions, authorization (executable rights) is required

chmod 777 make.sh

3. Installation

./make.sh install

After libfastcommon is installed, the libfastcommon.so library file will be generated in the / usr/lib64 directory

4. Copy library files

cd /usr/lib64
cp libfastcommon.so /usr/lib
2.1.4 installing Tracker

1. Download FastDFS_v5.05.tar.gz and upload to / opt

tar -zxvf FastDFS_v5.05.tar.gz
cd FastDFS
./make.sh
./make.sh install

2. The installation is successful. Copy the files under conf in the installation directory to / etc/fdfs /

cp /opt/FastDFS/conf/* /etc/fdfs/

2.2 configuration

1.Tracker configuration

vim /etc/fdfs/tracker.conf
#Port number
port=22122
#Base directory (the tracker stores its management data about storages here while running)
#If the base directory does not exist, create it first: mkdir /home/fastdfs
base_path=/home/fastdfs

2.Storage configuration

vim /etc/fdfs/storage.conf
#Configuration group name
group_name=group1
#port
port=23000
#Heartbeat interval to tracker (seconds)
heart_beat_interval=30
#storage base directory
#The directory does not exist. You need to create it yourself
base_path=/home/fastdfs
#Store where files are stored (store_path)
#Think of it as one path per disk; with multiple disks you configure multiple store_paths
#The fdfs_storage directory does not exist. You need to create it yourself
#mkdir /home/fastdfs/fdfs_storage
store_path0=/home/fastdfs/fdfs_storage
#If there are multiple mounted disks, define multiple store_paths as follows
#store_path1=..... (M01)
#store_path2=..... (M02)

#Configure tracker server: IP
tracker_server=192.168.44.129:22122
#If there are multiple trackers, configure one line per tracker
#tracker_server=192.168.44.x:22122

2.3 start up service

1. Start the tracker

/usr/bin/fdfs_trackerd /etc/fdfs/tracker.conf restart

2. Start storage

/usr/bin/fdfs_storaged /etc/fdfs/storage.conf restart

3. View all running ports:

netstat -ntlp

2.4 building Java project

Create maven project using IDEA

2.4.1 pom.xml
<!--fastdfs of java client-->
<dependency>
    <groupId>net.oschina.zcx7878</groupId>
    <artifactId>fastdfs-client-java</artifactId>
    <version>1.27.0.0</version>
</dependency>
<dependency>
    <groupId>org.apache.commons</groupId>
    <artifactId>commons-io</artifactId>
    <version>1.3.2</version>
</dependency>
2.4.2 creating a profile

Create the config directory under resources and fastdfs-client.properties under the config directory, as follows:

##fastdfs-client.properties
fastdfs.connect_timeout_in_seconds = 5
fastdfs.network_timeout_in_seconds = 30
fastdfs.charset = UTF-8
fastdfs.http_anti_steal_token = false
fastdfs.http_secret_key = FastDFS1234567890
fastdfs.http_tracker_http_port = 80
fastdfs.tracker_servers = 192.168.44.129:22122
2.4.3 file upload
/**
 * @auther wei
 * @date 2021/9/25 15:54
 * @description File upload
 */
public class TestUpload {

    public static void main(String[] args) {

        try {
            // 1. Load configuration file
            ClientGlobal.initByProperties("config/fastdfs-client.properties");

            // 2. Create a tracker client
            TrackerClient trackerClient = new TrackerClient();

            // 3. Get the connection service of the tracker through the tracker client and return it
            TrackerServer trackerServer = trackerClient.getConnection();

            // 4. Declare storage service
            StorageServer storageServer = null;

            // 5. Define storage client
            StorageClient1 client = new StorageClient1(trackerServer, storageServer);

            // 6. Define file meta information
            NameValuePair[] list = new NameValuePair[1];
            list[0] = new NameValuePair("fileName","1.jpg");

            // 7. Upload
            String fileID = client.upload_file1("G:\\1.jpg", "jpg", list);
            System.out.println("fileID = " + fileID);
            // group1/M00/00/00/wKgsgWFO2OmAEE5XAExtg1rxSVE472.jpg
            /**
             * group1: A server is a group
             * M00: storage_path0 ----> /home/fastdfs/fdfs_storage/data
             * 00/00: Two level data directory
             */

            // Shut down service
            trackerServer.close();
        } catch (Exception e) {
            e.printStackTrace();
        }

    }
}
2.4.4 file query
/**
 * @auther wei
 * @date 2021/9/25 16:12
 * @description File query
 */
public class TestQuery {

    public static void main(String[] args) {

        try {
            // 1. Load configuration file
            ClientGlobal.initByProperties("config/fastdfs-client.properties");

            // 2. Create a tracker client
            TrackerClient trackerClient = new TrackerClient();

            // 3. Get the connection service of the tracker through the tracker client and return it
            TrackerServer trackerServer = trackerClient.getConnection();

            // 4. Declare storage service
            StorageServer storageServer = null;

            // 5. Define storage client
            StorageClient1 client = new StorageClient1(trackerServer, storageServer);

            // 6. Query
            FileInfo fileInfo = client.query_file_info1("group1/M00/00/00/wKgsgWFO2OmAEE5XAExtg1rxSVE472.jpg");
            //System.out.println(fileInfo);

            if (fileInfo != null){
                System.out.println("fileInfo = " + fileInfo);
            }else {
                System.out.println("No such file found!");
            }

            trackerServer.close();

        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
2.4.5 file download
/**
 * @auther wei
 * @date 2021/9/25 16:17
 * @description File download
 */
public class TestDownload {

    public static void main(String[] args) {

        try {
            // 1. Load configuration file
            ClientGlobal.initByProperties("config/fastdfs-client.properties");

            // 2. Create a tracker client
            TrackerClient trackerClient = new TrackerClient();

            // 3. Get the connection service of the tracker through the tracker client and return it
            TrackerServer trackerServer = trackerClient.getConnection();

            // 4. Declare storage service
            StorageServer storageServer = null;

            // 5. Define storage client
            StorageClient1 client = new StorageClient1(trackerServer, storageServer);

            // 6. Download
            byte[] bytes = client.download_file1("group1/M00/00/00/wKgsgWFO2OmAEE5XAExtg1rxSVE472.jpg");

            // 7. Convert byte array into file through IO stream
            FileOutputStream fileOutputStream = new FileOutputStream(new File("G:/xxxxx.jpg"));
            fileOutputStream.write(bytes);
            fileOutputStream.close();

            trackerServer.close();
            System.out.println("Download complete!");
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}

3. Project practice

Master the use of fastDFS in real projects;

Master fastDFS to realize image server;

3.1 build a picture server

3.1.1 Nginx module installation (Storage)
  1. Upload fastdfs-nginx-module_v1.16.tar.gz to /opt
  2. Unzip the nginx module
tar -zxvf fastdfs-nginx-module_v1.16.tar.gz
  3. Modify the config file, changing the /usr/local/ paths in it to /usr/
cd /opt/fastdfs-nginx-module/src
vim config
  4. Copy mod_fastdfs.conf from fastdfs-nginx-module/src to /etc/fdfs
cp mod_fastdfs.conf /etc/fdfs/
  5. Modify /etc/fdfs/mod_fastdfs.conf
vim /etc/fdfs/mod_fastdfs.conf
base_path=/home/fastdfs
tracker_server=192.168.44.129:22122
#(for n trackers, configure n lines)
#tracker_server=192.168.44.x:22122
#The url contains the group name
url_have_group_name=true
#Specify the file storage path (the store_path configured above)
store_path0=/home/fastdfs/fdfs_storage
  6. Copy libfdfsclient.so to /usr/lib
cp /usr/lib64/libfdfsclient.so /usr/lib/
  7. Create the nginx/client directory
mkdir -p /var/temp/nginx/client
3.1.2 Nginx installation (Tracker)
  1. Upload nginx-1.14.0.tar.gz to /opt (skip if nginx is already installed)
  2. Unzip: tar -zxvf nginx-1.14.0.tar.gz (skip if nginx is already installed)
  3. Install dependent libraries (skip if nginx is already installed)
yum install pcre
yum install pcre-devel
yum install zlib
yum install zlib-devel
yum install openssl
yum install openssl-devel
  4. Enter the unzipped nginx directory: cd /opt/nginx-1.14.0
  5. Configure
./configure \
--prefix=/usr/local/nginx \
--pid-path=/var/run/nginx/nginx.pid \
--lock-path=/var/lock/nginx.lock \
--error-log-path=/var/log/nginx/error.log \
--http-log-path=/var/log/nginx/access.log \
--with-http_gzip_static_module \
--http-client-body-temp-path=/var/temp/nginx/client \
--http-proxy-temp-path=/var/temp/nginx/proxy \
--http-fastcgi-temp-path=/var/temp/nginx/fastcgi \
--http-uwsgi-temp-path=/var/temp/nginx/uwsgi \
--http-scgi-temp-path=/var/temp/nginx/scgi \
--add-module=/opt/fastdfs-nginx-module/src

**Note:** the temporary file directory is specified as /var/temp/nginx above. You need to create the temp and nginx directories under /var: mkdir -p /var/temp/nginx

  6. Compile: make
  7. Install: make install
  8. Copy the configuration files
cd /opt/FastDFS/conf
cp http.conf mime.types /etc/fdfs/
Overwrite: yes
  9. Modify the nginx configuration file
cd /usr/local/nginx/conf/
vim nginx.conf
server {
    listen 80;
    server_name 192.168.44.129;
    #charset koi8-r;
    #access_log logs/host.access.log main;
    location /group1/M00 {
        root /home/fastdfs/fdfs_storage/data;
        ngx_fastdfs_module;
    }
}
  10. Stop nginx and start it again
pkill -9 nginx
/usr/local/nginx/sbin/nginx -c /usr/local/nginx/conf/nginx.conf
  11. Visit nginx and view the pictures

http://192.168.44.129

http://192.168.44.129/group1/M00/00/00/wKgsgWFO2OmAEE5XAExtg1rxSVE472.jpg

3.2 create front page

<%@ page contentType="text/html;charset=UTF-8" language="java" %>
<html>
<head>
    <title>Upload pictures</title>
</head>
<body>

    <%-- A file upload carries much more data than plain text, so it must be submitted with the POST method --%>
    <%-- File uploads are also received differently from ordinary text on the action side, so a form that submits files is declared as a "multipart form" --%>
    <form action="upload" method="post" enctype="multipart/form-data">
        
        <input type="file" name="fname"><br>
        <button>Submit</button>
        
    </form>

</body>
</html>

3.3 building web Services

3.3.1 pom.xml
<packaging>war</packaging>

<properties>
    <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
    <maven.compiler.source>11</maven.compiler.source>
    <maven.compiler.target>11</maven.compiler.target>
</properties>

<dependencies>
    <!-- The project contains JSP pages, so the servlet dependency is needed -->
    <dependency>
        <groupId>javax.servlet</groupId>
        <artifactId>servlet-api</artifactId>
        <scope>provided</scope>
        <version>2.5</version>
    </dependency>

    <!-- Requests submitted by the page are handled by Spring MVC -->
    <dependency>
        <groupId>org.springframework</groupId>
        <artifactId>spring-webmvc</artifactId>
        <version>5.2.7.RELEASE</version>
    </dependency>

    <!-- Java client for connecting to FastDFS -->
    <dependency>
        <groupId>net.oschina.zcx7878</groupId>
        <artifactId>fastdfs-client-java</artifactId>
        <version>1.27.0.0</version>
    </dependency>

    <!-- IO utilities used when uploading pictures to FastDFS -->
    <dependency>
        <groupId>org.apache.commons</groupId>
        <artifactId>commons-io</artifactId>
        <version>1.3.2</version>
    </dependency>

    <!-- File-upload IO support needed to save the picture on the web server -->
    <dependency>
        <groupId>commons-fileupload</groupId>
        <artifactId>commons-fileupload</artifactId>
        <version>1.3.1</version>
    </dependency>

    <!-- Converts between Java objects and JSON strings; note that versions above 2.7 must be paired with Spring 5.0 or later -->
    <dependency>
        <groupId>com.fasterxml.jackson.core</groupId>
        <artifactId>jackson-databind</artifactId>
        <version>2.9.8</version>
    </dependency>
</dependencies>

<build>
    <plugins>
        <plugin>
            <groupId>org.apache.tomcat.maven</groupId>
            <artifactId>tomcat7-maven-plugin</artifactId>
            <configuration>
                <port>8001</port>
                <path>/</path>
            </configuration>
            <executions>
                <execution>
                    <phase>package</phase>
                    <goals>
                        <goal>run</goal>
                    </goals>
                </execution>
            </executions>
        </plugin>
    </plugins>
</build>
3.3.2 web.xml
<?xml version="1.0" encoding="UTF-8"?>
<web-app xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xmlns="http://xmlns.jcp.org/xml/ns/javaee"
         xsi:schemaLocation="
         	http://xmlns.jcp.org/xml/ns/javaee
 			http://xmlns.jcp.org/xml/ns/javaee/web-app_3_1.xsd" id="WebApp_ID" version="3.1">
    
    <servlet>
        <servlet-name>springMVC</servlet-name>
        <servlet-class>org.springframework.web.servlet.DispatcherServlet</servlet-class>
        <init-param>
            <param-name>contextConfigLocation</param-name>
            <param-value>classpath:spring/spring-mvc.xml</param-value>
        </init-param>
    </servlet>
    
    <servlet-mapping>
        <servlet-name>springMVC</servlet-name>
        <url-pattern>/</url-pattern>
    </servlet-mapping>
    
</web-app>
3.3.3 spring-mvc.xml
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:context="http://www.springframework.org/schema/context"
       xmlns:mvc="http://www.springframework.org/schema/mvc"
       xsi:schemaLocation="
        http://www.springframework.org/schema/beans
        http://www.springframework.org/schema/beans/spring-beans.xsd
        http://www.springframework.org/schema/context
        http://www.springframework.org/schema/context/spring-context.xsd
        http://www.springframework.org/schema/mvc
        http://www.springframework.org/schema/mvc/spring-mvc.xsd">

    <!--Scan annotation package-->
    <context:component-scan base-package="controller"/>

    <!-- Enable annotation support in controllers, e.g. @ResponseBody -->
    <mvc:annotation-driven/>

    <!--Parser for uploading files (specify the size limit of uploaded files)-->
    <bean id="multipartResolver" class="org.springframework.web.multipart.commons.CommonsMultipartResolver">
        <!--Limit file maximum: 2 GB-->
        <property name="maxUploadSize" value="2048000000"/>
    </bean>

</beans>
3.3.4 file entity class
public class FileSystem implements Serializable {
    
    private String fileId;
    private String filePath;
    private String fileName;
    
    @Override
    public String toString() {
        return "FileSystem{fileId='" + fileId + "', filePath='" + filePath + "', fileName='" + fileName + "'}";
    }

    // getters and setters omitted for brevity
}
3.3.5 control layer
/**
 * @auther wei
 * @date 2021/9/25 17:24
 * @description Controller for processing uploaded files
 */
@Controller
public class FileAction {

    /**
     * @param request Request object for multipart form
     * @return  The json object that uploads the file object
     * @throws Exception
     *
     * Upload process:
     * 1.Save the file to the web server first
     * 2.Upload files to FastDFS from the web server
     */
    @RequestMapping("/upload")
    public @ResponseBody FileSystem upload(MultipartHttpServletRequest request) throws Exception{
        // MultipartHttpServletRequest: an enhanced version of httpServletRequest. It can not only contain text information, but also picture information
        FileSystem fileSystem = new FileSystem();

        /* 1.Save the file to the web server */
        // Obtain the uploaded file object from the page request
        MultipartFile file = request.getFile("fname");

        // Gets the original name of the file from the file object
        String oldFileName = file.getOriginalFilename();

        // Obtain the suffix of the file from the original file name by string interception
        String hou = oldFileName.substring(oldFileName.lastIndexOf(".") + 1);

        // To avoid overwriting a file with the same name, a new file name is generated
        String newFileName = UUID.randomUUID().toString() + "." + hou;

        // Target file on the web server (create the G:/upload directory in advance, otherwise an exception is thrown because the path cannot be found)
        File toSaveFile = new File("G:/upload/" + newFileName);

        // Save the uploaded content to the target file on the web server
        file.transferTo(toSaveFile);

        // Absolute path to the server
        String newFilePath = toSaveFile.getAbsolutePath();

        /* 2.Upload files from web server to FastDFS */
        // Load the configuration file
        ClientGlobal.initByProperties("config/fastdfs-client.properties");

        // Create tracker client
        TrackerClient trackerClient = new TrackerClient();

        //Get the connection service of the tracker through the tracker client and return it
        TrackerServer trackerServer = trackerClient.getConnection();

        // Declare storage service
        StorageServer storageServer = null;

        // Define storage client
        StorageClient1 client = new StorageClient1(trackerServer,storageServer);

        // Define file meta information
        NameValuePair[] list = new NameValuePair[1];
        list[0] = new NameValuePair("fileName",oldFileName);

        // upload
        String fileId = client.upload_file1(newFilePath, hou, list);
        //System.out.println(fileId);
        trackerServer.close();

        // Encapsulating fileSystem objects
        fileSystem.setFileId(fileId);
        fileSystem.setFileName(oldFileName);
        fileSystem.setFilePath(fileId);   // The picture uploaded to FastDFS is accessed through its fileId, so the fileId serves as the file path

        return fileSystem;
    }

}
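
After a successful upload, Jackson serializes the returned FileSystem object into the JSON response body, which looks roughly like this (the fileId below reuses the example picture path from 3.1; the file name is made up):

{
  "fileId": "group1/M00/00/00/wKgsgWFO2OmAEE5XAExtg1rxSVE472.jpg",
  "filePath": "group1/M00/00/00/wKgsgWFO2OmAEE5XAExtg1rxSVE472.jpg",
  "fileName": "example.jpg"
}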
3.3.6 add fastDFS configuration file

Create the config directory under resources and fastdfs-client.properties under config directory

Reference: 2.4.2
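
For reference, a minimal fastdfs-client.properties only needs to point the client at the tracker; a sketch (the timeout and charset values here are assumptions, the tracker address is the one used above):

fastdfs.connect_timeout_in_seconds = 5
fastdfs.network_timeout_in_seconds = 30
fastdfs.charset = UTF-8
fastdfs.tracker_servers = 192.168.44.129:22122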

3.3.7 start fastDFS service and the test starts
[root@localhost /]# /usr/bin/fdfs_trackerd /etc/fdfs/tracker.conf restart
[root@localhost /]# /usr/bin/fdfs_storaged /etc/fdfs/storage.conf restart
[root@localhost /]# /usr/local/nginx/sbin/nginx -c /usr/local/nginx/conf/nginx.conf
[root@localhost /]# netstat -ntlp
[root@localhost /]# systemctl stop firewalld.service
[root@localhost /]# cd /home/fastdfs/fdfs_storage/data/
[root@localhost /]# ls

3.4 typical errors

Restart the linux server, and nginx may fail to start:

[root@localhost logs]# /usr/local/nginx/sbin/nginx -c /usr/local/nginx/conf/nginx.conf
[root@localhost /]# nginx: [emerg] open() "/var/run/nginx/nginx.pid" failed (2:No such file or directory)

The reason for this error is that the directory holding the pid file (/var/run/nginx) no longer exists after the reboot; edit the nginx configuration file and change the pid path:

vim /usr/local/nginx/conf/nginx.conf
pid /usr/local/nginx/logs/nginx.pid;

Start nginx again, done!

RabbitMQ

1. What is rabbit MQ

1.1 MQ (Message Queue) Message Queue

Message Queuing Middleware is an important component in distributed system

It mainly solves the problems of asynchronous processing, application decoupling, flow peak clipping and so on

To achieve a high-performance, high availability, scalable and ultimately consistent architecture

More message queue products are used: RabbitMQ, RocketMQ, ActiveMQ, ZeroMQ, Kafka, etc

1.1.1 asynchronous processing

After registering, users need to send verification email and mobile phone verification code;

Write the registration information into the database, send the verification email and send the mobile phone. After all three steps are completed, return to the client

[Figure: RabbitMQ message queue - asynchronous processing]

1.1.2 application decoupling

Scenario: the order system needs to notify the inventory system

If the inventory system is abnormal, the order fails to call inventory, resulting in order failure

Reason: the coupling between order system and inventory system is too high

[Figure: RabbitMQ message queue - application decoupling]

Order system: after the user places an order, the order system completes the persistence processing, writes the message to the message queue, returns it to the user, and the order is successful;

Inventory system: subscribe to the order information and obtain the order information. The inventory system performs inventory operation according to the order information;

If: when placing an order, the inventory system cannot operate normally and will not affect the order, because after placing an order, the order system writes to the message queue and no longer cares about other subsequent operations, realizing the application decoupling between the order system and the inventory system;

Therefore, message queuing is typical: producer consumer model

Producers constantly produce messages to the message queue, and consumers constantly get messages from the queue

Because the production and consumption of messages are asynchronous, and only care about the sending and receiving of messages without the intrusion of business logic, the decoupling between producers and consumers is realized

1.1.3 flow peak shaving

Rush buying, second kill and other businesses are aimed at high concurrency scenarios

Because the traffic is too large, the surge will cause the application to hang up. To solve this problem, add a message queue at the front end

[Figure: RabbitMQ message queue - traffic peak shaving]

After receiving the user's request, the server will first write it to the message queue. If it exceeds the length of the queue, it will be discarded and a second kill page will be thrown!

To put it bluntly, the successful second kill is the users who enter the queue;

1.2 introduction to background knowledge

1.2.1 AMQP advanced message queuing protocol

Advanced Message Queuing Protocol is an application layer standard Advanced Message Queuing Protocol that provides unified messaging services

Protocol: rules that must be observed in the process of data transmission

Clients based on this protocol can communicate with message middleware

It is not limited by product, development language and other conditions

1.2.2 JMS

JMS (Java Message Service) is a Java API specification for message services, playing a role similar to that of JDBC

It is an API for message oriented middleware in Java platform, which is used to send messages between two applications or in distributed systems for asynchronous communication

1.2.3 relationship between the two

JMS defines a unified interface and unified message operation; AMQP unifies the data interaction format through the protocol

JMS must be a java language; AMQP is only a protocol and has nothing to do with language

1.2.4 Erlang language

Erlang (['ə:læŋ]) is a general-purpose, concurrency-oriented programming language developed by the CS Lab of Ericsson, the Swedish telecom equipment manufacturer, in order to create a language and runtime environment capable of handling large-scale concurrent activity

It was originally designed by Ericsson for communication applications, such as control switch or transformation protocol, so it is very suitable for building distributed, real-time soft parallel computing system

Erlang runtime environment is a virtual machine, a bit like a Java virtual machine, so that once the code is compiled, it can run anywhere

1.3 why RabbitMQ

At the beginning, we said that there are so many message queue products, why choose RabbitMQ?

First, the name: rabbits run fast and breed prolifically, so Rabbit was chosen as the name of this distributed software

Developed in Erlang, the best partner of AMQP; easy to install and deploy, with a low barrier to entry

An enterprise-grade message queue: highly reliable, proven by extensive production use, with many successful cases at large first-tier companies such as Alibaba and NetEase

There is a powerful WEB management page

Strong community support provides impetus for technological progress

It supports message persistence, message confirmation mechanism, flexible task distribution mechanism, etc., with rich support functions

Cluster expansion is easy, and the performance can be doubled by adding nodes

Conclusion: if you want a message queue that is highly reliable, feature-rich and easy to manage, choose RabbitMQ; if you want very high performance and do not mind occasionally losing some data, Kafka or ZeroMQ will do

The raw performance of Kafka and ZeroMQ is off the charts and can certainly beat RabbitMQ on throughput!

1.4 functions of rabbitmq components

[Figure: RabbitMQ component functions]

Broker: Message Queuing server entity

Virtual Host: Virtual Host

Groups a set of switches (exchanges), message queues and related objects into a whole

A virtual host is a separate server domain that shares the same authentication and encryption environment

Each vhost is essentially a mini RabbitMQ server with its own queue, switch, binding and permission mechanism

vhost is a basic AMQP concept. RabbitMQ's default vhost is /, and the vhost must be specified when connecting

Exchange: switch (routing)

It is used to receive messages sent by producers and route them to queues in the server

Queue: message queue

Used to save messages until they are sent to consumers.

It is the container of messages and the destination of messages.

A message can be put into one or more queues.

The message is always in the queue, waiting for the consumer to connect to the queue and take it away.

Binding: binding, used to associate a message queue with a switch (exchange).

Channel: Channel

An independent bidirectional data flow channel in a multiplexed connection.

A channel is a virtual link established within a real TCP connection

AMQP commands are sent through the channel. Whether publishing messages, subscribing to queues or receiving messages, they are completed through the channel

Because it is very expensive to establish and destroy TCP connections for the operating system, the concept of channel is introduced to reuse TCP connections.

Connection: a network connection, such as a TCP connection.

Publisher: the producer of messages and a client application that publishes messages to the exchange.

Consumer: the consumer of the message, which represents a client application that gets the message from the message queue.

Message: message

A message is unnamed. It consists of a message header and a message body.

The message body is opaque, while the message header is composed of a series of optional attributes, including routing key, priority, delivery mode, etc.

2. How to use rabbit MQ

To install RabbitMQ, you must first install the erlang language environment. Similar to installing tomcat, you must first install the JDK

View matching versions: https://www.rabbitmq.com/which-erlang.html

2.1 RabbitMQ installation and startup

Erlang Download: https://dl.bintray.com/rabbitmq-erlang/rpm/erlang

Socat Download: http://repo.iotti.biz/CentOS/7/x86_64/socat-1.7.3.2-5.el7.lux.x86_64.rpm

RabbitMQ Download: https://www.rabbitmq.com/install-rpm.html#downloads

2.1.1 installation
[root@localhost opt]# rpm -ivh erlang-21.3.8.16-1.el7.x86_64.rpm
[root@localhost opt]# rpm -ivh socat-1.7.3.2-5.el7.lux.x86_64.rpm
[root@localhost opt]# rpm -ivh rabbitmq-server-3.8.6-1.el7.noarch.rpm
2.1.2 start the background management plug-in
[root@localhost opt]# rabbitmq-plugins enable rabbitmq_management
2.1.3 start RabbitMQ
[root@localhost opt]# systemctl start rabbitmq-server.service
[root@localhost opt]# systemctl status rabbitmq-server.service
[root@localhost opt]# systemctl restart rabbitmq-server.service
[root@localhost opt]# systemctl stop rabbitmq-server.service
2.1.4 viewing process
[root@localhost opt]# ps -ef | grep rabbitmq
2.1.5 testing

1. Close the firewall: systemctl stop firewalld

2. Browser input: http://ip:15672

3. Default account/password: guest/guest. By default the guest user is not allowed to connect remotely

  1. Create account
[root@localhost opt]# rabbitmqctl add_user wei 123456
  2. Set user roles
[root@localhost opt]# rabbitmqctl set_user_tags wei administrator
  3. Set user permissions
[root@localhost opt]# rabbitmqctl set_permissions -p "/" wei ".*" ".*" ".*"
  4. View current users and roles
[root@localhost opt]# rabbitmqctl list_users
  5. Modify current user password
[root@localhost opt]# rabbitmqctl change_password wei 123123

4. Introduction to management interface

overview: overview

connections: view links

channels: channel condition

Exchanges: switches (routing); by default there are 7 built-in exchanges of 4 types

Queues: message queue status

Admin: administrator list

Port:

5672: the port RabbitMQ provides for programming-language client connections

15672: port of RabbitMQ management interface

25672: port of RabbitMQ cluster

2.2 RabbitMQ quick start

2.2.1 dependency
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <groupId>com.wei</groupId>
    <artifactId>lagou-rabbitmq</artifactId>
    <version>1.0-SNAPSHOT</version>

    <properties>
        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
        <maven.compiler.source>11</maven.compiler.source>
        <maven.compiler.target>11</maven.compiler.target>
    </properties>

    <dependencies>
        <dependency>
            <groupId>com.rabbitmq</groupId>
            <artifactId>amqp-client</artifactId>
            <version>5.7.3</version>
        </dependency>
        <dependency>
            <groupId>org.slf4j</groupId>
            <artifactId>slf4j-log4j12</artifactId>
            <version>1.7.25</version>
            <scope>compile</scope>
        </dependency>
        <dependency>
            <groupId>org.apache.commons</groupId>
            <artifactId>commons-lang3</artifactId>
            <version>3.9</version>
        </dependency>
    </dependencies>

</project>
2.2.2 log dependency log4j (optional)
log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.Target=System.out
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=%d{yyyy-MM-dd HH:mm:ss} %m%n

log4j.appender.file=org.apache.log4j.FileAppender
log4j.appender.file.File=rabbitmq.log
log4j.appender.file.layout=org.apache.log4j.PatternLayout
log4j.appender.file.layout.ConversionPattern=%d{yyyy-MM-dd HH:mm:ss} %l %m%n

log4j.rootLogger=debug, stdout,file
2.2.2 create connection

Create the virtual host first
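
It can be created on the management page (Admin -> Virtual Hosts) or from the command line, for example:

[root@localhost opt]# rabbitmqctl add_vhost /lagou
[root@localhost opt]# rabbitmqctl set_permissions -p /lagou wei ".*" ".*" ".*"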

import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;

/**
 * @auther wei
 * @date 2021/9/26 20:45
 * @description Specifically connected to RabbitMQ
 */
public class ConnectionUtil {

    public static Connection getConnection() throws Exception{

        // 1. Create a connection factory
        ConnectionFactory factory = new ConnectionFactory();

        // 2. Set MQ connection information (ip,port,vhost,username,password) in the factory object
        factory.setHost("192.168.44.129");
        factory.setPort(5672);
        factory.setVirtualHost("/lagou");
        factory.setUsername("wei");
        factory.setPassword("123123");

        // 3. Obtain the connection with MQ through the factory
        Connection connection = factory.newConnection();

        return connection;
    }

    public static void main(String[] args) throws Exception {
        Connection connection = getConnection();
        System.out.println("connection = " + connection);
        connection.close();
    }
}

2.3 RabbitMQ mode

RabbitMQ provides six message models, but the sixth is actually RPC, not MQ, so we only study the first five

Online manual: https://www.rabbitmq.com/getstarted.html

Five message models are generally divided into two categories:

1 and 2 are point-to-point

3, 4 and 5 belong to publish subscribe mode (one to many)

Peer to peer mode: P2P (point to point) mode includes three roles:

Message queue, sender, receiver

Each message is sent to a specific queue from which the receiver gets the message

Keep these messages in the queue until they are consumed or time out

Features:

1. Each message has only one consumer. Once consumed, the message will not be in the queue

2. Sender and receiver do not depend on each other: the sender always sends the message to the queue whether or not the receiver is running (like sending a WeChat message: whether you look at your phone or not, I send it anyway)

3. After successfully receiving a message, the receiver needs to reply to the queue with an acknowledgement (confirm)

If you want every message sent to be processed successfully, P2P is required

Publish / subscribe mode: publish (Pub) / subscribe (Sub)

The pub/sub mode contains three roles: exchange, publisher, and subscriber

Multiple publishers send messages to the switch, and the system passes these messages to multiple subscribers

Features:

1. Each message can have multiple subscribers

2. There is a time dependency between publishers and subscribers. Subscribers to a switch must create a subscription before consuming the publisher's messages

3. In order to consume messages, subscribers must keep running; Similar to watching live TV.

This mode can be used if the message you want to send is processed by multiple consumers

2.3.1 simple mode

The following is an introduction from the official website:

​ RabbitMQ is a message broker: it accepts and forwards messages. You can think about it as a post office: when you put the mail that you want posting in a post box, you can be sure that Mr. or Ms. Mailperson will eventually deliver the mail to your recipient. In this analogy, RabbitMQ is a post box, a post office and a postman.


RabbitMQ itself only receives, stores and forwards messages, and does not process information!

Similar to the post office, it should be the recipient rather than the post office that handles letters!

[Figure: RabbitMQ mode - simple mode]

2.3.1.1 producer P
/**
 * @auther wei
 * @date 2021/9/27 13:50
 * @description Message producer
 */
public class Sender {

    public static void main(String[] args) throws Exception {
        String msg = "wei: Hello,RabbitMQ!";
        
        // 1. Get connected
        Connection connection = ConnectionUtil.getConnection();

        // 2. Create a channel from the connection
        Channel channel = connection.createChannel();

        // 3. Create message queue (1,2,3,4,5)
        /**
         * Parameter 1: queue name
         * Parameter 2: whether the data in the queue is persistent
         * Parameter 3: exclusive (if true, the queue can only be used by the current connection and is not available to others)
         * Parameter 4: auto-delete (when the number of connections using the queue drops to 0, the queue is destroyed, regardless of whether it still holds data)
         * Parameter 5: queue parameter (no parameter is empty)
         */
        channel.queueDeclare("queue1",false,false,false,null);
        
        // 4. Send messages to the specified queue (1,2,3,4)
        /**
         * Parameter 1: switch name (currently, it is a simple mode, that is, P2P mode. There is no switch, and all names are "")
         * Parameter 2: name of destination queue
         * Parameter 3: set the attribute of the message (null if there is no attribute)
         * Parameter 4: message content (only byte array is received)
         */
        channel.basicPublish("","queue1",null,msg.getBytes());
        System.out.println("send out:" + msg);
        
        // 5. Release resources
        channel.close();
        connection.close();
    }
}

After starting the producer, you can check the queue on the management UI: there will be one message that has not yet been consumed or acknowledged

2.3.1.2 consumer C
/**
 * @auther wei
 * @date 2021/9/27 14:01
 * @description Message Receiver 
 */
public class Recer {

    public static void main(String[] args) throws Exception {
        // 1. Get connected
        Connection connection = ConnectionUtil.getConnection();

        // 2. Obtain channel (channel)
        Channel channel = connection.createChannel();

        // 3. Get message from channel
        DefaultConsumer consumer = new DefaultConsumer(channel){
            @Override   // Delivery processing (recipient information, express label on package, protocol configuration, message)
            public void handleDelivery(String consumerTag, Envelope envelope, AMQP.BasicProperties properties, byte[] body) throws IOException {
                // body is the message obtained from the queue
                String s = new String(body);
                System.out.println("receive = " + s);
            }
        };

        // 4. Listening queue true: automatic message confirmation
        channel.basicConsume("queue1",true,consumer);
    }
}

Start the consumer and check the queue on the management UI again: all messages have been consumed and acknowledged, so 0 is displayed

2.3.1.3 message confirmation mechanism ACK

1. As can be seen from the case just now, once a message is consumed, it will be removed from the queue immediately

2. How does rabbitmq know that messages are received by consumers?

If the consumer receives a message but crashes with an exception before actually processing it, consumption fails; RabbitMQ has no way of knowing this, so the message is lost

Therefore RabbitMQ has an ACK mechanism: when the consumer obtains a message, it sends a receipt (ack) back to RabbitMQ to report that the message has been received

ACK (acknowledgement character): a confirmation character; in data communication it is a transmission control character sent from the receiving station to the sending station, indicating that the data has been received correctly. It is similar to an HTTP 200 status code telling us the server handled the request successfully

The whole process is like a courier delivering a package to you: your signature (and a photo) is needed as a receipt

However, this receipt ACK can be divided into two cases:

Automatic ack: after receiving the message, the consumer automatically sends the ACK (the parcel is dropped into a parcel locker)

Manual ack: after receiving the message, the ACK is not sent automatically; it must be sent by an explicit call (the parcel must be signed for in person)

How to choose between the two situations depends on the importance of the message:

If the message is not important and the loss has no impact, automatic ACK will be more convenient

If the message is very important, manual ACK is best: with automatic ACK, RabbitMQ removes the message from the queue as soon as it is delivered, and if the consumer then crashes, the message is permanently lost

3. Modify manual message confirmation

// false: manual message confirmation
channel.basicConsume("queue1", false, consumer);

The results are as follows:

[Figure: RabbitMQ message confirmation mechanism ACK]

Solution:

/**
 * @auther wei
 * @date 2021/9/27 14:01
 * @description Message Receiver 
 */
public class RecerByACK {

    public static void main(String[] args) throws Exception {
        // 1. Get connected
        Connection connection = ConnectionUtil.getConnection();

        // 2. Obtain channel (channel)
        final Channel channel = connection.createChannel();

        // 3. Get message from channel
        DefaultConsumer consumer = new DefaultConsumer(channel){
            @Override   // Delivery processing (recipient information, express label on package, protocol configuration, message)
            public void handleDelivery(String consumerTag, Envelope envelope, AMQP.BasicProperties properties, byte[] body) throws IOException {
                // body is the message obtained from the queue
                String s = new String(body);
                System.out.println("receive = " + s);
                // Manual confirmation (recipient information, whether to confirm multiple messages at the same time)
                channel.basicAck(envelope.getDeliveryTag(),false);
            }
        };

        // 4. Listening queue false: manual message confirmation
        channel.basicConsume("queue1",false,consumer);
    }
}
2.3.2 work queue mode

[Figure: RabbitMQ mode - work queue mode]

In the simple model we learned earlier, a single consumer processes the messages. If the producer produces messages too fast and the consumer's capacity is limited, messages will pile up in the queue (like unsold stock in real life)

A barbecue chef roasts 50 mutton kebabs at a time. If one person eats them, there will be more and more roasted meat kebabs. How to deal with it?

Just solicit more customers for consumption. When we run many consumer programs, the tasks in the message queue will be shared by many consumers, but one message will only be obtained by one consumer (20 people eat 100 kebabs, but one kebab can only be eaten by one person)

2.3.2.1 producer P
/**
 * @auther wei
 * @date 2021/9/27 14:23
 * @description Message producer
 */
public class Sender {

    public static void main(String[] args) throws Exception {
        // 1. Get connected
        Connection connection = ConnectionUtil.getConnection();

        // 2. Create a channel from the connection
        Channel channel = connection.createChannel();

        // 3. Create message queue (1,2,3,4,5)
        channel.queueDeclare("test_work_queue",false,false,false,null);

        // 4. Send messages to the specified queue (1,2,3,4)
        for (int i = 1; i <= 100; i++) {
            String  msg = "mutton shashlik --> " + i;
            channel.basicPublish("","test_work_queue",null,msg.getBytes());
            System.out.println("Fresh out:" + msg);
        }
        
        // 5. Release resources
        channel.close();
        connection.close();
    }
}
2.3.2.2 consumers 1
/**
 * @auther wei
 * @date 2021/9/27 14:26
 * @description Consumer 1
 */
public class Recer1 {

    static int i = 1;   // Count the number of mutton kebabs eaten

    public static void main(String[] args) throws Exception {
        // 1. Get connected
        Connection connection = ConnectionUtil.getConnection();

        // 2. Obtain channel (channel)
        final Channel channel = connection.createChannel();

        // queueDeclare() has a dual function: if the queue does not exist it is created; if it already exists it is simply used
        channel.queueDeclare("test_work_queue",false,false,false,null);

        // 3. Get message from channel
        DefaultConsumer consumer = new DefaultConsumer(channel){
            @Override   // Delivery processing (recipient information, express label on package, protocol configuration, message)
            public void handleDelivery(String consumerTag, Envelope envelope, AMQP.BasicProperties properties, byte[] body) throws IOException {
                // body is the message obtained from the queue
                String s = new String(body);
                System.out.println("[Customer 1] eat " + s + "!Total eat[" + i++ + "String!]");

                try {
                    // Simulate network delay
                    Thread.sleep(200);
                } catch (Exception e) {
                    e.printStackTrace();
                }

                // Manual confirmation (recipient information, whether to confirm multiple messages at the same time)
                channel.basicAck(envelope.getDeliveryTag(),false);
            }
        };

        // 4. Listening queue false: manual message confirmation
        channel.basicConsume("test_work_queue",false,consumer);
    }
}
2.3.2.3 consumers 2
/**
 * @auther wei
 * @date 2021/9/27 14:26
 * @description Consumer 2
 */
public class Recer2 {

    static int i = 1;   // Count the number of mutton kebabs eaten

    public static void main(String[] args) throws Exception {
        // 1. Get connected
        Connection connection = ConnectionUtil.getConnection();

        // 2. Obtain channel (channel)
        final Channel channel = connection.createChannel();

        // queueDeclare() has a dual function: if the queue does not exist it is created; if it already exists it is simply used
        channel.queueDeclare("test_work_queue",false,false,false,null);

        // 3. Get message from channel
        DefaultConsumer consumer = new DefaultConsumer(channel){
            @Override   // Delivery processing (recipient information, express label on package, protocol configuration, message)
            public void handleDelivery(String consumerTag, Envelope envelope, AMQP.BasicProperties properties, byte[] body) throws IOException {
                // body is the message obtained from the queue
                String s = new String(body);
                System.out.println("[Customer 2] eat " + s + "!Total eat[" + i++ + "String!]");

                try {
                    // Simulate network delay
                    Thread.sleep(900);
                } catch (Exception e) {
                    e.printStackTrace();
                }

                // Manual confirmation (recipient information, whether to confirm multiple messages at the same time)
                channel.basicAck(envelope.getDeliveryTag(),false);
            }
        };

        // 4. Listening queue false: manual message confirmation
        channel.basicConsume("test_work_queue",false,consumer);
    }
}

First run the two consumers so they are waiting to consume (waiting for their food), then run the producer to start producing messages (grilling skewers)

Although the consumption speed of the two consumers is inconsistent (thread sleep time), the consumption quantity is the same, consuming 50 messages each

For example, at work, developer A codes quickly and developer B codes slowly. The two develop a project together: A finishes his part in 10 days, B needs 30 days. Once A finishes his coding he has nothing to do but wait for B. That is not acceptable; we should follow the principle of "let the capable do more":

more work for the efficient, less work for the slow

See how the following official website gives solutions:

Fair dispatch

You might have noticed that the dispatching still doesn't work exactly as we want. For example, in a situation with two workers, when all odd messages are heavy and all even messages are light, one worker will be constantly busy while the other does hardly any work. RabbitMQ doesn't know anything about that and still dispatches the messages evenly.

This happens because RabbitMQ just dispatches a message when it enters the queue. It doesn't look at the number of unacknowledged messages for a consumer. It simply dispatches every n-th message to the n-th consumer blindly.

To overcome this we can use the basicQos method with prefetchCount = 1. This tells RabbitMQ not to give more than one message to a worker at a time; in other words, don't dispatch a new message to a worker until it has processed and acknowledged the previous one. Instead, dispatch it to the next worker that is not busy.

// Declare the queue (this is in the consumer; the declaration is identical to the one in the producer)
channel.queueDeclare("test_work_queue",false,false,false,null);
// Can be understood as: deliver parcels one at a time; only after one is delivered and confirmed is the next one dispatched, so the faster worker delivers more
channel.basicQos(1);

"Let the capable do more" only takes effect in combination with the manual ACK mechanism

2.3.2.4 interview question: avoid message accumulation?
  1. Work queue mode: multiple consumers listen to the same queue
  2. After receiving a message, consume it asynchronously through a thread pool (see the sketch below)
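A minimal sketch combining the two ideas above (the class name PooledRecer and the pool size of 4 are invented for illustration; it reuses ConnectionUtil and the test_work_queue queue from 2.3.2):

import com.rabbitmq.client.*;

import java.io.IOException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class PooledRecer {

    public static void main(String[] args) throws Exception {
        // 1. Get connected (reuses the ConnectionUtil from 2.2.2)
        Connection connection = ConnectionUtil.getConnection();

        // 2. Obtain channel and declare the shared queue
        final Channel channel = connection.createChannel();
        channel.queueDeclare("test_work_queue", false, false, false, null);
        channel.basicQos(1);    // at most one unacknowledged message per consumer

        // Thread pool that runs the slow business logic off the listener thread
        final ExecutorService pool = Executors.newFixedThreadPool(4);

        DefaultConsumer consumer = new DefaultConsumer(channel) {
            @Override
            public void handleDelivery(String consumerTag, Envelope envelope,
                                       AMQP.BasicProperties properties, byte[] body) {
                final String msg = new String(body);
                final long deliveryTag = envelope.getDeliveryTag();
                pool.submit(() -> {
                    try {
                        // ... slow business processing would happen here ...
                        System.out.println("processed: " + msg);
                        // Ack only after the work is really done
                        // (note: the Java client channel is not strictly thread-safe,
                        //  so in production one channel per worker thread is safer)
                        channel.basicAck(deliveryTag, false);
                    } catch (IOException e) {
                        e.printStackTrace();
                    }
                });
            }
        };

        // Manual acknowledgement, so an unprocessed message is not lost
        channel.basicConsume("test_work_queue", false, consumer);
    }
}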
2.3.3 publish subscribe mode

See the official website:

Publish/Subscribe

​ In the previous tutorial we created a work queue. The assumption behind a work queue is that each task is delivered to exactly one worker. In this part we'll do something completely different – we'll deliver a message to multiple consumers. This pattern is known as "publish/subscribe".

​ To illustrate the pattern, we're going to build a simple logging system. It will consist of two programs – the first will emit log messages and the second will receive and print them.

​ In our logging system every running copy of the receiver program will get the messages. That way we'll be able to run one receiver and direct the logs to disk; and at the same time we'll be able to run another receiver and see the logs on the screen.

​ Essentially, published log messages are going to be broadcast to all the receivers.


Douyin/Kuaishou example: many fans follow a video creator; when the creator publishes a video, every fan gets a notification.

[Figure: RabbitMQ mode - publish subscribe mode 1]

In the figure above, X is the video creator and the red queues are the fans; binding means following (subscribing)

The producer P sends the message to exchange X, and X forwards it to every queue bound to X

[Figure: RabbitMQ mode - publish subscribe mode 2]

The queues bound to X deliver the message to their consumers through channels for consumption

Throughout the process, you must first create a route

Routes are created in the producer program

Because an exchange (route) cannot store messages, if the producer sends a message to the exchange while no consumer is running, no queue exists yet and the exchange does not know where to deliver the message, so the message is lost

Sequence of running the programs:

1. Sender (creates the exchange)

2. Recer1 and Recer2 (create and bind the queues)

3. Sender (run again to send the message)

2.3.3.1 producers
/**
 * @auther wei
 * @date 2021/9/27 14:23
 * @description Message producer
 */
public class Sender {

    public static void main(String[] args) throws Exception {
        // 1. Get connected
        Connection connection = ConnectionUtil.getConnection();

        // 2. Create a channel from the connection
        Channel channel = connection.createChannel();
        
        // 3. Declare route (route name, route type)
        // fanout: do not process routing keys (just bind queues to the exchange; a message sent to the exchange is forwarded to every queue bound to it)
        channel.exchangeDeclare("test_exchange_fanout","fanout");
        
        // 4. Send messages to the specified queue (1,2,3,4)
        String  msg = "hello,hello everyone!";
        channel.basicPublish("test_exchange_fanout","",null,msg.getBytes());
        System.out.println("producer:" + msg);

        // 5. Release resources
        channel.close();
        connection.close();
    }
}
2.3.3.2 consumers 1
/**
 * @auther wei
 * @date 2021/9/27 14:01
 * @description Message consumer 1
 */
public class Recer1 {

    public static void main(String[] args) throws Exception {
        // 1. Get connected
        Connection connection = ConnectionUtil.getConnection();

        // 2. Obtain channel (channel)
        Channel channel = connection.createChannel();

        // 3. Claim queue
        channel.queueDeclare("test_exchange_fanout_queue_1",false,false,false,null);

        // 4. Binding route
        channel.queueBind("test_exchange_fanout_queue_1","test_exchange_fanout","");

        // 5. Get message from channel
        DefaultConsumer consumer = new DefaultConsumer(channel){
            @Override   // Delivery processing (recipient information, express label on package, protocol configuration, message)
            public void handleDelivery(String consumerTag, Envelope envelope, AMQP.BasicProperties properties, byte[] body) throws IOException {
                // body is the message obtained from the queue
                String s = new String(body);
                System.out.println("[Consumer 1] = " + s);
            }
        };

        // 6. Listening queue true: automatic message confirmation
        channel.basicConsume("test_exchange_fanout_queue_1",true,consumer);
    }
}
2.3.3.3 consumers 2
/**
 * @auther wei
 * @date 2021/9/27 14:01
 * @description Message consumer 2
 */
public class Recer2 {

    public static void main(String[] args) throws Exception {
        // 1. Get connected
        Connection connection = ConnectionUtil.getConnection();

        // 2. Obtain channel (channel)
        Channel channel = connection.createChannel();

        // 3. Claim queue
        channel.queueDeclare("test_exchange_fanout_queue_2",false,false,false,null);

        // 4. Binding route
        channel.queueBind("test_exchange_fanout_queue_2","test_exchange_fanout","");

        // 5. Get message from channel
        DefaultConsumer consumer = new DefaultConsumer(channel){
            @Override   // Delivery processing (recipient information, express label on package, protocol configuration, message)
            public void handleDelivery(String consumerTag, Envelope envelope, AMQP.BasicProperties properties, byte[] body) throws IOException {
                // body is the message obtained from the queue
                String s = new String(body);
                System.out.println("[Consumer 2] = " + s);
            }
        };

        // 6. Listening queue true: automatic message confirmation
        channel.basicConsume("test_exchange_fanout_queue_2",true,consumer);
    }
}
2.3.4 routing mode

[Figure: RabbitMQ mode - routing mode]

Routing will distribute messages to different queues according to the type, as shown in the figure

It can be understood as a courier company's sorting center: within one neighbourhood, the buildings on the east side are delivered by Xiao Zhang and the buildings on the west side by Xiao Wang

2.3.4.1 producers
/**
 * @auther wei
 * @date 2021/9/27 14:23
 * @description Message producer
 */
public class Sender {

    public static void main(String[] args) throws Exception {
        // 1. Get connected
        Connection connection = ConnectionUtil.getConnection();

        // 2. Create a channel from the connection
        Channel channel = connection.createChannel();
        
        // 3. Declare route (route name, route type)
        // direct: distribute messages according to routing keys
        channel.exchangeDeclare("test_exchange_direct","direct");

        // 4. Send messages to the specified queue (1,2,3,4)
        String  msg = "User registration[ userid=S101]";
        channel.basicPublish("test_exchange_direct","insert",null,msg.getBytes());
        System.out.println("[User system]: " + msg);

        // 5. Release resources
        channel.close();
        connection.close();
    }
}
2.3.4.2 consumers 1
/**
 * @auther wei
 * @date 2021/9/27 14:01
 * @description Message consumer 1
 */
public class Recer1 {

    public static void main(String[] args) throws Exception {
        // 1. Get connected
        Connection connection = ConnectionUtil.getConnection();

        // 2. Obtain channel (channel)
        Channel channel = connection.createChannel();

        // 3. Claim queue
        channel.queueDeclare("test_exchange_direct_queue_1",false,false,false,null);

        // 4. Bind route (if the type of route key is add, delete or modify, bind to this queue 1)
        channel.queueBind("test_exchange_direct_queue_1","test_exchange_direct","insert");
        channel.queueBind("test_exchange_direct_queue_1","test_exchange_direct","update");
        channel.queueBind("test_exchange_direct_queue_1","test_exchange_direct","delete");

        // 5. Get message from channel
        DefaultConsumer consumer = new DefaultConsumer(channel){
            @Override   // Delivery processing (recipient information, express label on package, protocol configuration, message)
            public void handleDelivery(String consumerTag, Envelope envelope, AMQP.BasicProperties properties, byte[] body) throws IOException {
                // body is the message obtained from the queue
                String s = new String(body);
                System.out.println("[Consumer 1] = " + s);
            }
        };

        // 6. Listening queue true: automatic message confirmation
        channel.basicConsume("test_exchange_direct_queue_1",true,consumer);
    }
}
2.3.4.3 consumers 2
/**
 * @auther wei
 * @date 2021/9/27 14:01
 * @description Message consumer 2
 */
public class Recer2 {

    public static void main(String[] args) throws Exception {
        // 1. Get connected
        Connection connection = ConnectionUtil.getConnection();

        // 2. Obtain channel (channel)
        Channel channel = connection.createChannel();

        // 3. Claim queue
        channel.queueDeclare("test_exchange_direct_queue_2",false,false,false,null);

        // 4. Bind route (if the type of route key is query, bind to this queue 2)
        channel.queueBind("test_exchange_direct_queue_2","test_exchange_direct","select");

        // 5. Get message from channel
        DefaultConsumer consumer = new DefaultConsumer(channel){
            @Override   // Delivery processing (recipient information, express label on package, protocol configuration, message)
            public void handleDelivery(String consumerTag, Envelope envelope, AMQP.BasicProperties properties, byte[] body) throws IOException {
                // body is the message obtained from the queue
                String s = new String(body);
                System.out.println("[Consumer 2] = " + s);
            }
        };

        // 6. Listening queue true: automatic message confirmation
        channel.basicConsume("test_exchange_direct_queue_2",true,consumer);
    }
}
  1. Remember the order in which the programs are run: run Sender once first
  2. Once the exchange (router) exists, run Recer1 and Recer2 so the queues are created and bound
  3. Run Sender again to send a message
2.3.5 wildcard pattern

[Figure: RabbitMQ mode - wildcard mode]

It is 90% the same as the routing mode.

The only difference is that routing keys support fuzzy matching

Match symbol

*: only one word can be matched (exactly one word, no more, no less)

#: match 0 or more words

Take a look at the case on the official website:

Q1 is bound with routing key *.orange.*; Q2 is bound with routing keys *.*.rabbit and lazy.#

Which queue will the following producer's messages be sent to?

quick.orange.rabbit # Q1 Q2
lazy.orange.elephant # Q1 Q2
quick.orange.fox # Q1
lazy.brown.fox # Q2
lazy.pink.rabbit # Q2
quick.brown.fox # nothing
orange # nothing
quick.orange.male.rabbit # nothing
2.3.5.1 producers
/**
 * @auther wei
 * @date 2021/9/27 14:23
 * @description Message producer
 */
public class Sender {

    public static void main(String[] args) throws Exception {
        // 1. Get connected
        Connection connection = ConnectionUtil.getConnection();

        // 2. Create a channel from the connection
        Channel channel = connection.createChannel();
        
        // 3. Declare route (route name, route type)
        // topic: directional distribution of fuzzy matching
        channel.exchangeDeclare("test_exchange_topic","topic");

        // 4. Send messages to the specified queue (1,2,3,4)
        String  msg = "User registration[ userid=S101]";
        channel.basicPublish("test_exchange_topic","user.register",null,msg.getBytes());
        System.out.println("[User system]: " + msg);

        // 5. Release resources
        channel.close();
        connection.close();
    }
}
2.3.5.2 consumers 1
/**
 * @auther wei
 * @date 2021/9/27 14:01
 * @description Message consumer 1
 */
public class Recer1 {

    public static void main(String[] args) throws Exception {
        // 1. Get connected
        Connection connection = ConnectionUtil.getConnection();

        // 2. Obtain channel (channel)
        Channel channel = connection.createChannel();

        // 3. Claim queue
        channel.queueDeclare("test_exchange_topic_queue_1",false,false,false,null);

        // 4. Bind route (bind user related messages)
        channel.queueBind("test_exchange_topic_queue_1","test_exchange_topic","user.#");

        // 5. Get message from channel
        DefaultConsumer consumer = new DefaultConsumer(channel){
            @Override   // Delivery processing (recipient information, express label on package, protocol configuration, message)
            public void handleDelivery(String consumerTag, Envelope envelope, AMQP.BasicProperties properties, byte[] body) throws IOException {
                // body is the message obtained from the queue
                String s = new String(body);
                System.out.println("[Consumer 1] = " + s);
            }
        };

        // 6. Listening queue true: automatic message confirmation
        channel.basicConsume("test_exchange_topic_queue_1",true,consumer);
    }
}
2.3.5.3 consumers 2
/**
 * @auther wei
 * @date 2021/9/27 14:01
 * @description Message consumer 2
 */
public class Recer2 {

    public static void main(String[] args) throws Exception {
        // 1. Get connected
        Connection connection = ConnectionUtil.getConnection();

        // 2. Obtain channel (channel)
        Channel channel = connection.createChannel();

        // 3. Claim queue
        channel.queueDeclare("test_exchange_topic_queue_2",false,false,false,null);

        // 4. Binding route (binding messages related to goods and orders)
        channel.queueBind("test_exchange_topic_queue_2","test_exchange_topic","product.#");
        channel.queueBind("test_exchange_topic_queue_2","test_exchange_topic","order.#");

        // 5. Get message from channel
        DefaultConsumer consumer = new DefaultConsumer(channel){
            @Override   // Delivery processing (recipient information, express label on package, protocol configuration, message)
            public void handleDelivery(String consumerTag, Envelope envelope, AMQP.BasicProperties properties, byte[] body) throws IOException {
                // body is the message obtained from the queue
                String s = new String(body);
                System.out.println("[Consumer 2] = " + s);
            }
        };

        // 6. Listening queue true: automatic message confirmation
        channel.basicConsume("test_exchange_topic_queue_2",true,consumer);
    }
}

2.4 persistence

Message reliability is a major feature of RabbitMQ. How does RabbitMQ avoid message loss?

The ACK confirmation mechanism of consumers can prevent consumers from losing messages

If the RabbitMQ server goes down before the consumer consumes, the message will also be lost

If you want messages to survive a broker restart, the exchange (route) and the queue must both be durable, and the message itself must be marked as persistent

2.4.1 producers
/**
 * @auther wei
 * @date 2021/9/27 14:23
 * @description Message producer
 */
public class Sender {

    public static void main(String[] args) throws Exception {
        // 1. Get connected
        Connection connection = ConnectionUtil.getConnection();

        // 2. Create a channel from the connection
        Channel channel = connection.createChannel();
        
        // 3. Declare route (route name, route type, persistence)
        // topic: directional distribution of fuzzy matching
        channel.exchangeDeclare("test_exchange_topic","topic",true);

        // 4. Send messages to the specified queue (1,2,3,4)
        String  msg = "Commodity price reduction";
        channel.basicPublish("test_exchange_topic","product.price", MessageProperties.PERSISTENT_TEXT_PLAIN,msg.getBytes());
        System.out.println("[User system]: " + msg);

        // 5. Release resources
        channel.close();
        connection.close();
    }
}
2.4.2 consumers
/**
 * @auther wei
 * @date 2021/9/27 14:01
 * @description Message consumer 1
 */
public class Recer1 {

    public static void main(String[] args) throws Exception {
        // 1. Get connected
        Connection connection = ConnectionUtil.getConnection();

        // 2. Obtain channel (channel)
        Channel channel = connection.createChannel();

        // 3. Declare queue (the second parameter is true: it means persistence is supported)
        channel.queueDeclare("test_exchange_topic_queue_1",true,false,false,null);

        // 4. Bind route (bind user related messages)
        channel.queueBind("test_exchange_topic_queue_1","test_exchange_topic","user.#");

        // 5. Get message from channel
        DefaultConsumer consumer = new DefaultConsumer(channel){
            @Override   // Delivery processing (recipient information, express label on package, protocol configuration, message)
            public void handleDelivery(String consumerTag, Envelope envelope, AMQP.BasicProperties properties, byte[] body) throws IOException {
                // body is the message obtained from the queue
                String s = new String(body);
                System.out.println("[Consumer 1] = " + s);
            }
        };

        // 6. Listening queue true: automatic message confirmation
        channel.basicConsume("test_exchange_topic_queue_1",true,consumer);
    }
}

2.5 Spring integration RabbitMQ

Of the five message models, the last one, topic (wildcard matching), is the most widely used in enterprises.

Spring AMQP is an AMQP messaging solution based on the Spring framework. It provides a templated abstraction layer for sending and receiving messages and message-driven POJO listeners, which greatly simplifies developing RabbitMQ programs.

2.5.1 producer project

Dependencies

<dependency>
    <groupId>org.springframework.amqp</groupId>
    <artifactId>spring-rabbit</artifactId>
    <version>2.0.1.RELEASE</version>
</dependency>
<dependency>
    <groupId>org.slf4j</groupId>
    <artifactId>slf4j-log4j12</artifactId>
    <version>1.7.25</version>
    <scope>compile</scope>
</dependency>
<dependency>
    <groupId>org.apache.commons</groupId>
    <artifactId>commons-lang3</artifactId>
    <version>3.9</version>
</dependency>

spring-rabbitmq-producer.xml

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:rabbit="http://www.springframework.org/schema/rabbit"
       xsi:schemaLocation="
        http://www.springframework.org/schema/beans
        http://www.springframework.org/schema/beans/spring-beans.xsd
        http://www.springframework.org/schema/rabbit
        http://www.springframework.org/schema/rabbit/spring-rabbit.xsd">

    <!--1.Configure connection factory-->
    <rabbit:connection-factory id="connectionFactory" 
                               host="192.168.44.129" 
                               port="5672" 
                               username="wei" 
                               password="123123" 
                               virtual-host="/lagou"/>

    <!--2.Configure queue-->
    <rabbit:queue name="test_spring_queue_1"/>
    
    <!--3. Configure rabbitAdmin: used mainly for queue management in Java code: creating, binding and deleting queues and exchanges, sending messages, etc.-->
    <rabbit:admin connection-factory="connectionFactory"/>
    
    <!--4. Configure the exchange (topic type)-->
    <rabbit:topic-exchange name="spring_topic_exchange">
        <rabbit:bindings>
            <!--Bind queue-->
            <rabbit:binding pattern="msg.#" queue="test_spring_queue_1"></rabbit:binding>
        </rabbit:bindings>
    </rabbit:topic-exchange>
    
    <!--5. Configure the JSON message converter-->
    <bean id="jsonMessageConverter" class="org.springframework.amqp.support.converter.Jackson2JsonMessageConverter"/>
    
    <!--6. Configure the RabbitMQ template-->
    <rabbit:template id="rabbitTemplate" 
                     connection-factory="connectionFactory" 
                     exchange="spring_topic_exchange" 
                     message-converter="jsonMessageConverter"/>
    
</beans>

Send a message

/**
 * @auther wei
 * @date 2021/9/27 20:34
 * @description Message producer
 */
public class Sender {

    public static void main(String[] args) {

        // 1. Create spring container
        ClassPathXmlApplicationContext context = new ClassPathXmlApplicationContext("spring/spring-rabbitmq-producer.xml");

        // 2. Get the rabbit template object from the container
        RabbitTemplate rabbitTemplate = context.getBean(RabbitTemplate.class);

        // 3. Send message
        Map<String, String> map = new HashMap<>();
        map.put("name","Microenterprise");
        map.put("email","15952037019@163.com");
        rabbitTemplate.convertAndSend("msg.user",map);
        context.close();
    }
}
2.5.2 consumer project

The dependencies are the same as in the producer project.

spring-rabbitmq-consumer.xml

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:rabbit="http://www.springframework.org/schema/rabbit"
       xmlns:context="http://www.springframework.org/schema/context"
       xsi:schemaLocation="
       	http://www.springframework.org/schema/beans
        http://www.springframework.org/schema/beans/spring-beans.xsd
        http://www.springframework.org/schema/rabbit
        http://www.springframework.org/schema/rabbit/spring-rabbit.xsd
        http://www.springframework.org/schema/context
        http://www.springframework.org/schema/context/spring-context.xsd">
    <!-- 1. configure connections -->
    <rabbit:connection-factory
                               id="connectionFactory"
                               host="192.168.204.141"
                               port="5672"
                               username="laosun"
                               password="123123"
                               virtual-host="/lagou"
                               />
    <!-- 2. Configure queue -->
    <rabbit:queue name="test_spring_queue_1"/>
    <!-- 3. Configure rabbitAdmin -->
    <rabbit:admin connection-factory="connectionFactory"/>
    <!-- 4. Spring IoC annotation scan package -->
    <context:component-scan base-package="listener"/>
    <!-- 5. Configure listening -->
    <rabbit:listener-container connection-factory="connectionFactory">
        <rabbit:listener ref="consumerListener" queue-names="test_spring_queue_1" />
    </rabbit:listener-container>
</beans>

consumer

The MessageListener interface is used by the Spring container to process messages after they are received.

To process messages with your own class, implement this interface and override the onMessage() method.

When the Spring container receives a message, it automatically hands it to onMessage() for processing.

/**
 * @auther wei
 * @date 2021/9/28 11:47
 * @description Consumer listening queue
 */
@Component
public class ConsumerListener implements MessageListener {

    // ObjectMapper is Jackson's core class for JSON serialization and deserialization
    private static final ObjectMapper MEPPER = new ObjectMapper();

    @Override
    public void onMessage(Message message) {
        try {
            // Convert message object to json
            JsonNode jsonNode = MEPPER.readTree(message.getBody());
            String name = jsonNode.get("name").asText();
            JsonNode email = jsonNode.get("email");
            System.out.println("Get from queue:[" + name +"Your email address is:" + email + "]");
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}

Start project

/**
 * @auther wei
 * @date 2021/9/28 12:10
 * @description Run project
 */
public class TestRunner {

    public static void main(String[] args) throws Exception {
        // Get container
        ClassPathXmlApplicationContext context = new ClassPathXmlApplicationContext("spring/spring-rabbitmq-consumer.xml");
        // Let the program run consistently without terminating
        System.in.read();
    }
}

2.6 message success confirmation mechanism

In the actual scenario, the messages sent by some producers must be successfully sent to the message queue, so how to ensure successful delivery?

Transaction mechanism

Release confirmation mechanism

2.6.1 transaction mechanism

AMQP protocol provides a way to ensure the successful delivery of messages. The transactional mode is enabled through the channel

The three methods of channel are used to send the message in transaction mode. If the sending fails, the transaction is rolled back through exception handling to ensure the successful delivery of the message

channel.txSelect(): start transaction

channel.txCommit(): commit transaction

channel.txRollback(): rollback transaction

Spring encapsulates these three methods internally, so here we demonstrate them with the native client code.

2.6.1.1 producers
/**
 * @auther wei
 * @date 2021/9/27 14:23
 * @description Message producer
 */
public class Sender {

    public static void main(String[] args) throws Exception {
        // 1. Get connected
        Connection connection = ConnectionUtil.getConnection();

        // 2. Create a channel from the connection
        Channel channel = connection.createChannel();

        // 3. Declare exchange (exchange name, exchange type)
        // topic: wildcard (fuzzy) matching routing
        channel.exchangeDeclare("test_transaction","topic");

        // 4. Start transaction
        channel.txSelect();

        try {
            // 5. Send messages (exchange, routing key, properties, message body)
            channel.basicPublish("test_transaction", "product.price", null, "Commodity 1-Price reduction".getBytes());
            System.out.println(1 / 0);
            channel.basicPublish("test_transaction", "product.price", null, "Commodity 2-Price reduction".getBytes());

            // 6. Commit transaction (successful together)
            channel.txCommit();
            System.out.println("[ producer ]: All messages have been sent!");
        }catch (Exception e){
            System.out.println("Cancel all messages!");
            channel.txRollback();   // Transaction rollback (failed together)
            e.printStackTrace();
        }finally {
            // 7. Release resources
            channel.close();
            connection.close();
        }
    }
}
2.6.1.2 consumers
/**
 * @auther wei
 * @date 2021/9/27 14:01
 * @description Message consumer
 */
public class Recer {

    public static void main(String[] args) throws Exception {
        // 1. Get connected
        Connection connection = ConnectionUtil.getConnection();

        // 2. Obtain channel (channel)
        Channel channel = connection.createChannel();

        // 3. Declare queue (the second parameter indicates whether the queue is durable; false here)
        channel.queueDeclare("test_transaction_queue",false,false,false,null);

        // 4. Bind route (bind user related messages)
        channel.queueBind("test_transaction_queue","test_transaction","product.#");

        // 5. Get message from channel
        DefaultConsumer consumer = new DefaultConsumer(channel){
            @Override   // Delivery processing (recipient information, express label on package, protocol configuration, message)
            public void handleDelivery(String consumerTag, Envelope envelope, AMQP.BasicProperties properties, byte[] body) throws IOException {
                // body is the message obtained from the queue
                String s = new String(body);
                System.out.println("[Consumer] = " + s);
            }
        };

        // 6. Listening queue true: automatic message confirmation
        channel.basicConsume("test_transaction_queue",true,consumer);
    }
}
2.6.2 Confirm release confirmation mechanism

To guarantee successful delivery, AMQP provides the transaction mechanism at the protocol level, but using transactions greatly reduces message throughput.

According to the instructor's test on a local SSD, sending 100,000 messages took about 8 seconds without transactions; with transactions enabled it took nearly 310 seconds, more than 30 times slower.

The official documentation makes the same point:

Using standard AMQP 0-9-1, the only way to guarantee that a message isn't lost is by using transactions – make the channel transactional then for each message or set of messages publish, commit. In this case, transactions are unnecessarily heavyweight and decrease throughput by a factor of 250. To remedy this, a confirmation mechanism was introduced. It mimics the consumer acknowledgements mechanism already present in the protocol.

Key point: enabling transactions can decrease throughput by a factor of up to 250.

Is there a more efficient solution? Yes: Confirm mode.

Why are transactions so inefficient? Imagine 10 messages where the first 9 succeed and the 10th fails: all of them have to be rolled back and re-sent, which is extremely wasteful.

Confirm mode instead simply re-sends the 10th message, so all 10 are delivered. (A minimal native-client sketch follows, before the Spring configuration.)
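Below is a minimal sketch of Confirm mode with the native Java client (the exchange name test_confirm and the class name are made up for illustration; ConnectionUtil is the same helper used in the earlier examples). channel.confirmSelect() switches the channel into confirm mode, and waitForConfirmsOrDie() blocks until the broker acknowledges everything published so far; note that confirm mode and transaction mode cannot be enabled on the same channel.

/**
 * A minimal native-client sketch of Confirm mode (illustrative names).
 */
public class ConfirmSender {

    public static void main(String[] args) throws Exception {
        // 1. Get connected
        Connection connection = ConnectionUtil.getConnection();

        // 2. Create a channel from the connection
        Channel channel = connection.createChannel();

        // 3. Declare a topic exchange for the test
        channel.exchangeDeclare("test_confirm", "topic");

        // 4. Switch the channel into confirm mode
        channel.confirmSelect();

        // 5. Publish a message
        channel.basicPublish("test_confirm", "product.price", null, "Commodity price reduction".getBytes());

        // 6. Block until the broker confirms every message published so far;
        //    throws an exception if any message is nacked or the timeout (ms) elapses
        channel.waitForConfirmsOrDie(5000);
        System.out.println("[Producer]: message confirmed by the broker");

        // 7. Release resources
        channel.close();
        connection.close();
    }
}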

2.6.2.1 application in spring

spring-rabbitmq-producer.xml

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:rabbit="http://www.springframework.org/schema/rabbit"
       xsi:schemaLocation="
        http://www.springframework.org/schema/beans
        http://www.springframework.org/schema/beans/spring-beans.xsd
        http://www.springframework.org/schema/rabbit
        http://www.springframework.org/schema/rabbit/spring-rabbit.xsd">

    <!--1.Configure connection factory-->
    <rabbit:connection-factory id="connectionFactory"
                               host="192.168.44.129"
                               port="5672"
                               username="wei"
                               password="123123"
                               virtual-host="/lagou"
                               publisher-confirms="true"/>

    <!--2.Configure queue-->
    <rabbit:queue name="test_spring_queue_1"/>

    <!--3. Configure rabbitAdmin: used mainly for queue management in Java code: creating, binding and deleting queues and exchanges, sending messages, etc.-->
    <rabbit:admin connection-factory="connectionFactory"/>

    <!--4. Configure the exchange (topic type)-->
    <rabbit:topic-exchange name="spring_topic_exchange">
        <rabbit:bindings>
            <!--Bind queue-->
            <rabbit:binding pattern="msg.#" queue="test_spring_queue_1"></rabbit:binding>
        </rabbit:bindings>
    </rabbit:topic-exchange>

    <!--5. Configure the JSON message converter-->
    <bean id="jsonMessageConverter" class="org.springframework.amqp.support.converter.Jackson2JsonMessageConverter"/>

    <!--6. Configure the RabbitMQ template-->
    <rabbit:template id="rabbitTemplate"
                     connection-factory="connectionFactory"
                     exchange="spring_topic_exchange"
                     message-converter="jsonMessageConverter"
                     confirm-callback="messageConfirm"/>

    <!--7. Configure the confirm-callback handler class-->
    <bean id="messageConfirm" class="confirm.MessageConfirm"/>

</beans>

Message confirmation processing class

/**
 * @auther wei
 * @date 2021/9/28 16:04
 * @description Message confirmation processing
 */
public class MessageConfirm implements RabbitTemplate.ConfirmCallback {

    /**
     * @param correlationData   Data object related to the message (encapsulating the unique id of the message)
     * @param b whether the broker acknowledged (confirmed) the message
     * @param s the failure reason when the message was not acknowledged
     */
    @Override
    public void confirm(CorrelationData correlationData, boolean b, String s) {
        if (b == true){
            System.out.println("Message confirmation succeeded!");
        }else {
            System.out.println("xxxxx Message confirmation failed xxxxx");
            //System.out.println(s);
            // If this message must be sent to the queue, for example, the following order message, we can use message reissue
            // 1. Use recursion (limit the number of recursion)
            // 2.redis + timed task (jdk timer, or timed task framework Quartz)
        }
    }
}
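The re-publish strategies mentioned in the comments above are not implemented in the course code. The following is only a rough sketch of option 1, a bounded retry: it assumes the sender caches the routing key and payload under the CorrelationData id before publishing, so the callback can re-send them; the PendingMessage class, the pending map and MAX_RETRY are all hypothetical.

/**
 * A rough sketch of bounded re-publish on confirm failure (illustrative, not from the course).
 */
public class RetryingMessageConfirm implements RabbitTemplate.ConfirmCallback {

    private static final int MAX_RETRY = 3;

    private final RabbitTemplate rabbitTemplate;

    // messageId -> (routingKey, payload, attempts), cached by the sender before publishing
    private final Map<String, PendingMessage> pending = new ConcurrentHashMap<>();

    public RetryingMessageConfirm(RabbitTemplate rabbitTemplate) {
        this.rabbitTemplate = rabbitTemplate;
    }

    @Override
    public void confirm(CorrelationData correlationData, boolean ack, String cause) {
        if (ack || correlationData == null) {
            return;     // confirmed (or nothing to correlate), nothing to do
        }
        PendingMessage msg = pending.get(correlationData.getId());
        if (msg == null) {
            return;
        }
        if (msg.attempts++ < MAX_RETRY) {
            // Re-publish with the same correlation id, so another failure comes back here
            rabbitTemplate.convertAndSend(msg.routingKey, msg.payload, correlationData);
        } else {
            System.out.println("Giving up after " + MAX_RETRY + " retries: " + cause);
        }
    }

    private static class PendingMessage {
        String routingKey;
        Object payload;
        int attempts;
    }
}

Option 2 (Redis plus a timed task) would instead store the unconfirmed message in Redis and let a scheduled job re-send anything that has not been confirmed after a while.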

log4j.properties

log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.Target=System.out
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=%d{yyyy-MM-dd HH:mm:ss} %m%n

log4j.appender.file=org.apache.log4j.FileAppender
log4j.appender.file.File=rabbitmq.log
log4j.appender.file.layout=org.apache.log4j.PatternLayout
log4j.appender.file.layout.ConversionPattern=%d{yyyy-MM-dd HH:mm:ss} %l%m%n

log4j.rootLogger=debug, stdout,file

Send a message

/**
 * @auther wei
 * @date 2021/9/27 20:34
 * @description Message producer
 */
public class Sender {

    public static void main(String[] args) {

        // 1. Create spring container
        ClassPathXmlApplicationContext context = new ClassPathXmlApplicationContext("spring/spring-rabbitmq-producer.xml");

        // 2. Get the rabbit template object from the container
        RabbitTemplate rabbitTemplate = context.getBean(RabbitTemplate.class);

        // 3. Send message
        Map<String, String> map = new HashMap<>();
        map.put("name","Lv Bu");
        map.put("email","6666@163.com");
        rabbitTemplate.convertAndSend("lalala","msg.user",map);
        System.out.println("Message sent...");
        context.close();
    }
}

2.7 consumer-side rate limiting

If you have walked in the desert for three days without water, gulping it all down at once can kill you; you have to drink slowly.

Suppose our RabbitMQ server has a backlog of thousands of unprocessed messages and we then start a single consumer client: a huge number of messages is pushed to it in an instant, far more than one client can process at once, and it collapses.

So when the data volume is very large, limiting the flow on the producer side is not practical: concurrency is sometimes very high and sometimes very low, it is driven by user behavior, and we cannot restrict it.

Therefore we limit the rate on the consumer side to keep the consumer stable.

By analogy: car manufacturers keep producing cars even when 4S dealerships sit on unsold inventory, yet they do not cut prices; holding the price steady keeps the market value stable. If they simply produced and sold as much as possible regardless of price, the market would become chaotic, so a steady pace is used to keep consumers' purchases stable and development smooth.

RabbitMQ provides a QoS (Quality of Service) feature for this.

That is, with automatic acknowledgement turned off, once a certain number of messages remain unacknowledged, no new messages are delivered to the consumer.
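With the native client, the same throttling comes from channel.basicQos(...) combined with manual acknowledgements. A minimal sketch (the class name is illustrative; the queue name reuses test_spring_queue_1 from above):

public class QosRecer {

    public static void main(String[] args) throws Exception {
        // 1. Get connected
        Connection connection = ConnectionUtil.getConnection();

        // 2. Obtain channel
        Channel channel = connection.createChannel();

        // 3. At most 3 unacknowledged messages are pushed to this consumer at a time
        channel.basicQos(3);

        // 4. Get message from channel
        DefaultConsumer consumer = new DefaultConsumer(channel){
            @Override
            public void handleDelivery(String consumerTag, Envelope envelope, AMQP.BasicProperties properties, byte[] body) throws IOException {
                System.out.println("[QoS consumer] = " + new String(body));
                // Manual ack; only then does the broker push messages beyond the prefetch window
                channel.basicAck(envelope.getDeliveryTag(), false);
            }
        };

        // 5. Listen to the queue; autoAck must be false, otherwise basicQos has no effect
        channel.basicConsume("test_spring_queue_1", false, consumer);
    }
}

The Spring configuration below achieves the same thing declaratively with prefetch and acknowledge="manual".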

The producer uses a loop to issue multiple messages

public class Sender {

    public static void main(String[] args) {

        // 1. Create spring container
        ClassPathXmlApplicationContext context = new ClassPathXmlApplicationContext("spring/spring-rabbitmq-producer.xml");

        // 2. Get the rabbit template object from the container
        RabbitTemplate rabbitTemplate = context.getBean(RabbitTemplate.class);

        // 3. Send message
        Map<String, String> map = new HashMap<>();
        map.put("name","Lv Bu");
        map.put("email","6666@163.com");

        for (int i = 1; i <= 10; i++) {
            rabbitTemplate.convertAndSend("msg.user",map);
            System.out.println("Message sent...");
        }

        context.close();
    }
}

This produces a backlog of 10 unprocessed messages.

[Figure: RabbitMQ consumer-side rate limiting]

Current limiting treatment by consumers

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:rabbit="http://www.springframework.org/schema/rabbit"
       xmlns:context="http://www.springframework.org/schema/context"
       xsi:schemaLocation="
        http://www.springframework.org/schema/beans
        http://www.springframework.org/schema/beans/spring-beans.xsd
        http://www.springframework.org/schema/rabbit
        http://www.springframework.org/schema/rabbit/spring-rabbit.xsd
        http://www.springframework.org/schema/context
        http://www.springframework.org/schema/context/spring-context.xsd">

    <!--1.Configure connection factory-->
    <rabbit:connection-factory id="connectionFactory" 
                               host="192.168.44.129" 
                               port="5672" 
                               username="wei" 
                               password="123123" 
                               virtual-host="/lagou"/>

    <!--2.Configure queue-->
    <rabbit:queue name="test_spring_queue_1"/>

    <!--3. Configure rabbitAdmin: used mainly for queue management in Java code: creating, binding and deleting queues and exchanges, sending messages, etc.-->
    <rabbit:admin connection-factory="connectionFactory"/>

    <!--4. Spring IoC annotation scan package-->
    <context:component-scan base-package="listener"/>

    <!--5.Configure listening-->
    <!-- prefetch="3" Number of messages consumed at one time. Will tell RabbitMQ Don't push more than to one consumer at the same time N A message, once there is N No news yet ack,Then consumer Will block until the message is ack-->
    <!-- acknowledge-mode: manual Manual confirmation-->
    <rabbit:listener-container connection-factory="connectionFactory" prefetch="3" acknowledge="manual">
        <rabbit:listener ref="consumerListener" queue-names="test_spring_queue_1"/>
    </rabbit:listener-container>
    
</beans>
/**
 * @auther wei
 * @date 2021/9/28 11:47
 * @description Consumer listening queue
 * AbstractAdaptableMessageListener An abstract base class used to process messages after the spring container receives them
 */
@Component
public class ConsumerListener extends AbstractAdaptableMessageListener {

    // ObjectMapper is Jackson's core class for JSON serialization and deserialization
    private static final ObjectMapper MEPPER = new ObjectMapper();

    @Override
    public void onMessage(Message message, Channel channel) throws Exception {
        try {
            // Convert message object to json
            JsonNode jsonNode = MEPPER.readTree(message.getBody());
            String name = jsonNode.get("name").asText();
            JsonNode email = jsonNode.get("email");
            System.out.println("Get from queue:[" + name +"Your email address is:" + email + "]");

            // Manual confirmation message (parameter 1, parameter 2)
            /**
             * Parameter 1: the unique ID of the message delivered by RabbitMQ to the channel. This ID is a monotonically increasing positive integer
             * Parameter 2: in order to reduce network traffic, manual confirmation can be processed in batch. When this parameter is true, all messages less than or equal to msgId value can be confirmed at one time
             */
            long msgId = message.getMessageProperties().getDeliveryTag();
            channel.basicAck(msgId,true);

            Thread.sleep(3000);
            System.out.println("After 3 seconds of rest, continue to receive messages!");
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}

Acknowledge receipt of 3 messages at a time

[Figure: RabbitMQ consumer-side rate limiting 02]

2.8 expiration time TTL

Time To Live: how long a message may live, in milliseconds.

Within this period the message can be consumed normally; after it expires, the message is removed from the queue automatically (it becomes a dead message and, if a dead letter exchange is configured, is routed to the dead letter queue instead of being consumed).

RabbitMQ can set a TTL on messages and on queues.

Set it on the queue, and every message in that queue has the same expiration time.

Set it per message, and each message can have its own TTL (finer-grained). A native-client sketch of both options follows; the Spring configuration comes after.
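For reference, here are the same two options with the native Java client (the class and queue names are illustrative): a queue-level TTL is passed as the x-message-ttl argument of queueDeclare, and a per-message TTL via the expiration property.

public class TtlSender {

    public static void main(String[] args) throws Exception {
        // 1. Get connected
        Connection connection = ConnectionUtil.getConnection();

        // 2. Create a channel from the connection
        Channel channel = connection.createChannel();

        // 3. Option 1: queue-level TTL, every message in this queue expires after 5 seconds
        Map<String, Object> arguments = new HashMap<>();
        arguments.put("x-message-ttl", 5000);
        channel.queueDeclare("test_ttl_queue_native", false, false, false, arguments);

        // 4. Option 2: per-message TTL, only this message expires after 3 seconds
        AMQP.BasicProperties properties = new AMQP.BasicProperties.Builder()
                .expiration("3000")
                .build();

        // 5. Publish to the queue through the default exchange
        channel.basicPublish("", "test_ttl_queue_native", properties, "Test expiration time".getBytes());

        // 6. Release resources
        channel.close();
        connection.close();
    }
}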

2.8.1 setting queue TTL

spring-rabbitmq-producer.xml

<!--2.Reconfigure a queue and set the expiration time for the messages in the queue-->
<rabbit:queue name="test_spring_queue_ttl" auto-declare="true">
    <rabbit:queue-arguments>
        <entry key="x-message-ttl" value-type="long" value="5000"/>
    </rabbit:queue-arguments>
</rabbit:queue>

[Figure: RabbitMQ TTL - setting a queue TTL]

After 5 seconds, the message is automatically deleted

[Figure: RabbitMQ TTL - setting a queue TTL 02]

2.8.2 setting message TTL

To set the TTL of an individual message, you only need to specify it when creating the message to send.

<!--2.Configure queue-->
<rabbit:queue name="test_spring_queue_ttl_2"/>
public class Sender2 {

    public static void main(String[] args) {

        // 1. Create spring container
        ClassPathXmlApplicationContext context = new ClassPathXmlApplicationContext("spring/spring-rabbitmq-producer.xml");

        // 2. Get the rabbit template object from the container
        RabbitTemplate rabbitTemplate = context.getBean(RabbitTemplate.class);

        // 3. Create the configuration object of the message
        MessageProperties properties = new MessageProperties();

        // 4. Set the expiration time to 3 seconds
        properties.setExpiration("3000");

        // 5. Create message
        Message message = new Message("Test expiration time".getBytes(),properties);

        // 6. Send message
        rabbitTemplate.convertAndSend("msg.user",message);
        System.out.println("Message sent...");

        context.close();
    }
}

If a TTL is set on both the queue and the message, the smaller value takes effect.

2.9 dead letter queue

DLX (Dead Letter Exchange): the dead letter exchange, sometimes described as a dead letter mailbox. When messages in a queue become dead messages because, for some reason, they are not consumed in time, they are re-published to the DLX, and a queue bound to the DLX is called a "dead letter queue".

Reasons a message becomes a dead message:

The message is rejected (basic.reject / basic.nack) with requeue = false

The message's TTL expires before it is consumed

The maximum queue length is exceeded

[Figure: RabbitMQ dead letter queue]
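The Spring XML below declares these settings; for reference, the same arguments with the native Java client are just x-... entries in the queueDeclare arguments map (the class and queue names here are illustrative; x-dead-letter-routing-key is added so dead-lettered messages match the dlx_queue binding):

public class DlxDeclare {

    public static void main(String[] args) throws Exception {
        // 1. Get connected
        Connection connection = ConnectionUtil.getConnection();

        // 2. Create a channel from the connection
        Channel channel = connection.createChannel();

        // 3. Dead letter exchange and the queue that collects dead messages
        channel.exchangeDeclare("dlx_exchange", "direct");
        channel.queueDeclare("dlx_queue", false, false, false, null);
        channel.queueBind("dlx_queue", "dlx_exchange", "dlx_ttl");

        // 4. Normal queue: expired or overflowing messages are re-published to dlx_exchange
        Map<String, Object> arguments = new HashMap<>();
        arguments.put("x-message-ttl", 6000);                    // messages time out after 6 seconds
        arguments.put("x-max-length", 2);                        // the queue holds at most 2 messages
        arguments.put("x-dead-letter-exchange", "dlx_exchange"); // where dead messages go
        arguments.put("x-dead-letter-routing-key", "dlx_ttl");   // routing key used when dead-lettering
        channel.queueDeclare("test_ttl_queue_native_dlx", false, false, false, arguments);

        // 5. Release resources
        channel.close();
        connection.close();
    }
}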

spring-rabbitmq-producer-dlx.xml

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:rabbit="http://www.springframework.org/schema/rabbit"
       xsi:schemaLocation="
        http://www.springframework.org/schema/beans
        http://www.springframework.org/schema/beans/spring-beans.xsd
        http://www.springframework.org/schema/rabbit
        http://www.springframework.org/schema/rabbit/spring-rabbit.xsd">

    <!--1.Configure connection factory-->
    <rabbit:connection-factory id="connectionFactory"
                               host="192.168.44.129"
                               port="5672"
                               username="wei"
                               password="123123"
                               virtual-host="/lagou"
                               publisher-confirms="true"/>


    <!--3. Configure rabbitAdmin: used mainly for queue management in Java code: creating, binding and deleting queues and exchanges, sending messages, etc.-->
    <rabbit:admin connection-factory="connectionFactory"/>

    <!--6. Configure the RabbitMQ template-->
    <rabbit:template id="rabbitTemplate" connection-factory="connectionFactory" exchange="my_exchange"/>

    <!--####################################################################################-->
    <!--Declare dead letter queue-->
    <rabbit:queue name="dlx_queue"/>

    <!--Declare the direct dead letter exchange-->
    <rabbit:direct-exchange name="dlx_exchange">
        <rabbit:bindings>
            <rabbit:binding key="dlx_ttl" queue="dlx_queue"/>
            <rabbit:binding key="dlx_max" queue="dlx_queue"/>
        </rabbit:bindings>
    </rabbit:direct-exchange>

    <!--Declare the direct exchange for test messages-->
    <rabbit:direct-exchange name="my_exchange">
        <rabbit:bindings>
            <rabbit:binding key="dlx_ttl" queue="test_ttl_queue"/>
            <rabbit:binding key="dlx_max" queue="test_max_queue"/>
        </rabbit:bindings>
    </rabbit:direct-exchange>

    <!--Declare the queue for testing message expiration-->
    <rabbit:queue name="test_ttl_queue">
        <rabbit:queue-arguments>
            <!--1.Set the expiration time of the queue TTL-->
            <entry key="x-message-ttl" value-type="long" value="6000"/>
            <!--2.If the message times out, deliver it to the dead letter exchange-->
            <entry key="x-dead-letter-exchange" value="dlx_exchange"/>
        </rabbit:queue-arguments>
    </rabbit:queue>

    <!--Declare the queue for testing maximum length-->
    <rabbit:queue name="test_max_queue">
        <rabbit:queue-arguments>
            <!--1.Set the maximum queue length (this queue can hold at most 2 messages)-->
            <entry key="x-max-length" value-type="long" value="2"/>
            <!--2.If the queue length is exceeded, deliver the overflowing message to the dead letter exchange-->
            <entry key="x-dead-letter-exchange" value="dlx_exchange"/>
        </rabbit:queue-arguments>
    </rabbit:queue>

</beans>

Send a message to test

public class SenderDLX {

    public static void main(String[] args) {

        // 1. Create spring container
        ClassPathXmlApplicationContext context = new ClassPathXmlApplicationContext("spring/spring-rabbitmq-producer-dlx.xml");

        // 2. Get the rabbit template object from the container
        RabbitTemplate rabbitTemplate = context.getBean(RabbitTemplate.class);

        // 3. Send message
        // rabbitTemplate.convertAndSend("dlx_ttl","Test timeout".getBytes());
        
        rabbitTemplate.convertAndSend("dlx_max","Test length 1".getBytes());
        rabbitTemplate.convertAndSend("dlx_max","Test length 2".getBytes());
        rabbitTemplate.convertAndSend("dlx_max","Test length 3".getBytes());

        System.out.println("Message sent...");

        context.close();
    }
}

2.10 delay queue

Delay queue: a combination of TTL + dead letter queue

The dead letter queue is just a special queue in which messages can still be consumed

E-commerce development often involves closing unpaid orders after a delay; a delay queue solves exactly this problem.

2.10.1 producers

Reuse the timeout test from the dead letter queue case above, treating the queue TTL as the order-closing delay.

public class SenderDLX {

    public static void main(String[] args) {

        // 1. Create spring container
        ClassPathXmlApplicationContext context = new ClassPathXmlApplicationContext("spring/spring-rabbitmq-producer-dlx.xml");

        // 2. Get the rabbit template object from the container
        RabbitTemplate rabbitTemplate = context.getBean(RabbitTemplate.class);

        // 3. Send message
        rabbitTemplate.convertAndSend("dlx_ttl","Timeout, close order".getBytes());

        System.out.println("Message sent...");

        context.close();
    }
}
2.10.2 consumers

spring-rabbitmq-consumer.xml

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:rabbit="http://www.springframework.org/schema/rabbit"
       xmlns:context="http://www.springframework.org/schema/context"
       xsi:schemaLocation="
                           http://www.springframework.org/schema/beans
                           http://www.springframework.org/schema/beans/spring-beans.xsd
                           http://www.springframework.org/schema/rabbit
                           http://www.springframework.org/schema/rabbit/spring-rabbit.xsd
                           http://www.springframework.org/schema/context
                           http://www.springframework.org/schema/context/spring-context.xsd">

    <!--1.Configure connection factory-->
    <rabbit:connection-factory id="connectionFactory" host="192.168.44.129" port="5672" username="wei" password="123123" virtual-host="/lagou"/>

    <!--3.to configure rabbitAdmin:Mainly used in java The queue management in the code is used to create, bind, delete queues and switches, send messages, etc-->
    <rabbit:admin connection-factory="connectionFactory"/>

    <!--4..springIOC Annotation scan package-->
    <context:component-scan base-package="listener"/>

    <!--5.Listen for dead letter queue-->
    <rabbit:listener-container connection-factory="connectionFactory" prefetch="3" acknowledge="manual">
        <rabbit:listener ref="consumerListener" queue-names="dlx_queue" />
    </rabbit:listener-container>

</beans>
@Component
public class ConsumerListener extends AbstractAdaptableMessageListener {

    // ObjectMapper is Jackson's core class for JSON serialization and deserialization
    private static final ObjectMapper MEPPER = new ObjectMapper();

    @Override
    public void onMessage(Message message, Channel channel) throws Exception {
        try {
            String str = new String(message.getBody());
            System.out.println("str = " + str);
            
            // Manual confirmation message (parameter 1, parameter 2)
            /**
             * Parameter 1: the unique ID of the message delivered by RabbitMQ to the channel. This ID is a monotonically increasing positive integer
             * Parameter 2: in order to reduce network traffic, manual confirmation can be processed in batch. When this parameter is true, all messages less than or equal to msgId value can be confirmed at one time
             */
            /*
            long msgId = message.getMessageProperties().getDeliveryTag();
            channel.basicAck(msgId,true);
            
            Thread.sleep(3000);
            System.out.println("After 3 seconds of rest, continue to receive messages! ");
            */
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
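The listener above only prints the dead-lettered message. In a real delayed-order-closing scenario it would look something like the following sketch; the order id in the message body and the closeOrderIfUnpaid(...) step are hypothetical and not part of this course's code.

/**
 * Hypothetical listener on the dead letter queue for closing unpaid orders.
 */
@Component
public class OrderCloseListener extends AbstractAdaptableMessageListener {

    @Override
    public void onMessage(Message message, Channel channel) throws Exception {
        // Assume the producer put the order id in the message body when the order was created
        String orderId = new String(message.getBody());

        // A real project would look the order up here and close it only if it is still unpaid,
        // e.g. orderService.closeOrderIfUnpaid(orderId) -- that service is imaginary
        System.out.println("Order " + orderId + " payment timed out, closing the order");

        // Manually acknowledge so the dead letter message is removed from dlx_queue
        channel.basicAck(message.getMessageProperties().getDeliveryTag(), false);
    }
}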

3.RabbitMQ cluster

RabbitMQ has three deployment modes, two of which are cluster modes. Details are as follows:

Single mode: a single RabbitMQ node, no clustering.

Normal mode: the default cluster mode. Take two nodes (A and B) as an example:

When a message enters a queue on node A and the consumer connects to node B, RabbitMQ creates a temporary channel between A and B, takes the message entity out of A and transmits it through the channel to B, which delivers it to the consumer.

If A fails, B cannot obtain the message entities that have not yet been consumed on node A.

If the messages were persisted, they can only be consumed after node A recovers.

If they were not persisted, they are lost.

Mirror mode: the classic mirrored-queue mode, which guarantees that data is not lost.

High reliability is achieved mainly through data synchronization, usually across 2 - 3 nodes.

For a 100% data reliability solution, three nodes are generally used.

It is also the mode most used in practice and is very simple to set up; large Internet companies generally build this mirrored cluster mode.

There are also active-standby mode, remote mode, multi-active mode, etc. They are not the focus of this course; you can look them up yourself.

3.1 cluster construction

Prerequisite: prepare two Linux servers and install RabbitMQ on both.

Cluster steps are as follows:

1. Modify the / etc/hosts mapping file

Server 1:

127.0.0.1 A localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 	  A localhost localhost.localdomain localhost6 localhost6.localdomain6

192.168.44.129 A
192.168.44.130 B

Server 2:

127.0.0.1 B localhost localhost.localdomain localhost4 localhost4.localdomain4
::1		  B localhost localhost.localdomain localhost6 localhost6.localdomain6

192.168.44.129 A
192.168.44.130 B

After modifying the hosts file, restart the server (reboot).

2. The nodes must be able to communicate, and their Erlang cookies must be identical. Synchronize RabbitMQ's .erlang.cookie by copying it across the servers (it is a hidden file; show it with ls -all)

[root@A opt]# scp /var/lib/rabbitmq/.erlang.cookie 192.168.44.130:/var/lib/rabbitmq

After modifying the cookie file, reboot the server.

3. Stop the firewall and start rabbitmq service

[root@A ~]# systemctl stop firewalld
[root@A ~]# systemctl start rabbitmq-server

4. Join the cluster node

[root@B ~]# rabbitmqctl stop_app
[root@B ~]# rabbitmqctl join_cluster rabbit@A
[root@B ~]# rabbitmqctl start_app

5. View node status

[root@B ~]# rabbitmqctl cluster_status

6. View the management side

After the cluster is built, the exchanges, queues and users created earlier belong to the old standalone setup and cannot be used in the new cluster environment.

Therefore, manually add the users again in the new cluster (they can be added on any node and are shared by all nodes).

[root@A ~]# rabbitmqctl add_user laosun 123123
[root@A ~]# rabbitmqctl set_user_tags laosun administrator
[root@A ~]# rabbitmqctl set_permissions -p "/" laosun ".*" ".*" ".*"

Note: when a node leaves the cluster and returns to standalone mode, its original exchanges, queues, users and other data come back.

At this time, the cluster is set up, but the default mode is "normal mode", which is not reliable

3.2 mirror mode

Set all queues as mirror queues, that is, the queues will be copied to each node, and the status of each node is consistent

Syntax: set_policy {name} {pattern} {definition}

Name: policy name, customizable

Pattern: the matching pattern of the queue (regular expression)

"^" can use regular expressions, such as "^ queue"_ "Means to mirror all queues whose queue name starts with" queue_ ", while" ^ "means to match all queues

Definition: image definition, including three parts: Ha mode, ha params, and ha sync mode

Ha mode: (High Available) indicates the mode of the image queue. The valid value is all/exactly/nodes. The current policy mode is all, that is, copy to all nodes, including new nodes

All: indicates that all nodes in the cluster are mirrored

exactly: indicates that mirroring is performed on a specified number of nodes. The number of nodes is specified by HA params

nodes: indicates that mirroring is performed on the specified node, and the node name is specified through ha params

Ha params: parameters required for HA mode mode

Ha sync mode: the synchronization method of messages in the queue. The valid values are automatic and manual

[root@A ~]# rabbitmqctl set_policy xall "^" '{"ha-mode":"all"}'

Set the mirroring policy through the management side

3.3 HAProxy load balancing for the mirrored queues

Although the mirrored cluster synchronizes the data, the program always connects to server A, so A alone receives all the messages and is overworked.

Can we load balance the way Nginx does, so that A and B take turns receiving messages and then mirror-synchronize?

3.3.1 introduction to haproxy

HA (high availability), Proxy (Proxy)

HAProxy is a proxy software that provides high availability, load balancing, and TCP and HTTP based applications

HAProxy is completely free

HAProxy can support tens of thousands of concurrent connections

HAProxy can be easily and safely integrated into the architecture, while protecting the web server from being exposed to the network

[Figure: HAProxy introduction]

3.3.2 HAProxy and Nginx

OSI (Open System Interconnection) divides network communication into seven layers: physical, data link, network, transport, session, presentation and application.

Advantages of Nginx:

It works at OSI layer 7, so it can apply traffic-splitting strategies to HTTP applications

Nginx relies very little on the network. In theory, it can perform the load function if it can ping. It has an absolute advantage so far

The installation and configuration of Nginx are relatively simple and easy to test;

Nginx is not only an excellent load balancer / reverse proxy software, but also a powerful Web application server

Advantages of HAProxy:

It works in layer 4 and layer 7 of the network and supports TCP and Http protocols

It is just a load balancing software; In terms of efficiency, HAProxy has better load balancing speed than Nginx, and is also better than Nginx in concurrent processing

It supports 8 load balancing strategies and heartbeat detection

HA wins in performance and Nginx wins in functionality and convenience

For Http protocol, the processing efficiency of Haproxy is higher than that of Nginx. Therefore, when there are no special requirements or general scenarios, it is recommended to use Haproxy to load the Http protocol

However, if it is a Web application, it is recommended to use Nginx!

In short, you can make reasonable choices in combination with the characteristics of their respective use scenarios

3.3.3 installation and configuration

HAProxy Download: http://www.haproxy.org/download/1.8/src/haproxy-1.8.12.tar.gz

Unpack the archive

[root@localhost opt]# tar -zxvf haproxy-1.8.12.tar.gz

When running make, use TARGET to specify the kernel version

[root@localhost opt]# uname -r
3.10.0-229.el7.x86_64

Select the compilation parameters according to the kernel version:

[Figure: HAProxy TARGET compile options by kernel version]

Enter the directory, compile and install

[root@localhost opt]# cd haproxy-1.8.12
[root@localhost haproxy-1.8.12]# make TARGET=linux2628 PREFIX=/usr/local/haproxy
[root@localhost haproxy-1.8.12]# make install PREFIX=/usr/local/haproxy

After the installation is successful, view the version

[root@localhost haproxy-1.8.12]# /usr/local/haproxy/sbin/haproxy -v

Configure the startup file, copy the haproxy file to / usr/sbin, and copy the haproxy script to / etc/init.d

[root@localhost haproxy-1.8.12]# cp /usr/local/haproxy/sbin/haproxy /usr/sbin/
[root@localhost haproxy-1.8.12]# cp ./examples/haproxy.init /etc/init.d/haproxy
[root@localhost haproxy-1.8.12]# chmod 755 /etc/init.d/haproxy

Create system account

[root@localhost haproxy-1.8.12]# useradd -r haproxy

The haproxy.cfg configuration file needs to be created by yourself

[root@localhost haproxy-1.8.12]# mkdir /etc/haproxy
[root@localhost haproxy-1.8.12]# vim /etc/haproxy/haproxy.cfg

Add configuration information to haproxy.cfg

#Global configuration
global
    #Set log
    log 127.0.0.1 local0 info
    #Current working directory
    chroot /usr/local/haproxy
    #Users and user groups
    user haproxy
    group haproxy
    #User and group IDs the process runs as
    uid 99
    gid 99
    #Daemon Start
    daemon
    #maximum connection
    maxconn 4096
    
#Default configuration
defaults
    #Apply global log configuration
    log global
    #The default mode is {TCP | http | health}. TCP is layer 4 and HTTP is layer 7. Health only returns OK
    mode tcp
    #Log category tcplog
    option tcplog
    #Do not record health check log information
    option dontlognull
    #The service is considered unavailable after 3 failures
    retries 3
    #Maximum number of connections available per process
    maxconn 2000
    #connection timed out
    timeout connect 5s
    #When the client times out for 30 seconds, ha will initiate a reconnect
    timeout client 30s
    #When the server times out for 15 seconds, ha will initiate a reconnect
    timeout server 15s

#Binding configuration
listen rabbitmq_cluster
    bind 192.168.44.131:5672
    #Configure TCP mode
    mode tcp
    #Simple polling
    balance roundrobin
    #RabbitMQ cluster node configuration: check every 5 seconds; 2 consecutive successes mark the node available, 3 consecutive failures mark it unavailable
    server A 192.168.44.129:5672 check inter 5000 rise 2 fall 3
    server B 192.168.44.130:5672 check inter 5000 rise 2 fall 3

#haproxy monitoring page address
listen monitor
    bind 192.168.44.131:8100
    mode http
    option httplog
    stats enable
    # Monitoring page address http://192.168.44.131:8100/monitor
    stats uri /monitor
    stats refresh 5s

Start HAProxy

[root@localhost haproxy]# service haproxy start

Access monitoring center: http://192.168.44.131:8100/monitor

Remember to turn off the firewall: systemctl stop firewalld

When the application sends messages, only the server address needs to change to .131 (the HAProxy host); everything else stays the same.

All requests are sent to HAProxy, and its load is balanced to each rabbitmq server

3.4 KeepAlived builds a highly available HAProxy cluster

Now the last problem is exposed. If the HAProxy server goes down, the rabbitmq server will not be available. Therefore, we need to do high availability clustering for HAProxy

3.4.1 general

Keepalived is a lightweight high-availability hot-standby solution for Linux.

Keepalived's job is to monitor server state. Following the layer 3, layer 4 and layer 5 mechanisms of the TCP/IP reference model it checks each service node; if a web server goes down or fails, Keepalived detects it and removes the failed server from the system, letting other servers take over its work. When the server works normally again, Keepalived automatically adds it back to the cluster. All of this is done automatically without manual intervention; the only manual work is repairing the faulty server.

Keepalived is based on VRRP (Virtual Router Redundancy Protocol), a master/backup protocol. With VRRP, devices can be switched transparently when the network fails, without affecting data communication between hosts.

A virtual IP, called the floating (drift) IP, is shared between the two hosts and held by the master server. Once the master goes down, the backup server grabs the floating IP and continues to serve, which effectively removes the single point of failure in the cluster.

To put it bluntly, multiple router devices are virtualized into one, exposing a single unified IP (the VIP) to the outside.

[Figure: KeepAlived + HAProxy high availability cluster]

3.4.2 installation of KeepAlived

Modify the address mapping of the hosts file

ip                purpose                 host name
192.168.44.131    KeepAlived + HAProxy    C
192.168.44.132    KeepAlived + HAProxy    D

Install keepalived

[root@C ~]# yum install -y keepalived

Modify the configuration file (if the content is greatly changed, it's better to delete and recreate it)

[root@C ~]# rm -rf /etc/keepalived/keepalived.conf
[root@C ~]# vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived
    
global_defs {
	router_id C ## It is very important to identify the hostname of this machine
}

vrrp_script chk_haproxy {
    script "/etc/keepalived/haproxy_check.sh" ## Script location to execute
        interval 2 	## Detection interval
        weight -20 	## If the condition holds, the weight is reduced by 20
}

vrrp_instance VI_1 {
    state MASTER 			## It is very important to identify the host and change the standby machine 132 to BACKUP
    interface eno16777736 	## Very important, network card name (ifconfig view)
    virtual_router_id 66 	## It is very important to customize the virtual route ID number (the primary and standby nodes should be the same)
    priority 100 			## Priority (0-254). Generally, the priority of the host is greater than that of the standby
    advert_int 1 			## The sending interval of active and standby information must be the same between the two nodes. The default is 1 second
    authentication { 		## Authentication matching, set authentication type and password. MASTER and BACKUP must use the same password to communicate normally
	    auth_type PASS
    	auth_pass 1111
    }
    track_script {
        chk_haproxy 		## Script to check the health of haproxy
    }
    virtual_ipaddress { 	## Referred to as "VIP"
        192.168.44.66/24 	## Very important, you can specify multiple virtual ip addresses. You can use this virtual ip when connecting to mq in the future
        }
}

virtual_server 192.168.44.66 5672 { 	## Detailed configuration of virtual ip
    delay_loop 6 						# Health check interval in seconds
    lb_algo rr 							# lvs scheduling algorithm rr|wrr|lc|wlc|lblc|sh|dh
    lb_kind NAT 						# Load balancing forwarding rules. It generally includes Dr, NAT and tun
    protocol TCP 						# There are two forwarding protocols, TCP and UDP. TCP is generally used
    real_server 192.168.44.131 5672 { 	## Real ip address of the machine
        weight 1 						# The default is 1, and 0 is invalid
    }
}

Create the script /etc/keepalived/haproxy_check.sh and make it executable

#!/bin/bash
COUNT=`ps -C haproxy --no-header |wc -l`
if [ $COUNT -eq 0 ];then
	/usr/local/haproxy/sbin/haproxy -f /etc/haproxy/haproxy.cfg
	sleep 2
	if [ `ps -C haproxy --no-header |wc -l` -eq 0 ];then
		killall keepalived
	fi
fi

The heartbeat check between the Keepalived groups does not detect whether the HAproxy load is normal, so this script needs to be used.

On the Keepalived host, start this script to check whether HAproxy works normally. If it works normally, record the log.

If the process does not exist, try to restart HAproxy and detect it after 2 seconds. If not, turn off the main Keepalived. At this time, the standby Keepalived detects that the main Keepalived is hung up, takes over the VIP and continues the service

Grant execute permission, otherwise the script cannot run

[root@C etc]# chmod +x /etc/keepalived/haproxy_check.sh

Start Keepalived (on both servers)

[root@C etc]# systemctl stop firewalld
[root@C etc]# service keepalived start | stop | status | restart

View status

[root@C etc]# ps -ef | grep haproxy
[root@C etc]# ps -ef | grep keepalived

Check ip addr or ip a

[root@C etc]# ip a

[Figure: KeepAlived + HAProxy high availability cluster 02]

At this point, after the installation is completed, you can install the second server according to the above steps (note to modify the server hostname and ip)

Common network errors: the subnet mask, gateway and other information should be consistent

[Figure: KeepAlived + HAProxy high availability cluster 03]

3.4.3 rules for testing ip drift

View virtual IP addr or ip a

At present, node C is the host, so the virtual ip is on node C

[Figure: ip drift test 01]

Stop the keepalived of C, and the virtual ip drifts to node D

[Figure: ip drift test 02]

Restart Keepalived on node C. The virtual IP stays on node D and does not move back just because C has recovered.

[Figure: ip drift test 03]

Stop D's keepalived, and the virtual ip will drift back to node C

[Figure: ip drift test 04]
