RabbitMQ from scratch: installation, cluster building, mirrored queue configuration, and code verification

Preface

Not much preamble, let's just start. I originally wanted to use the latest version, but since production and test environments must stay consistent and cannot be upgraded casually, I downloaded the RPMs of the specific versions pinned below.

RabbitMQ concept

Broker: a message-middleware service node, i.e. one running RabbitMQ instance; you can think of it as one RabbitMQ server

Queue: stores messages. This differs from Kafka, where messages live at the logical level of a topic and the topic only records offsets into the actual storage files. Multiple consumers can subscribe to the same queue, and messages are dispatched among them round-robin

Exchange: producers send messages to an exchange, which routes them to one or more queues

  • direct: a routing key is set when binding the exchange to a queue, and the producer also sets a routing key when sending a message to the exchange. If the two routing keys are identical, the message is placed on the bound queue.

  • topic works like direct but supports wildcard patterns in the routing key. Two wildcards are available: * and #, where * matches exactly one word and # matches zero or more words

  • fanout routes messages sent to the exchange to all queues bound to it

  • headers routes according to the headers attached to the message

    • x-match = all: all headers must match

    • x-match = any: matching any one header is enough

RoutingKey: when a producer sends a message to an exchange, it usually specifies a RoutingKey, which sets the routing rule for the message. The RoutingKey only takes effect in combination with the exchange type and the binding key. With the exchange type and binding key fixed, the producer controls where a message flows by choosing the RoutingKey at publish time

BindingKey: the exchange is associated with a queue through a binding. A binding key is usually specified at binding time, so RabbitMQ knows how to route messages to the queue correctly. A binding key is only valid for the specific exchange it was declared on.

Producer: Message producer

Consumer: message consumer
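The concepts above can be sketched with the RabbitMQ Java client. This is a minimal sketch with hypothetical names (demo.direct, demo.queue, demo.key) and a broker assumed on localhost: for a direct exchange, a message is delivered only when the publish routing key equals the binding key.

```java
import java.nio.charset.StandardCharsets;

import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;

public class DirectExchangeDemo {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost"); // assumption: a broker reachable on localhost:5672
        try (Connection conn = factory.newConnection();
             Channel channel = conn.createChannel()) {
            // exchange and queue are associated by a binding carrying the binding key
            channel.exchangeDeclare("demo.direct", "direct", true);
            channel.queueDeclare("demo.queue", true, false, false, null);
            channel.queueBind("demo.queue", "demo.direct", "demo.key");
            // for a direct exchange the routing key must equal the binding key
            channel.basicPublish("demo.direct", "demo.key", null,
                    "hello".getBytes(StandardCharsets.UTF_8));
        }
    }
}
```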

Installation conditions

Environment

CentOS 7.4, 3 virtual machines (8C16G each)

User privileges

sudo permission required

Installation files

The downloaded files all go into the directory /home/lazasha/download. For the version compatibility between RabbitMQ and Erlang, see:

https://www.rabbitmq.com/which-erlang.html

epel: epel-release-7-12.noarch.rpm

Download address: https://mirrors.tuna.tsinghua.edu.cn/epel/7/x86_/packages/e/epel-release-7-12.noarch.rpm

erlang:erlang-22.1.8-1.el7.x86_64.rpm

Download address: https://github.com/rabbitmq/erlang-rpm/releases

rabbitmq: rabbitmq-server-3.8.2-1.el7.noarch.rpm

Download address: https://dl.bintray.com/rabbitmq/all/rabbitmq-server/3.8.2/

Key: rabbitmq-release-signing-key.asc

Download address: https://github.com/rabbitmq/signing-keys/releases

Steps

epel installation

sudo yum -y install epel-release-7-12.noarch.rpm

erlang installation

sudo yum -y install erlang-22.1.8-1.el7.x86_64.rpm

 

Check for successful installation:

Input: erl

 

rabbitmq installation

sudo yum -y install rabbitmq-server-3.8.2-1.el7.noarch.rpm 

 

Verify success:

sudo systemctl start rabbitmq-server 
sudo systemctl status rabbitmq-server

 

Out of Service:

sudo systemctl stop rabbitmq-server

Repeat the same steps on the other two machines. The default service port is 5672

Cluster building

Add the corresponding IPs and node names to the /etc/hosts file on all three machines

10.156.13.92 lchod1392
10.156.13.93 lchod1393
10.156.13.94 lchod1394

Copy the cookie file on lchod1392 to the lchod1393 and lchod1394 nodes; the cookies of all nodes in a cluster must be identical. The default path of the cookie file in the RPM install is /var/lib/rabbitmq/.erlang.cookie

Note: .erlang.cookie may have permission problems; fix them with:

sudo chmod -R 600 /var/lib/rabbitmq/.erlang.cookie

Note: after copying to the other two machines, be sure to run the following command to change the owner of .erlang.cookie:

sudo chown -R rabbitmq:rabbitmq /var/lib/rabbitmq/.erlang.cookie

 

Use rabbitmqctl to configure the cluster. The port for internal cluster communication is 25672

1. Start the RabbitMQ service on three nodes first

sudo systemctl start rabbitmq-server

You can run rabbitmqctl cluster_status to view the cluster status of each node

2. Using lchod1392 as the base, add lchod1393 and lchod1394 to the cluster, keeping all 3 nodes as disc (disk) nodes.

lchod1393:

    sudo rabbitmqctl stop_app                        # stops only the rabbitmq application, not the erlang VM
    sudo rabbitmqctl reset                           # I did not run this command when joining the cluster
    sudo rabbitmqctl join_cluster rabbit@lchod1392   # add --ram for a RAM node; the default is a disc node
    sudo rabbitmqctl start_app

lchod1394:

    sudo rabbitmqctl stop_app                        # stops only the rabbitmq application, not the erlang VM
    sudo rabbitmqctl reset                           # I did not run this command when joining the cluster
    sudo rabbitmqctl join_cluster rabbit@lchod1392   # add --ram for a RAM node; the default is a disc node
    sudo rabbitmqctl start_app

 

3. Check the cluster status

sudo rabbitmqctl cluster_status

 

Note: if all nodes in the cluster are shut down, make sure the node that was shut down last is started first; otherwise there will be problems.

Create remote access user

sudo rabbitmqctl add_user rabbitmq ******
sudo rabbitmqctl set_user_tags rabbitmq administrator
sudo rabbitmqctl set_permissions -p "/" rabbitmq ".*" ".*" ".*"
# list users to verify
sudo rabbitmqctl list_users

 

Note: don't forget to enable the web management plugin (sudo rabbitmq-plugins enable rabbitmq_management), then restart with systemctl restart rabbitmq-server. The management UI listens on port 15672

Mirrored queue setup

Each mirrored queue consists of one master and several slaves, and the queue masters should be distributed evenly across the brokers in the cluster. If the master stops working, one of the slaves of the mirrored queue is promoted to master

The mirrored-queue configuration is done mainly by adding a corresponding policy:

rabbitmqctl set_policy [-p vhost] [--priority priority] [--apply-to apply-to] {name} {pattern} {definition}

The definition consists of ha-mode, ha-params, and ha-sync-mode

  • ha-mode: the mirroring mode. Valid values are all/exactly/nodes; the default is all.
    all mirrors on every node in the cluster;
    exactly mirrors on a fixed number of nodes, the count given by ha-params;
    nodes mirrors on the named nodes given by ha-params; node names usually look like rabbit@hostname and can be listed with the rabbitmqctl cluster_status command

  • ha-params: the parameters required by the chosen ha-mode.

  • ha-sync-mode: the message synchronization mode of the queue. Valid values are automatic and manual

 

Sample commands

  • Mirror all queues whose name begins with "queue_" on exactly two nodes of the cluster:

    rabbitmqctl set_policy --priority 0 --apply-to queues mirror_queue "^queue_" '{"ha-mode":"exactly","ha-params":2,"ha-sync-mode":"automatic"}'

  • Mirror all queues whose name begins with "queue_" on all nodes of the cluster:

    rabbitmqctl set_policy --priority 0 --apply-to queues mirror_queue "^queue_" '{"ha-mode":"all","ha-sync-mode":"automatic"}'
    

    rabbitmqctl set_policy ha-all "^" '{"ha-mode":"all"}' sets every queue as a mirrored queue

Command execution

   sudo rabbitmqctl set_policy --priority 0 --apply-to queues mirror_queue "^queue_" '{"ha-mode":"all","ha-sync-mode":"automatic"}'

Verification

Use the newly created rabbitmq user to log in to the management UI on each remote machine from the local machine

lchod1392: create a queue whose name starts with queue_

lchod1393: the queue already exists on this node

lchod1394: the queue is on this node as well

Queue knowledge

The mandatory and immediate parameters of the channel.basicPublish method

  • When the mandatory parameter is set to true and the exchange cannot find a qualifying queue for its type and routing key, RabbitMQ calls the Basic.Return command to return the message to the producer. When mandatory is set to false, the message is silently discarded. How, then, does a producer get hold of messages that could not be routed to a suitable queue? By registering a ReturnListener via channel.addReturnListener.

  • When the immediate parameter is set to true and the exchange finds no consumers on a queue while routing a message to it, the message is not enqueued. When none of the queues matching the routing key have consumers, the message is returned to the producer via Basic.Return.

  • In summary, mandatory tells the server to route the message to at least one queue or else return it to the producer. immediate tells the server to deliver the message at once if a consumer is attached to the matched queue; if no matching queue has a consumer, the message is returned directly to the producer instead of being enqueued to wait for one.

  • RabbitMQ 3.0 removed support for the immediate parameter. The official explanation is that immediate hurt mirrored-queue performance and increased code complexity; TTL and DLX are recommended instead
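The mandatory flag plus a ReturnListener can be exercised with the Java client. This is a sketch with invented names (demo.direct, no.such.key) and a broker assumed on localhost; publishing with a routing key no binding matches makes the broker send the message back via Basic.Return.

```java
import java.nio.charset.StandardCharsets;

import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;

public class MandatoryReturnDemo {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost"); // assumption: local broker
        try (Connection conn = factory.newConnection();
             Channel channel = conn.createChannel()) {
            channel.exchangeDeclare("demo.direct", "direct", true);
            // fires for every message the broker hands back via Basic.Return
            channel.addReturnListener((replyCode, replyText, exchange, routingKey, props, body) ->
                    System.out.println("returned: " + new String(body, StandardCharsets.UTF_8)
                            + ", reason: " + replyText));
            // mandatory = true: an unroutable message comes back instead of being dropped
            channel.basicPublish("demo.direct", "no.such.key", true, null,
                    "unroutable".getBytes(StandardCharsets.UTF_8));
            Thread.sleep(500); // give the return listener a moment to fire
        }
    }
}
```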

TTL time to live

  • Setting methods: via a queue attribute, so every message in the queue shares the same expiration time; or per message, so messages in one queue can have different expiration times. If both are set, the smaller value wins

  • Code to set the TTL of messages in a queue

    Map<String, Object> args = new HashMap<String, Object>();
    args.put("x-message-ttl", 5000);
    channel.queueDeclare(queueName, durable, exclusive, autoDelete, args);
    

    Set this way, a message is erased from the queue as soon as it expires

    The way to set TTL per message is to supply the expiration property in the channel.basicPublish call, in milliseconds:

    AMQP.BasicProperties.Builder builder = new AMQP.BasicProperties.Builder();
    builder.deliveryMode(2);        // persistent message
    builder.expiration("50000");    // set TTL = 50000 ms (expiration takes a String)
    AMQP.BasicProperties properties = builder.build();
    channel.basicPublish(exchangeName, routingKey, mandatory, properties,
            "test ttl".getBytes());
    

    Set this way, an expired message is not erased from the queue immediately, because each message's expiration is only checked just before it is delivered to a consumer

  • If no TTL is set, the message never expires; if TTL is set to 0, the message is discarded immediately unless it can be delivered straight to a consumer at that moment

  • Set TTL of queue

    The x-expires argument in the channel.queueDeclare method controls how long a queue may go unused before it is automatically deleted. Unused means the queue has no consumers, has not been redeclared, and Basic.Get has not been invoked during the expiration period.

    Map<String, Object> args = new HashMap<String, Object>();
    args.put("x-expires", 100000);
    channel.queueDeclare("queue_sleb", false, false, false, args);

Dead letter exchange (DLX): when a message becomes a dead letter in a queue, it can be re-published to another exchange, the DLX. A queue bound to a DLX is called a dead letter queue. A message becomes a dead letter when:

  • The message is rejected (Basic.Reject / Basic.Nack) with the requeue parameter set to false

  • Message expiration

  • Queue reaches maximum length

  • A consumer can be attached to the dead letter queue to process these messages

  • Add a DLX to a queue by setting the x-dead-letter-exchange argument in the channel.queueDeclare method

     

    channel.exchangeDeclare("dlx_exchange", "direct");   // create the DLX: dlx_exchange
    Map<String, Object> args = new HashMap<String, Object>();
    args.put("x-dead-letter-exchange", "dlx_exchange");
    //Add the DLX for queue myqueue
    channel.queueDeclare("myqueue", false, false, false, args);

    //You can also specify a routing key for the DLX. If no key is specified, the routing
    //key of the original message is kept; if a key is specified, dead letters are
    //republished with that routing key:
    args.put("x-dead-letter-routing-key", "dlx-routing-key");

Delay queue

  • Scenario: an order is valid for payment within 30 minutes, otherwise it will be cancelled automatically

  • Use the above TTL and DLX to achieve the function of delay queue
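The TTL-plus-DLX combination above can be sketched as follows. All names (order.delay.queue, order.close.exchange, order.close.queue) are invented for illustration, and a broker on localhost is assumed: orders wait in a consumer-less queue with a 30-minute TTL, and expired orders are dead-lettered to the exchange the real consumer listens on.

```java
import java.util.HashMap;
import java.util.Map;

import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;

public class DelayQueueDemo {
    public static void main(String[] argv) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost"); // assumption: local broker
        try (Connection conn = factory.newConnection();
             Channel channel = conn.createChannel()) {
            // the target side: the queue the real consumer reads, fed by the DLX
            channel.exchangeDeclare("order.close.exchange", "direct", true);
            channel.queueDeclare("order.close.queue", true, false, false, null);
            channel.queueBind("order.close.queue", "order.close.exchange", "order.close");

            // the delay side: no consumers; messages sit here until the TTL fires
            Map<String, Object> args = new HashMap<>();
            args.put("x-message-ttl", 30 * 60 * 1000);               // 30-minute payment window
            args.put("x-dead-letter-exchange", "order.close.exchange");
            args.put("x-dead-letter-routing-key", "order.close");
            channel.queueDeclare("order.delay.queue", true, false, false, args);

            // publish the order; 30 minutes later it appears on order.close.queue
            channel.basicPublish("", "order.delay.queue", null, "order-1001".getBytes("UTF-8"));
        }
    }
}
```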

Priority queue

By setting the x-max-priority parameter of the queue:

    Map<String, Object> args = new HashMap<String, Object>();
    args.put("x-max-priority", 10);
    channel.queueDeclare("queue.priority", true, false, false, args);

Priorities only take effect when producers are faster than consumers and messages back up in the broker
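On the producer side, a per-message priority is set through BasicProperties; a sketch, assuming a local broker and the queue.priority queue declared as above (the priority value must not exceed the queue's x-max-priority):

```java
import com.rabbitmq.client.AMQP;
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;

public class PriorityPublishDemo {
    public static void main(String[] argv) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost"); // assumption: local broker
        try (Connection conn = factory.newConnection();
             Channel channel = conn.createChannel()) {
            AMQP.BasicProperties props = new AMQP.BasicProperties.Builder()
                    .priority(5) // must be <= the queue's x-max-priority (10 above)
                    .build();
            // publish to the default exchange; the routing key is the queue name
            channel.basicPublish("", "queue.priority", props, "urgent".getBytes("UTF-8"));
        }
    }
}
```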

Persistence

  • Only when the exchange, the queue, and the message are all persistent is persistence truly achieved

  • Exchange persistence: set durable = true

  • Queue persistence: durable = true

  • Message persistence: set the message's delivery mode (the deliveryMode property in BasicProperties) to 2 (MessageDeliveryMode.PERSISTENT in Spring AMQP)
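Putting the three together, a sketch with hypothetical names (demo.direct, demo.queue, demo.key) and a broker assumed on localhost: a durable exchange, a durable queue, and a message published with deliveryMode 2 via the MessageProperties.PERSISTENT_TEXT_PLAIN preset.

```java
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;
import com.rabbitmq.client.MessageProperties;

public class PersistenceDemo {
    public static void main(String[] argv) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost"); // assumption: local broker
        try (Connection conn = factory.newConnection();
             Channel channel = conn.createChannel()) {
            channel.exchangeDeclare("demo.direct", "direct", true);       // durable exchange
            channel.queueDeclare("demo.queue", true, false, false, null); // durable queue
            channel.queueBind("demo.queue", "demo.direct", "demo.key");
            // PERSISTENT_TEXT_PLAIN carries deliveryMode 2, marking the message persistent
            channel.basicPublish("demo.direct", "demo.key",
                    MessageProperties.PERSISTENT_TEXT_PLAIN,
                    "durable payload".getBytes("UTF-8"));
        }
    }
}
```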

publisher confirm

publisher confirms # confirm whether the message reached the broker, i.e. only whether it arrived at the exchange correctly; once it reaches the exchange, the broker acks the confirm back to the client

publisher returns # confirm whether the message reached a queue correctly; the return callback fires if it did not, and does not fire if it did

The ConfirmCallback interface is used to receive ack callbacks after messages are sent to the RabbitMQ switch.

    rabbitTemplate.setConfirmCallback((correlationData, ack, cause) -> {
                if (ack) {
                    CorrelationDataEx c = (CorrelationDataEx)correlationData;
                    System.out.println("Send message: " + c.getMsg());
                    System.out.println("HelloSender Message sent successfully :" + correlationData.toString() );
                    /**
                     * By setting correlationData.id to the business primary key,
                     * the follow-up business logic can continue once the message is sent successfully.
                     */
                } else {
                    System.out.println("HelloSender Message sending failed" + cause);
                }
            });

The ReturnCallback interface implements the callback fired when a message reaches the RabbitMQ exchange but no queue bound to the exchange matches it

     rabbitTemplate.setReturnCallback((message, replyCode, replyText, exchange, routingKey) -> {
                 //Users users1 = (Users)message.getBody().toString();
                 //String correlationId = message.getMessageProperties().getCorrelationId();

                 System.out.println("Message : " + new String(message.getBody()));
                 //System.out.println("Message : " + new String(message.getBody()));
                 System.out.println("replyCode : " + replyCode);
                 System.out.println("replyText : " + replyText);  //Error reason
                 System.out.println("exchange : " + exchange);
                 System.out.println("routingKey : " + routingKey);//queue name

             });

 

     /**
              * CorrelationDataEx extends CorrelationData and adds the key business fields of the message.
              * The ConfirmCallback then receives a correlationData carrying those fields, so we can tell
              * which business record the confirmation belongs to.
              */
             CorrelationDataEx c = new CorrelationDataEx();
             c.setId(users.getId().toString());
             c.setMsg(users.toString());

             /**
              * With this converter, the message in the ReturnCallback can be read back as the JSON that was sent; otherwise it arrives as raw bytes.
              * For example, when the ReturnCallback fires, the message was not delivered to a queue; the business can then react, e.g. mark the record as not delivered and count the delivery attempts.
              */
             rabbitTemplate.setMessageConverter(new Jackson2JsonMessageConverter());

             rabbitTemplate.convertAndSend(EXCHANGE, QUEUE_TWO_ROUTING, users, c);

Message consumption

1. configuration

        listener:
              simple:
                prefetch: 1               #Set to process one message at a time
                acknowledge-mode: manual  #Set manual ack of consumer end
                concurrency: 3            #Set up 3 consumer consumption at the same time, 3 Consumer instances are needed

2. code

        @RabbitHandler
            @RabbitListener(queues = QUEUE_ONE_ROUTING) //containerFactory = "rabbitListenerContainerFactory", concurrency = "2")
            public void process(Users users, Channel channel, Message message) throws IOException {
                System.out.println("HelloReceiver Received  : " + users.toString() + "Receive time" + new Date());

                try {
                    //Tell the server that this message has been consumed by me. It can be deleted in the queue so that it will not be sent again
                    //Otherwise, the message server thinks that this message has not been processed and will be sent later
                    channel.basicAck(message.getMessageProperties().getDeliveryTag(), false);
                    System.out.println("receiver success");
                } catch (IOException e) {
                    e.printStackTrace();
                    //If you discard this message, it will not be sent again
                    //channel.basicNack(message.getMessageProperties().getDeliveryTag(), false, false);
                    System.out.println("receiver fail");
                }
            }

Verification

Create message producers and consumers

Producer

Cluster configuration:

    spring:
      application:
        name: rabbitmq-producer-demo
      rabbitmq:
        #Single point configuration
        #host: localhost
        #port: 5672
        #Cluster configuration
        addresses: 10.156.13.92:5672,10.156.13.93:5672,10.156.13.94:5672
        username: rabbitmq  #guest is the default and can only be accessed by localhost network. To access the remote network, users need to be created
        password: 123456
        #mysql, for example, has the concept of a database and can specify the user's permissions for operations such as databases and tables. What about RabbitMQ? RabbitMQ has similar rights management.
        #In RabbitMQ, virtual message server VirtualHost can be used. Each VirtualHost is equivalent to a relatively independent RabbitMQ server,
        #Each VirtualHost is isolated from each other. exchange, queue and message cannot communicate with each other. Equivalent to mysql db.
        #Virtual Name usually starts with /
        virtual-host: /
        #Confirms whether the message reached a queue correctly; the return callback fires only when routing failed
        publisher-returns: on
        #Confirm whether the message reaches the broker server, that is, only confirm whether it reaches the exchange correctly,
        #As long as it reaches the exchange correctly, the broker can confirm that the message is returned to the client ack
        #If it is simple, it will not callback
        publisher-confirm-type: correlated
        template:
          #When enabled, a message that cannot be routed to a suitable queue is handed to the return listener instead of being discarded silently
          mandatory: on

Queue setup: the queue_sleb_accept queue is configured

    @Configuration
    public class RabbitConfig {
        /**
         * Name of insured message exchange
         */
        public static final String EXCHANGE_SLEB_ACCEPT = "exchange_sleb_accept";

        /**
         * Insurance message queue
         */
        public static final String QUEUE_SLEB_ACCEPT = "queue_sleb_accept";
        /**
         * Insurance message routing key
         */
        public static final String ROUTING_KEY_ACCEPT = "routing_key_accept";
        /**
         *  Insurance message dead letter switch
         */
        public static final String DLX_EXCHANGE_SLEB_ACCEPT = "exchange_dlx_sleb_accept";
        /**
         * Insurance message dead letter queue
         */
        public static final String DLX_QUEUE_SLEB_ACCEPT = "queue_dlx_sleb_accept";
        /**
         *  Common exchanger types are as follows:
         *       Direct(DirectExchange): the direct type behaves as "match first, then deliver".
         *       That is, a routing key is set at binding time, and only a message whose routing key matches it exactly is delivered by the exchange to the bound queue.
         *       Topic(TopicExchange): Forward messages by rules (most flexible).
         *       Headers(HeadersExchange): Set the switch of the header attribute parameter type.
         *       Fanout(FanoutExchange): Forward messages to all binding queues.
         *
         * All exchanges below are direct: routing key and binding key must match exactly
         * Insurance message exchange
         */
        @Bean("slebAcceptExchange")
        DirectExchange slebAcceptExchange() {
            return ExchangeBuilder.directExchange(EXCHANGE_SLEB_ACCEPT).durable(true).build();

        }
        /**
         * The second parameter is durable: whether to persist. If true, this kind of queue is called Durable queues. This queue will be stored on disk,
         *                 When the message broker restarts, it still exists. Queues that are not persisted are called Transient queues.
         * The third parameter exclusive: the queue can only be used by the connection that created it and is deleted when that connection closes. This setting takes precedence over durable.
         * The fourth parameter autoDelete: when no producer / consumer uses this queue, it will be automatically deleted. (i.e. deleted when the last consumer unsubscribes)
         *
         * Here the queue is persistent (durable=true); the exchange also needs to be persistent
         * ********************Dead letter queue**********************************************************
         *            x-dead-letter-exchange    The dead letter switch bound to the current queue is declared here
         *            x-dead-letter-routing-key  The dead letter route key of the current queue is declared here
         *            The following parameters are used only when dead letter queue is used
         *            Map<String, Object> args = new HashMap<>(2);
         *            args.put("x-dead-letter-exchange", DLX_EXCHANGE_SLEB_ACCEPT);
         *            args.put("x-dead-letter-routing-key", ROUTING_KEY_ACCEPT);
         *            return QueueBuilder.durable(QUEUE_SLEB_ACCEPT).withArguments(args).build();
         * ********************Dead letter queue**********************************************************
         * Insurance message queue
         */
        @Bean("slebAcceptQueue")
        public Queue slebAcceptQueue() {
            return QueueBuilder.durable(QUEUE_SLEB_ACCEPT).build();
        }

        /**
         * Switch, queue, binding
         */
        @Bean("bindingSlebAcceptExchange")
        Binding bindingSlebAcceptExchange(@Qualifier("slebAcceptQueue") Queue queue,
                                          @Qualifier("slebAcceptExchange") DirectExchange directExchange) {
            return BindingBuilder.bind(queue).to(directExchange).with(ROUTING_KEY_ACCEPT);
        }
        /**
         * Insurance dead letter exchange
         */
        @Bean("slebDlxAcceptExchange")
        DirectExchange slebDlxAcceptExchange() {
            return ExchangeBuilder.directExchange(DLX_EXCHANGE_SLEB_ACCEPT).durable(true).build();
        }
        /**
         * Insurance dead letter queue
         */
        @Bean("slebDlxAcceptQueue")
        public Queue slebDlxAcceptQueue() {
            return QueueBuilder.durable(DLX_QUEUE_SLEB_ACCEPT).build();
        }
        /**
         * Dead letter switch, queue, binding
         */
        @Bean("bindingDlxSlebAcceptExchange")
        Binding bindingDlxSlebAcceptExchange(@Qualifier("slebDlxAcceptQueue") Queue     queue, @Qualifier("slebDlxAcceptExchange") DirectExchange directExchange) {
            return BindingBuilder.bind(queue).to(directExchange).with(ROUTING_KEY_ACCEPT);
        }

Producing messages

    @Service
    public class AcceptProducerServiceImpl implements AcceptProducerService {
        private final Logger logger = LoggerFactory.getLogger(AcceptProducerServiceImpl.class);


        private final RabbitTemplate rabbitTemplate;

        public AcceptProducerServiceImpl(RabbitTemplate rabbitTemplate) {
            this.rabbitTemplate = rabbitTemplate;
        }

        @Override
        public void sendMessage(PolicyModal policyModal) {
            logger.info("Start sending time: " + DateUtils.localDateTimeToString(LocalDateTime.now())
                    + ",Policy number: " + policyModal.getPolicyNo()
                    + ",send content: " + policyModal.toString());
            /*
             * PolicyDataEx extends CorrelationData and adds the key business fields of the message.
             * The ConfirmCallback then receives a correlationData carrying those fields, so we can tell which business record was sent.
             * policyNo is a unique value.
             */
            PolicyDataEx policyDataEx = new PolicyDataEx();
            policyDataEx.setId(policyModal.getPolicyNo());
            policyDataEx.setMessage(policyModal.toString());

            /*
             * With this converter, the message in the ReturnCallback can be read back as the JSON that was sent; otherwise it is raw bytes.
             * For example, when the ReturnCallback fires, the message was not delivered to a queue; the business can then mark the record as not delivered and count the delivery attempts.
             */
            //rabbitTemplate.setMessageConverter(new Jackson2JsonMessageConverter());
            //The fully qualified name (package name + class name) of PolicyModal class will be brought into mq, so the consumer service side must have the same fully qualified name class, otherwise the reception will fail.

            rabbitTemplate.convertAndSend(RabbitConfig.EXCHANGE_SLEB_ACCEPT, RabbitConfig.ROUTING_KEY_ACCEPT, policyModal, policyDataEx);

        }

Operational verification

http://localhost:9020/sendsing

 

View the three server consoles: the mirrored queue has been created and one message is in the queue:

Consumer

Configuration

    spring:
      application:
        name: rabbitmq-consumer-demo
      rabbitmq:
        #Single point configuration
        #host: localhost
        #port: 5672
        #Cluster configuration
        addresses: 10.156.13.92:5672,10.156.13.93:5672,10.156.13.94:5672
        username: rabbitmq
        password: 123456
        #mysql, for example, has the concept of a database and can specify the user's permissions for operations such as databases and tables. What about RabbitMQ? RabbitMQ has similar rights management.
        #In RabbitMQ, virtual message server VirtualHost can be used. Each VirtualHost is equivalent to a relatively independent RabbitMQ server,
        #Each VirtualHost is isolated from each other. exchange, queue and message cannot communicate with each other. Equivalent to mysql db.
        #Virtual Name usually starts with /
        virtual-host: /
        listener:
          simple:
            prefetch: 1               #Set to process one message at a time
            acknowledge-mode: manual  #Set manual ack of consumer end
            concurrency: 3            #Set up 3 consumers to consume at the same time
            #Message receiving confirmation, optional modes: NONE, AUTO, MANUAL

Configure the queue names; they must be identical to the names used in the producer

    public class RabbitMQConfigInfo {
        /**
         * Insurance message queue
         */
        public static final String QUEUE_SLEB_ACCEPT = "queue_sleb_accept";
        /**
         * Name of insured message exchange
         */
        public static final String EXCHANGE_SLEB_ACCEPT = "exchange_sleb_accept";

        /**
         * Insurance message routing key
         */
        public static final String ROUTING_KEY_ACCEPT = "routing_key_accept";
    }

consumption

    @Service
    public class RabbitConsumerServiceImpl implements RabbitConsumerService {

        private final Logger logger = LoggerFactory.getLogger(RabbitConsumerServiceImpl.class);

        @RabbitHandler
        @RabbitListener(bindings = @QueueBinding(
                value = @Queue(name = QUEUE_SLEB_ACCEPT, durable = "true"),
                exchange = @Exchange(name = EXCHANGE_SLEB_ACCEPT,
                        ignoreDeclarationExceptions = "true"),
                key = {ROUTING_KEY_ACCEPT}
        ))
        @Override
        public void process(Channel channel, Message message) throws IOException {
            String jsonStr = new String(message.getBody());
            logger.info("Time of receiving message: " + DateUtils.localDateTimeToString(LocalDateTime.now())
                    + "\n,Message:" + jsonStr);
            //The fully qualified name (package name + class name) of PolicyModal class will be brought into mq, so the consumer service side must have the same fully qualified name class, otherwise the reception will fail.
            PolicyModal policyModal = JsonUtils.JSON2Object(jsonStr, PolicyModal.class);
            assert policyModal != null;
            try {
                //Get the body in the message, convert it to PolicyModal, and then get policyno
                //According to the flag in the new database of policyno,
                // todo

                //Tell the server that this message has been consumed by me. It can be deleted in the queue so that it will not be sent again
                //Otherwise, the message server thinks that this message has not been processed and will be sent later
                //throw new IOException("myself");
                channel.basicAck(message.getMessageProperties().getDeliveryTag(), false);
            /*logger.info("Receive processing succeeded:\n"
                    + "Received message time: " + DateUtils.localDateTimeToString(LocalDateTime.now())
                    + ",Policy No. " + policyModal.getPolicyNo()
                    + "\n,Message: " + new String(message.getBody()));
            */
            } catch (IOException e) {
                e.printStackTrace();
                //If you discard this message, it will not be sent again
                //Generally, it is not discarded. After timeout, mq will automatically go to dead letter queue (if timeout and dead letter switch and queue are set)
                //channel.basicNack(message.getMessageProperties().getDeliveryTag(), false, false);
                logger.info("Receive processing failed:\n"
                        + "Time of receiving message: " + DateUtils.localDateTimeToString(LocalDateTime.now())
                        + ",Policy number: " + policyModal.getPolicyNo()
                        + "\n,Message: " + new String(message.getBody()));
            }
        }

    }

Startup validation

In each server console, the message has been consumed, and the message in the queue is 0

End

Writing technical articles is hard; this one took about a week. I hope it helps you. If you need the verification code, contact me at lazasha@163.com and I will send it to you. Too lazy to put it on github for now; maybe later.


Tags: RabbitMQ sudo Erlang RPM

Posted on Sun, 12 Jan 2020 06:31:25 -0500 by Marc