Preface
This article analyzes ActiveMQ from several angles: how messages are stored in and removed from the broker, when persistent messages are written to storage, which persistence mechanisms ActiveMQ supports and how to configure them, the protocols and transports ActiveMQ supports, high-availability clustering, and the scenarios each approach suits.
Persistence in ActiveMQ
If the delivery mode is set to persistent, the message will be written to the persistent store.
Messaging process
When a producer sends a message, the broker stores it and returns an ACK to confirm successful delivery; the producer blocks until the ACK arrives. Whether the stored message survives a restart depends on its delivery mode: a persistent message must be written to disk before the ACK is returned, while a non-persistent message is kept only in memory.
After a consumer subscribes, the broker pushes messages to it; a message is not removed from the store until it has been consumed successfully.
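A minimal sketch of the producer side of this flow, assuming the JMS API and the ActiveMQ client library (the broker URL and queue name here are placeholders):

```java
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.DeliveryMode;
import javax.jms.MessageProducer;
import javax.jms.Session;
import org.apache.activemq.ActiveMQConnectionFactory;

public class PersistentProducer {
    public static void main(String[] args) throws Exception {
        // Connect to the broker (placeholder URL)
        ConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61616");
        Connection conn = factory.createConnection();
        conn.start();
        Session session = conn.createSession(false, Session.AUTO_ACKNOWLEDGE);
        MessageProducer producer = session.createProducer(session.createQueue("demo.queue"));
        // PERSISTENT: the broker must write the message to disk before it ACKs,
        // and send() blocks until that ACK is received.
        producer.setDeliveryMode(DeliveryMode.PERSISTENT);
        producer.send(session.createTextMessage("hello"));
        conn.close();
    }
}
```

With DeliveryMode.NON_PERSISTENT the broker keeps the message in memory only, which is faster but loses the message if the broker fails.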
Persistence supported by ActiveMQ
- AMQ
- JDBC — added in ActiveMQ v4
- KahaDB — added in ActiveMQ v5.3; the default persistence store since v5.4
- LevelDB — added in ActiveMQ v5.8; now officially deprecated and no longer supported
- Replicated LevelDB Store — added in ActiveMQ v5.9; now officially deprecated and no longer supported
Configuration
<xs:element name="persistenceAdapter" maxOccurs="1" minOccurs="0">
  <xs:annotation>
    <xs:documentation>
      <![CDATA[ Sets the persistence adaptor implementation to use for this broker ]]>
    </xs:documentation>
  </xs:annotation>
  <xs:complexType>
    <xs:choice maxOccurs="1" minOccurs="0">
      <xs:element ref="tns:jdbcPersistenceAdapter"/>
      <xs:element ref="tns:journalPersistenceAdapter"/>
      <xs:element ref="tns:kahaDB"/>
      <xs:element ref="tns:levelDB"/>
      <xs:element ref="tns:mKahaDB"/>
      <xs:element ref="tns:memoryPersistenceAdapter"/>
      <xs:element ref="tns:replicatedLevelDB"/>
      <xs:any namespace="##other"/>
    </xs:choice>
  </xs:complexType>
</xs:element>
AMQ
<broker brokerName="broker">
  <persistenceAdapter>
    <amqPersistenceAdapter directory="${activemq.base}/activemq-data" maxFileLength="32mb"/>
  </persistenceAdapter>
</broker>
AMQ stores messages in data logs that are written sequentially (append-only). Because every persistent message is flushed to disk, locating a message by scanning the log would be slow, so AMQ maintains an index over the log and caches it in memory for fast lookup.
AMQ's drawback is that each destination gets its own index, so a broker with many destinations generates a large number of index files, and rebuilding them after a restart is very time-consuming.
KahaDB
<persistenceAdapter>
  <kahaDB directory="${activemq.data}/kahadb"/>
</persistenceAdapter>
Default directory structure:
- db-<n>.log: data log files, named with an incrementing number (db-1.log, db-2.log, …)
- archive: created only when archiving is enabled (disabled by default); holds data logs that are no longer needed
- db.data: stores the BTree index over the data logs
- db.redo: used to rebuild the BTree index after a hard stop of the broker
Compared with AMQ, KahaDB's storage footprint is smaller, and messages for all destinations are stored together in a single journal.
The log file size and other attributes can be configured when necessary.
When a journal file reaches the configured size it is not overwritten; instead a new file with the next incremental number is created.
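For example, the journal file size can be set via journalMaxFileLength (the attribute values below are illustrative, not recommendations):

```xml
<persistenceAdapter>
  <kahaDB directory="${activemq.data}/kahadb"
          journalMaxFileLength="32mb"
          indexWriteBatchSize="1000"
          enableJournalDiskSyncs="true"/>
</persistenceAdapter>
```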
JDBC

- Introduce the mysql driver jar and put it in the lib directory of activemq
- Configure activemq.xml
<broker ...>
  ...
  <persistenceAdapter>
    <!-- <kahaDB directory="${activemq.data}/kahadb"/> -->
    <jdbcPersistenceAdapter dataSource="#mysql-ds"/>
  </persistenceAdapter>
  ...
</broker>

<!-- MySql DataSource Sample Setup -->
<bean id="mysql-ds" class="org.apache.commons.dbcp2.BasicDataSource" destroy-method="close">
  <property name="driverClassName" value="com.mysql.jdbc.Driver"/>
  <!-- NOTE: be sure to include the parameter relaxAutoCommit=true -->
  <property name="url" value="jdbc:mysql://localhost/activemq?relaxAutoCommit=true"/>
  <property name="username" value="activemq"/>
  <property name="password" value="activemq"/>
  <property name="poolPreparedStatements" value="true"/>
</bean>
Several tables are created in the database: ACTIVEMQ_MSGS stores the messages, ACTIVEMQ_ACKS stores acknowledgements for durable subscribers, and ACTIVEMQ_LOCK holds the exclusive lock used for JDBC master-slave.
Multi(m) kahaDB Persistence Adapter
Configuration
Each kahaDB instance can be configured independently. If no filteredKahaDB matches a destination, an implicit default that matches any queue or topic acts as a convenient catch-all. If no matching persistence adapter can be found at all, destination creation fails with an exception. filteredKahaDB shares its wildcard matching rules with per-destination policies. Since ActiveMQ 5.15, filteredKahaDB supports a StoreUsage attribute named usage, which allows an individual disk limit to be imposed on the matching queues.
<broker brokerName="broker">
  <persistenceAdapter>
    <mKahaDB directory="${activemq.base}/data/kahadb">
      <filteredPersistenceAdapters>
        <!-- match all queues -->
        <filteredKahaDB queue=">">
          <usage>
            <storeUsage limit="1g"/>
          </usage>
          <persistenceAdapter>
            <kahaDB journalMaxFileLength="32mb"/>
          </persistenceAdapter>
        </filteredKahaDB>
        <!-- match all destinations -->
        <filteredKahaDB>
          <persistenceAdapter>
            <kahaDB enableJournalDiskSyncs="false"/>
          </persistenceAdapter>
        </filteredKahaDB>
      </filteredPersistenceAdapters>
    </mKahaDB>
  </persistenceAdapter>
</broker>
The filteredKahaDB entry with no destination attribute provides the catch-all match.
Automatic per-destination persistence adapter
Set perDestination="true" on the catch-all filteredKahaDB entry (the one with no explicit destination). Each matching destination is then given its own kahaDB instance.
Catch-all (wildcard) form:
<broker brokerName="broker">
  <persistenceAdapter>
    <mKahaDB directory="${activemq.base}/data/kahadb">
      <filteredPersistenceAdapters>
        <!-- kahaDB per destination -->
        <filteredKahaDB perDestination="true">
          <persistenceAdapter>
            <kahaDB journalMaxFileLength="32mb"/>
          </persistenceAdapter>
        </filteredKahaDB>
      </filteredPersistenceAdapters>
    </mKahaDB>
  </persistenceAdapter>
</broker>
Protocols
Official documents:

Protocol configuration mode
The default configuration file enables a number of protocols, each listening on its own port. You can adjust these to suit your needs.
<!-- The transport connectors expose ActiveMQ over a given protocol to
     clients and other brokers. For more information, see:
     http://activemq.apache.org/configuring-transports.html -->
<transportConnectors>
  <!-- DOS protection, limit concurrent connections to 1000 and frame size to 100MB -->
  <transportConnector name="openwire" uri="tcp://0.0.0.0:61616?maximumConnections=1000&amp;wireFormat.maxFrameSize=104857600"/>
  <transportConnector name="amqp" uri="amqp://0.0.0.0:5672?maximumConnections=1000&amp;wireFormat.maxFrameSize=104857600"/>
  <transportConnector name="stomp" uri="stomp://0.0.0.0:61613?maximumConnections=1000&amp;wireFormat.maxFrameSize=104857600"/>
  <transportConnector name="mqtt" uri="mqtt://0.0.0.0:1883?maximumConnections=1000&amp;wireFormat.maxFrameSize=104857600"/>
  <transportConnector name="ws" uri="ws://0.0.0.0:61614?maximumConnections=1000&amp;wireFormat.maxFrameSize=104857600"/>
</transportConnectors>
In most cases you can keep the defaults, or modify them as needed.
OpenWire
OpenWire is ActiveMQ's default native protocol; it marshals messages into a compact byte-array (binary) format.
AMQP
Server configuration description
<dependency>
  <groupId>org.apache.qpid</groupId>
  <artifactId>qpid-jms-client</artifactId>
  <version>0.37.0</version>
</dependency>
// 1. Create a connection factory
JmsConnectionFactory connectionFactory = new JmsConnectionFactory(null, null, brokerUrl);
The rest of the code then stays the same as with standard JMS.
// 1. Create connection factory
connectionFactory = new JmsConnectionFactory(null, null, brokerUrl);
// 2. Create connection
conn = connectionFactory.createConnection();
conn.start(); // Be sure to start
// 3. Create a session (you can create one or more sessions)
session = conn.createSession(false, Session.AUTO_ACKNOWLEDGE);
// 4. Create a message destination (Topic or Queue)
Destination destination = session.createQueue(destinationUrl);
........
NIO is also supported (for example via the amqp+nio transport).
Note that ActiveMQ implements AMQP version 1.0, while RabbitMQ is based on AMQP 0-9-1.
MQTT
Client usage:
- Introduce the mqtt-client jar
<dependency>
  <groupId>org.fusesource.mqtt-client</groupId>
  <artifactId>mqtt-client</artifactId>
  <version>1.15</version>
</dependency>
- Write the client code (this example uses a fairly low-level API)
public static void main(String[] args) throws Exception {
    MQTT mqtt = new MQTT();
    mqtt.setHost("localhost", 1883);
    // mqtt.setUserName(user);
    // mqtt.setPassword(password);
    FutureConnection connection = mqtt.futureConnection();
    connection.connect().await();

    UTF8Buffer topic = new UTF8Buffer("foo/blah/bar");
    Buffer msg = new AsciiBuffer("mqtt message");
    Future<?> f = connection.publish(topic, msg, QoS.AT_LEAST_ONCE, false);
    f.await();

    connection.disconnect().await();
    System.exit(0);
}
Server
AUTO
Security
Authentication
Simple authentication
<plugins>
  <!-- Configure authentication: usernames, passwords and groups -->
  <simpleAuthenticationPlugin>
    <users>
      <authenticationUser username="system" password="manager" groups="users,admins"/>
      <authenticationUser username="user" password="password" groups="users"/>
      <authenticationUser username="guest" password="password" groups="guests"/>
    </users>
  </simpleAuthenticationPlugin>
</plugins>
ConnectionFactory connectionFactory = new ActiveMQConnectionFactory(username,password,url);
<simpleAuthenticationPlugin anonymousAccessAllowed="true">.....

- Configure the JAAS plug-in in the broker of activemq.xml.
<plugins>
  <!-- use JAAS to authenticate, using the login.config file on the classpath to configure JAAS -->
  <jaasAuthenticationPlugin configuration="activemq"/>
</plugins>
activemq {
    org.apache.activemq.jaas.PropertiesLoginModule required
        org.apache.activemq.jaas.properties.user="users.properties"
        org.apache.activemq.jaas.properties.group="groups.properties";
};
- Configure users in conf/users.properties
# username=password
admin=admin
user=password
guest=guest
- Configure user groups (roles) in conf/groups.properties
# groupname=user1,user2 (group names are custom)
admins=admin
users=admin,user
guests=guest
Permission control

You can add any of these in the configuration
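A typical permission setup uses the authorizationPlugin; the sketch below is illustrative (the USERS.> destination prefix and the group names are assumptions, not required names):

```xml
<plugins>
  <authorizationPlugin>
    <map>
      <authorizationMap>
        <authorizationEntries>
          <!-- admins may read, write and administer every queue -->
          <authorizationEntry queue=">" read="admins" write="admins" admin="admins"/>
          <!-- users may only use queues under the USERS. prefix -->
          <authorizationEntry queue="USERS.>" read="users" write="users" admin="users"/>
          <!-- advisory topics must generally be open to all connected groups -->
          <authorizationEntry topic="ActiveMQ.Advisory.>" read="guests,users,admins"
                              write="guests,users,admins" admin="guests,users,admins"/>
        </authorizationEntries>
      </authorizationMap>
    </map>
  </authorizationPlugin>
</plugins>
```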
ActiveMQ high availability cluster
Master Slave
ActiveMQ provides a master-slave cluster mechanism to achieve high availability.
The master is elected via an exclusive (distributed) lock. All brokers share one copy of the data, so in-flight data is not lost on failover.
- Multiple Broker instances share storage
- Brokers compete for an exclusive lock; the winner becomes the master
- The master node serves clients, while slave nodes block waiting for the exclusive lock
- If the master node fails, non-persistent messages are lost, so persistent messages are generally used
- Clients connect using the failover transport, listing multiple brokers
failover:(tcp://broker1:61616,tcp://broker2:61616,tcp://broker3:61616)

Broker1 restart
- Shared File System Master-Slave: the exclusive lock is a lock file in the shared storage directory
- JDBC Master-Slave: the exclusive lock is a record in the ACTIVEMQ_LOCK table
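For the shared-file-system variant, each broker points its persistence adapter at the same shared directory; a minimal sketch (the path is a placeholder for a shared mount such as NFS or a SAN):

```xml
<persistenceAdapter>
  <kahaDB directory="/sharedFileSystem/sharedBrokerData"/>
</persistenceAdapter>
```

Whichever broker acquires the lock file in that directory first becomes the master; the others block until it is released.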
As mentioned earlier, there is a lock file under the KahaDB directory; that file is what implements this exclusive lock.
Under heavy concurrency this cluster model still falls short: it provides no load balancing, so it is not suitable for scaling throughput.
Distributed queues and topics
Distributed queues and topics are the approach used to address high-concurrency scenarios.
Starting with version 1.1, ActiveMQ supports networks of brokers, which enables distributed queues and topics across a broker network. A client can connect to any broker in the network and fail over to another broker if one fails, so from the client's point of view the network behaves like an HA cluster of brokers.
- Independent brokers are connected to each other;
- The client uses Failover to connect to any Broker;
- A message produced by a client is sent to, and stored on, the broker it is connected to;
- A consumer can connect to any broker to consume the destination's messages.
Distributed queues: store and forward
When a message is published to a queue, it is stored on the broker the publisher is connected to. If that broker is configured to store/forward to other brokers, it dispatches the message to one of its consumers, which may be a local client or another broker, depending on the dispatch algorithm. Dispatch continues until the message is finally delivered to and consumed by a client, at which point it is deleted. At any point in time, an unconsumed message exists in the store of exactly one broker. Note that messages are forwarded to another broker only if that broker has consumers on the queue. For example, with brokers A, B and C and a publisher on A: if there are consumers on A and B, the queue's messages are distributed across A and B; some are forwarded to B, some are consumed on A, and none are sent to C. If a consumer then starts on C, messages begin to flow there; if it stops, no more messages are sent to C.
<broker brokerName="receiver" persistent="false" useJmx="false">
  <networkConnectors>
    <!-- Configure a network connection to additional brokers -->
    <networkConnector uri="static:(tcp://host1:61616,tcp://host2:61616,tcp://..)"/>
  </networkConnectors>
  ...
</broker>
uri="static:(tcp://host1:61616,tcp://host2:61616)?maxReconnectDelay=5000&useExponentialBackOff=false"
multicast dynamic discovery method:
<networkConnectors>
  <networkConnector uri="multicast://default"/>
</networkConnectors>
Enable multicast discovery on every broker:
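The configuration snippet appears to be missing here; presumably it sets a discoveryUri on each broker's transport connector so the broker advertises itself over multicast, along the lines of (port and group name are the defaults, not requirements):

```xml
<transportConnectors>
  <!-- advertise this transport on the "default" multicast discovery group -->
  <transportConnector name="openwire" uri="tcp://0.0.0.0:61616"
                      discoveryUri="multicast://default"/>
</transportConnectors>
```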
Client connection:
failover:(tcp://broker1:61616,tcp://broker2:61616,tcp://broker3:61616)
Weakness of a plain network of brokers: each broker has no backup, so it lacks high availability.
Networks + Master-Slave
High availability can only be achieved by combining the two.
networkConnectors configuration:
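The configuration block appears to be missing here; a minimal sketch (host names are placeholders) using the masterslave: discovery agent, which connects to whichever broker in each master/slave pair currently holds the lock:

```xml
<networkConnectors>
  <!-- one entry per master/slave pair in the network -->
  <networkConnector uri="masterslave:(tcp://host1:61616,tcp://host2:61616)"/>
</networkConnectors>
```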
Client connection:
failover:(tcp://broker1:61616,tcp://broker2:61616,tcp://broker3:61616,...)?randomize=true