RocketMQ NameServer summary and core source code analysis

1. Introduction to NameServer

NameServer is a lightweight name service purpose-built for RocketMQ. It is simple, scalable, and stateless, and NameServer nodes do not communicate with each other. The working principle of an entire RocketMQ cluster is shown in the figure below:

As the figure shows, the RocketMQ architecture has four main parts: Broker, Producer, Consumer and NameServer. The other three all communicate with NameServer:

  • NameServer: a simple Topic routing registry. Its role is similar to ZooKeeper's in Dubbo: it supports dynamic registration and discovery of Brokers.

        It provides two main functions:

  1. Broker management: NameServer accepts registration requests from Brokers and uses the request data as the basis of its routing information. It also runs a heartbeat-based liveness check against each Broker (a Broker is considered dead if no heartbeat arrives within 120s);

  2. Topic routing management: each NameServer holds the routing information of the entire Broker cluster, which Producers and Consumers query in order to deliver and consume messages.

  • Producer: the message-publishing role, which can be deployed as a cluster. It obtains a Topic's routing information from the NameServer cluster, including which queues belong to the Topic and which Brokers those queues live on. (A Producer only sends messages to Master nodes, so it only establishes connections with Masters.)

  • Consumer: the message-consuming role, which can be deployed as a cluster. It obtains a Topic's routing information from the NameServer cluster and connects to the corresponding Brokers to pull and consume messages. (Messages can be pulled from both Master and Slave, so a Consumer establishes connections with both.)

  • Broker: mainly responsible for message storage, delivery and query, as well as ensuring high availability of the service.

2. Why use NameServer?

There are many components available today for service discovery, such as etcd, Consul, ZooKeeper, Nacos, and so on.

So why did RocketMQ develop its own NameServer instead of using one of these open-source components? The reasons are as follows:

  • RocketMQ's architecture only needs a lightweight metadata server that maintains eventual consistency, rather than ZooKeeper's strong-consistency solution. Not depending on yet another piece of middleware also reduces the overall maintenance cost.

  • NameServers are independent of each other and do not communicate with each other. Since each Broker registers its routing information with every NameServer, every NameServer holds the complete routing information. If a single NameServer goes down, Brokers can still synchronize routing information with the remaining NameServers, and Producers and Consumers can still dynamically discover Broker routing information.

3. A look inside NameServer

NameServer's routing data is supplied by Broker registration and processed internally; the users of that data are Producers and Consumers. Next we analyze the core logic (source code) of NameServer's routing data structures, route registration/query, and Broker liveness detection.

3.1 Routing data structure

RouteInfoManager is the core logic class of NameServer. It maintains routing information and provides core functions such as route registration and query. Since routing information lives entirely in the NameServer's memory, the class essentially maintains several HashMaps, guarded by a ReentrantReadWriteLock to protect against concurrent access. A simplified version of the code:

public class RouteInfoManager {
    private static final InternalLogger log = InternalLoggerFactory.getLogger(LoggerName.NAMESRV_LOGGER_NAME);
    // Maximum idle time between NameServer and a Broker, 2 minutes by default.
    // If no heartbeat packet arrives from a Broker within 2 minutes, the connection is closed.
    private final static long BROKER_CHANNEL_EXPIRED_TIME = 1000 * 60 * 2;
    // Read-write lock
    private final ReadWriteLock lock = new ReentrantReadWriteLock();
    // Topic -> queue information; the routing table used for load balancing when sending messages
    private final HashMap<String/* topic */, List<QueueData>> topicQueueTable;
    // Broker name -> basic Broker information (broker name, cluster name, master/slave addresses)
    private final HashMap<String/* brokerName */, BrokerData> brokerAddrTable;
    // Cluster name -> names of the Brokers belonging to that cluster
    private final HashMap<String/* clusterName */, Set<String/* brokerName */>> clusterAddrTable;
    // Live Broker addresses (NameServer replaces this information on every heartbeat packet)
    private final HashMap<String/* brokerAddr */, BrokerLiveInfo> brokerLiveTable;
    // Broker address -> filter server list, used by consumers when pulling messages
    private final HashMap<String/* brokerAddr */, List<String>/* Filter Server */> filterServerTable;
    ...ellipsis...
}
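The locking discipline above is worth making concrete. Below is a minimal standalone sketch (the class and method names are my own, not RocketMQ's): route queries take the shared read lock so they can run concurrently, while registration takes the exclusive write lock before mutating the map.

```java
import java.util.HashMap;
import java.util.List;
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Simplified sketch of RouteInfoManager's guarded-map pattern (illustrative only).
public class SimpleRouteTable {
    private final ReadWriteLock lock = new ReentrantReadWriteLock();
    private final HashMap<String, List<String>> topicQueueTable = new HashMap<>();

    public void register(String topic, List<String> queues) {
        lock.writeLock().lock();   // exclusive: blocks both readers and writers
        try {
            topicQueueTable.put(topic, queues);
        } finally {
            lock.writeLock().unlock();
        }
    }

    public List<String> query(String topic) {
        lock.readLock().lock();    // shared: concurrent queries do not block each other
        try {
            return topicQueueTable.get(topic);
        } finally {
            lock.readLock().unlock();
        }
    }

    public static void main(String[] args) {
        SimpleRouteTable table = new SimpleRouteTable();
        table.register("TopicTest", List.of("broker-a-q0", "broker-a-q1"));
        System.out.println(table.query("TopicTest")); // [broker-a-q0, broker-a-q1]
    }
}
```

This is why a read-heavy workload (route queries from many producers and consumers) does not contend with itself; only registrations serialize.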

You can see the relationship more clearly through the following class diagram:

QueueData properties explained:

/**
 * Queue information
 */
public class QueueData implements Comparable<QueueData> {
    // The name of the Broker to which the queue belongs
    private String brokerName;
    // Number of read queues, default: 16
    private int readQueueNums;
    // Number of write queues, default: 16
    private int writeQueueNums;
    // Read/write permission of the Topic (2 = write, 4 = read, 6 = read/write)
    private int perm;
    /** Synchronous or asynchronous replication -- corresponds to TopicConfig.topicSysFlag
     * {@link org.apache.rocketmq.common.sysflag.TopicSysFlag}
     */
    private int topicSynFlag;
    ...ellipsis...
}
topicQueueTable data format demo (JSON):
{
    "TopicTest":[
        {
            "brokerName":"broker-a",
            "perm":6,
            "readQueueNums":4,
            "topicSynFlag":0,
            "writeQueueNums":4
        }
    ]
}

BrokerData properties explained:

/**
 * Broker data: the Master/Slave relationship is defined by giving the nodes the same brokerName and different brokerIds. brokerId 0 means Master; non-zero means Slave.
 */
public class BrokerData implements Comparable<BrokerData> {
    // Broker cluster name
    private String cluster;
    // brokerName
    private String brokerName;
    // One Master and multiple Slaves can share the same brokerName, so brokerAddrs is a map
    // brokerId = 0 means Master; greater than 0 means Slave
    private HashMap<Long/* brokerId */, String/* broker address */> brokerAddrs;
    // Used to pick a broker address at random
    private final Random random = new Random();
    ...ellipsis...
}
brokerAddrTable data format demo (JSON):
{
    "broker-a":{
        "brokerAddrs":{
            "0":"172.16.62.75:10911"
        },
        "brokerName":"broker-a",
        "cluster":"DefaultCluster"
    }
}

BrokerLiveInfo properties explained:

/**
 * Stores information about live Brokers. This liveness information is not real-time: NameServer scans all Brokers every 10s and judges a Broker's status from the time of its last heartbeat packet.
 * As a result, when a Broker process dies, message producers cannot perceive it immediately and may keep sending messages to it, causing send failures (not highly available).
 */
class BrokerLiveInfo {
    //Last update time
    private long lastUpdateTimestamp;
    //Version number information
    private DataVersion dataVersion;
    //Netty's Channel
    private Channel channel;
    // HA server address: the address a Slave connects to when pulling data from its Master, composed of brokerIp2 + the HA port
    private String haServerAddr;
    ...ellipsis...
 }
brokerLiveTable data format demo (JSON):
 {
    "172.16.62.75:10911":{
        "channel":{
            "active":true,
            "inputShutdown":false,
            "open":true,
            "outputShutdown":false,
            "registered":true,
            "writable":true
        },
        "dataVersion":{
            "counter":2,
            "timestamp":1630907813571
        },
        "haServerAddr":"172.16.62.75:10912",
        "lastUpdateTimestamp":1630907814074
    }
}

clusterAddrTable data format demo (JSON):

{"DefaultCluster":["broker-a"]}

From the HashMaps maintained by RouteInfoManager and the attributes of QueueData, BrokerData and BrokerLiveInfo, we can see that the information NameServer maintains is simple but extremely important.

3.2 Route registration

There are several situations in which a Broker actively registers its routing information:

  1. On startup, it registers with every NameServer in the cluster.

  2. Every 30s, it sends a heartbeat packet (carrying registration data) to every NameServer in the cluster.

  3. When Topic information on the Broker changes (add / modify / delete), it sends a heartbeat packet to re-register.

In all cases, the core processing logic on the NameServer side is the RouteInfoManager#registerBroker method. Source code analysis follows:

RouteInfoManager#registerBroker
public RegisterBrokerResult registerBroker(
    final String clusterName, final String brokerAddr,
    final String brokerName, final long brokerId,
    final String haServerAddr,
    //TopicConfigSerializeWrapper is a complex data structure that mainly contains all topic information on the broker
    final TopicConfigSerializeWrapper topicConfigWrapper,
    final List<String> filterServerList, final Channel channel) {
    RegisterBrokerResult result = new RegisterBrokerResult();
    try {
        try {
            this.lock.writeLock().lockInterruptibly(); // lock
            //1: Maintain clusterAddrTable data here
            Set<String> brokerNames = this.clusterAddrTable.get(clusterName);
            if (null == brokerNames) {
                brokerNames = new HashSet<String>();
                this.clusterAddrTable.put(clusterName, brokerNames);
            }
            brokerNames.add(brokerName);
            //2: The brokerAddrTable data is maintained here
            boolean registerFirst = false;//Whether to register for the first time (if the Topic configuration information changes or the broker is registered for the first time)
            BrokerData brokerData = this.brokerAddrTable.get(brokerName);
            if (null == brokerData) {
                registerFirst = true;
                brokerData = new BrokerData(clusterName, brokerName, new HashMap<Long, String>());
                this.brokerAddrTable.put(brokerName, brokerData);
            }
            //3: The topicQueueTable data is maintained here; the update method is createAndUpdateQueueData
            String oldAddr = brokerData.getBrokerAddrs().put(brokerId, brokerAddr);
            registerFirst = registerFirst || (null == oldAddr);
            if (null != topicConfigWrapper
                && MixAll.MASTER_ID == brokerId) { // Note: only master-node requests are processed, because a slave's topic information is synchronized from its master
                // If the Topic configuration information changes or the broker is registered for the first time
                if (this.isBrokerTopicConfigChanged(brokerAddr, topicConfigWrapper.getDataVersion())
                    || registerFirst) {
                    ConcurrentMap<String, TopicConfig> tcTable = topicConfigWrapper.getTopicConfigTable();
                    if (tcTable != null) {
                        for (Map.Entry<String, TopicConfig> entry : tcTable.entrySet()) {                      
                            this.createAndUpdateQueueData(brokerName, entry.getValue());
                        }
                    }
                }
            }
            //4: Maintain brokerLiveTable data here. Key point: the first argument of the BrokerLiveInfo constructor, System.currentTimeMillis(), is later used for liveness judgment
            BrokerLiveInfo prevBrokerLiveInfo = this.brokerLiveTable.put(brokerAddr,
                new BrokerLiveInfo(
                    System.currentTimeMillis(),
                    topicConfigWrapper.getDataVersion(),
                    channel,
                    haServerAddr));
            if (null == prevBrokerLiveInfo) {
                log.info("new broker registered, {} HAServer: {}", brokerAddr, haServerAddr);
            }
            //5 - maintenance: filterServerTable data
            if (filterServerList != null) {
                if (filterServerList.isEmpty()) {
                    this.filterServerTable.remove(brokerAddr);
                } else {
                    this.filterServerTable.put(brokerAddr, filterServerList);
                }
            }
            //Return value (if the current broker is a slave node), set the information such as haServerAddr and masterAddr into the result return value
            if (MixAll.MASTER_ID != brokerId) {
                String masterAddr = brokerData.getBrokerAddrs().get(MixAll.MASTER_ID);
                if (masterAddr != null) {
                    BrokerLiveInfo brokerLiveInfo = this.brokerLiveTable.get(masterAddr);
                    if (brokerLiveInfo != null) {
                        result.setHaServerAddr(brokerLiveInfo.getHaServerAddr());
                        result.setMasterAddr(masterAddr);
                    }
                }
            }
        } finally {
            this.lock.writeLock().unlock();
        }
    } catch (Exception e) {
        log.error("registerBroker Exception", e);
    }
    return result;
}

remarks:

The createAndUpdateQueueData method simply maintains the topicQueueTable data; it is easy to follow in the source.

As the source shows, a Broker registering its routes with NameServer really just means maintaining clusterAddrTable, brokerAddrTable, topicQueueTable, brokerLiveTable and filterServerTable. The source code really is that simple.
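Steps 1 and 2 of registerBroker can be reduced to a standalone toy (plain HashMaps, no locking; the field names mirror the source, but this is not RocketMQ code) showing how clusterAddrTable and brokerAddrTable are maintained and how registerFirst is derived:

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Set;

// Toy version of registerBroker steps 1-2 (illustrative, not RocketMQ code):
// ensure the cluster -> brokerName set exists, then record brokerId -> address.
public class RegisterDemo {
    static final HashMap<String, Set<String>> clusterAddrTable = new HashMap<>();
    static final HashMap<String, HashMap<Long, String>> brokerAddrTable = new HashMap<>();

    static boolean registerBroker(String cluster, String brokerName, long brokerId, String addr) {
        // step 1: maintain clusterAddrTable
        clusterAddrTable.computeIfAbsent(cluster, k -> new HashSet<>()).add(brokerName);
        // step 2: maintain brokerAddrTable
        HashMap<Long, String> addrs = brokerAddrTable.computeIfAbsent(brokerName, k -> new HashMap<>());
        String oldAddr = addrs.put(brokerId, addr);
        return oldAddr == null; // registerFirst: this brokerId was not known before
    }

    public static void main(String[] args) {
        boolean first = registerBroker("DefaultCluster", "broker-a", 0L, "172.16.62.75:10911");
        boolean again = registerBroker("DefaultCluster", "broker-a", 0L, "172.16.62.75:10911");
        System.out.println(first + " " + again);   // true false
        System.out.println(clusterAddrTable);      // {DefaultCluster=[broker-a]}
    }
}
```

The second call returns false because the address for brokerId 0 was already present, which is exactly how the real code decides whether topic data must be rebuilt.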

3.3 Route deletion

Routes are deleted in two ways: either the Broker actively reports its deregistration, or the NameServer removes the Broker on its own. The processing logic on the NameServer side differs slightly between the two, but both are easy to follow. Analysis:

1. Broker-initiated deregistration: when a Broker shuts down normally, it executes the unregisterBroker command and sends a deregistration request to the NameServer. The core source code:

RouteInfoManager#unregisterBroker
public void unregisterBroker(
    final String clusterName, final String brokerAddr,
    final String brokerName, final long brokerId) {
    try {
        try {
            this.lock.writeLock().lockInterruptibly();
            //1 - directly delete the brokerLiveTable information without judging the time
            BrokerLiveInfo brokerLiveInfo = this.brokerLiveTable.remove(brokerAddr);
            log.info("unregisterBroker, remove from brokerLiveTable {}, {}",
                brokerLiveInfo != null ? "OK" : "Failed",
                brokerAddr
            );
            //2 - delete filterServerTable information
            this.filterServerTable.remove(brokerAddr);
            //3 - maintain and delete brokerAddrTable information
            boolean removeBrokerName = false;
            BrokerData brokerData = this.brokerAddrTable.get(brokerName);
            if (null != brokerData) {
                String addr = brokerData.getBrokerAddrs().remove(brokerId);
                log.info("unregisterBroker, remove addr from brokerAddrTable {}, {}",
                    addr != null ? "OK" : "Failed",
                    brokerAddr
                );
                if (brokerData.getBrokerAddrs().isEmpty()) {
                    this.brokerAddrTable.remove(brokerName);
                    log.info("unregisterBroker, remove name from brokerAddrTable OK, {}",
                        brokerName
                    );
                    removeBrokerName = true;
                }
            }
            //4 - maintain and delete clusterAddrTable information
            if (removeBrokerName) {
                Set<String> nameSet = this.clusterAddrTable.get(clusterName);
                if (nameSet != null) {
                    boolean removed = nameSet.remove(brokerName);
                    log.info("unregisterBroker, remove name from clusterAddrTable {}, {}",
                        removed ? "OK" : "Failed",
                        brokerName);
​
                    if (nameSet.isEmpty()) {
                        this.clusterAddrTable.remove(clusterName);
                        log.info("unregisterBroker, remove cluster from clusterAddrTable {}",
                            clusterName
                        );
                    }
                }
                //5 - maintain and delete topicQueueTable information
                this.removeTopicByBrokerName(brokerName);
            }
        } finally {
            this.lock.writeLock().unlock();
        }
    } catch (Exception e) {
        log.error("unregisterBroker Exception", e);
    }
}

remarks:

The removeTopicByBrokerName method simply removes that Broker's data from topicQueueTable.

2. NameServer-initiated deletion: NameServer scans the brokerLiveTable at a fixed interval (every 10s) and checks the gap between each Broker's last heartbeat and the current system time. If the gap exceeds 120s, the Broker's information is removed. The core source code:

RouteInfoManager#scanNotActiveBroker
public void scanNotActiveBroker() {
    Iterator<Entry<String, BrokerLiveInfo>> it = this.brokerLiveTable.entrySet().iterator();
    while (it.hasNext()) {
        Entry<String, BrokerLiveInfo> next = it.next();
        long last = next.getValue().getLastUpdateTimestamp();
        //1- BROKER_CHANNEL_EXPIRED_TIME, default (1000 * 60 * 2) 120s, judge whether it exceeds 120s
        if ((last + BROKER_CHANNEL_EXPIRED_TIME) < System.currentTimeMillis()) {
            RemotingUtil.closeChannel(next.getValue().getChannel());
            it.remove();
            log.warn("The broker channel expired, {} {}ms", next.getKey(), BROKER_CHANNEL_EXPIRED_TIME);
            this.onChannelDestroy(next.getKey(), next.getValue().getChannel());
        }
    }
}
public void onChannelDestroy(String remoteAddr, Channel channel) {
    String brokerAddrFound = null;
    if (channel != null) {
        try {
            try {  //1 - query the broker information to be deleted
                this.lock.readLock().lockInterruptibly();
                Iterator<Entry<String, BrokerLiveInfo>> itBrokerLiveTable =
                    this.brokerLiveTable.entrySet().iterator();
                while (itBrokerLiveTable.hasNext()) {
                    Entry<String, BrokerLiveInfo> entry = itBrokerLiveTable.next();
                    if (entry.getValue().getChannel() == channel) {
                        brokerAddrFound = entry.getKey();
                        break;
                    }
                }
            } finally {
                this.lock.readLock().unlock();
            }
        } catch (Exception e) {
            log.error("onChannelDestroy Exception", e);
        }
    }
    if (null == brokerAddrFound) {
        brokerAddrFound = remoteAddr;
    } else {
        log.info("the broker's channel destroyed, {}, clean it's data structure at once", brokerAddrFound);
    }
    if (brokerAddrFound != null && brokerAddrFound.length() > 0) {
        try {
            try {
                this.lock.writeLock().lockInterruptibly();
                this.brokerLiveTable.remove(brokerAddrFound); //2 - maintain and delete brokerLiveTable information
                this.filterServerTable.remove(brokerAddrFound); //3 - maintain and delete filterServerTable information
                String brokerNameFound = null;
                boolean removeBrokerName = false;
                Iterator<Entry<String, BrokerData>> itBrokerAddrTable =
                    this.brokerAddrTable.entrySet().iterator(); //4 - maintain and delete brokerAddrTable information
                while (itBrokerAddrTable.hasNext() && (null == brokerNameFound)) {
                    BrokerData brokerData = itBrokerAddrTable.next().getValue();
                    Iterator<Entry<Long, String>> it = brokerData.getBrokerAddrs().entrySet().iterator();
                    while (it.hasNext()) {
                        Entry<Long, String> entry = it.next();
                        Long brokerId = entry.getKey();
                        String brokerAddr = entry.getValue();
                        if (brokerAddr.equals(brokerAddrFound)) {
                            brokerNameFound = brokerData.getBrokerName();
                            it.remove();
                            log.info("remove brokerAddr[{}, {}] from brokerAddrTable, because channel destroyed",
                                brokerId, brokerAddr);
                            break;
                        }
                    }
                    if (brokerData.getBrokerAddrs().isEmpty()) {
                        removeBrokerName = true;
                        itBrokerAddrTable.remove();
                        log.info("remove brokerName[{}] from brokerAddrTable, because channel destroyed",
                            brokerData.getBrokerName());
                    }
                }
                if (brokerNameFound != null && removeBrokerName) {
                    Iterator<Entry<String, Set<String>>> it = this.clusterAddrTable.entrySet().iterator(); // 5 - maintain and delete clusterAddrTable information
                    while (it.hasNext()) {
                        Entry<String, Set<String>> entry = it.next();
                        String clusterName = entry.getKey();
                        Set<String> brokerNames = entry.getValue();
                        boolean removed = brokerNames.remove(brokerNameFound);
                        if (removed) {
                            log.info("remove brokerName[{}], clusterName[{}] from clusterAddrTable, because channel destroyed",
                                brokerNameFound, clusterName);
​
                            if (brokerNames.isEmpty()) {
                                log.info("remove the clusterName[{}] from clusterAddrTable, because channel destroyed and no broker in this cluster",
                                    clusterName);
                                it.remove();
                            }
​
                            break;
                        }
                    }
                }
                if (removeBrokerName) {
                    Iterator<Entry<String, List<QueueData>>> itTopicQueueTable =
                        this.topicQueueTable.entrySet().iterator(); // 6 - maintain and delete topicQueueTable information
                    while (itTopicQueueTable.hasNext()) {
                        Entry<String, List<QueueData>> entry = itTopicQueueTable.next();
                        String topic = entry.getKey();
                        List<QueueData> queueDataList = entry.getValue();
​
                        Iterator<QueueData> itQueueData = queueDataList.iterator();
                        while (itQueueData.hasNext()) {
                            QueueData queueData = itQueueData.next();
                            if (queueData.getBrokerName().equals(brokerNameFound)) {
                                itQueueData.remove();
                                log.info("remove topic[{} {}], from topicQueueTable, because channel destroyed",
                                    topic, queueData);
                            }
                        }
                        if (queueDataList.isEmpty()) {
                            itTopicQueueTable.remove();
                            log.info("remove topic[{}] all queue, from topicQueueTable, because channel destroyed",
                                topic);
                        }
                    }
                }
            } finally {
                this.lock.writeLock().unlock();
            }
        } catch (Exception e) {
            log.error("onChannelDestroy Exception", e);
        }
    }
}

As the source shows, both ways of deregistering a Broker from NameServer boil down to deleting the relevant entries from clusterAddrTable, brokerAddrTable, topicQueueTable, brokerLiveTable and filterServerTable.
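The expiry check at the heart of scanNotActiveBroker can be reproduced as a standalone sketch (not RocketMQ code; the table is simplified to address -> last-heartbeat timestamp, and the current time is passed in to keep the example deterministic):

```java
import java.util.HashMap;
import java.util.Iterator;
import java.util.Map;

// Minimal sketch of the scanNotActiveBroker idea (illustrative only):
// entries whose last heartbeat is older than 120s are dropped from the live table.
public class ExpireScanDemo {
    static final long BROKER_CHANNEL_EXPIRED_TIME = 1000 * 60 * 2; // 120s

    static void scan(Map<String, Long> brokerLiveTable, long now) {
        Iterator<Map.Entry<String, Long>> it = brokerLiveTable.entrySet().iterator();
        while (it.hasNext()) {
            Map.Entry<String, Long> e = it.next();
            if (e.getValue() + BROKER_CHANNEL_EXPIRED_TIME < now) {
                // the real code also closes the Netty channel and calls onChannelDestroy here
                it.remove();
            }
        }
    }

    public static void main(String[] args) {
        Map<String, Long> live = new HashMap<>();
        long now = System.currentTimeMillis();
        live.put("172.16.62.75:10911", now - 1_000);   // fresh heartbeat, kept
        live.put("172.16.62.76:10911", now - 130_000); // older than 120s, removed
        scan(live, now);
        System.out.println(live.keySet()); // [172.16.62.75:10911]
    }
}
```

Note the use of Iterator.remove() rather than Map.remove() inside the loop, the same pattern the real code uses to avoid ConcurrentModificationException.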

3.4 Route discovery

RocketMQ route discovery is not real-time. When a Topic's routes change, NameServer does not actively push updates to clients; instead, producers and consumers periodically pull the latest Topic routes from NameServer. The core source code:

RouteInfoManager#pickupTopicRouteData
public TopicRouteData pickupTopicRouteData(final String topic) {
    TopicRouteData topicRouteData = new TopicRouteData();
    boolean foundQueueData = false;
    boolean foundBrokerData = false;
    Set<String> brokerNameSet = new HashSet<String>();
    List<BrokerData> brokerDataList = new LinkedList<BrokerData>();
    topicRouteData.setBrokerDatas(brokerDataList);
    HashMap<String, List<String>> filterServerMap = new HashMap<String, List<String>>();
    topicRouteData.setFilterServerTable(filterServerMap);
    try {
        try {
            this.lock.readLock().lockInterruptibly();
            List<QueueData> queueDataList = this.topicQueueTable.get(topic);
            if (queueDataList != null) {
                topicRouteData.setQueueDatas(queueDataList);
                foundQueueData = true;
                Iterator<QueueData> it = queueDataList.iterator();
                while (it.hasNext()) {
                    QueueData qd = it.next();
                    brokerNameSet.add(qd.getBrokerName());
                }
                // Processing and building: BrokerData data
                for (String brokerName : brokerNameSet) {
                    BrokerData brokerData = this.brokerAddrTable.get(brokerName);
                    if (null != brokerData) {
                        BrokerData brokerDataClone = new BrokerData(brokerData.getCluster(), brokerData.getBrokerName(), (HashMap<Long, String>) brokerData
                            .getBrokerAddrs().clone());
                        brokerDataList.add(brokerDataClone);
                        foundBrokerData = true;
                        for (final String brokerAddr : brokerDataClone.getBrokerAddrs().values()) {
                            List<String> filterServerList = this.filterServerTable.get(brokerAddr);
                            filterServerMap.put(brokerAddr, filterServerList);
                        }
                    }
                }
            }
        } finally {
            this.lock.readLock().unlock();
        }
    } catch (Exception e) {
        log.error("pickupTopicRouteData Exception", e);
    }
    log.debug("pickupTopicRouteData {} {}", topic, topicRouteData);
    if (foundBrokerData && foundQueueData) {
        return topicRouteData;
    }
    return null;
}

remarks:

This code needs little commentary: it queries maps such as topicQueueTable, brokerAddrTable and filterServerTable, assembles the results into a TopicRouteData, and returns it to the client.

Below are the properties of TopicRouteData; you will find it is equally simple:

public class TopicRouteData extends RemotingSerializable {
    //The configuration of topic sorting is related to the NameSpace "ORDER_TOPIC_CONFIG". Refer to DefaultRequestProcessor#getRouteInfoByTopic for details
    private String orderTopicConf;
    // topic queue metadata
    private List<QueueData> queueDatas;
    // topic distributed broker metadata
    private List<BrokerData> brokerDatas;
    // Filter server address list on broker
    private HashMap<String/* brokerAddr */, List<String>/* Filter Server */> filterServerTable;
    ...ellipsis...
}
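To make pickupTopicRouteData's join concrete, here is a standalone toy (not RocketMQ code; QueueData is simplified to just a broker name) that resolves a topic to its brokers' addresses:

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

// Toy version of pickupTopicRouteData's join logic (illustrative only):
// look up the topic's queue data, collect the broker names it references,
// then attach each broker's address map from brokerAddrTable.
public class PickupDemo {
    public static void main(String[] args) {
        Map<String, List<String>> topicQueueTable = new HashMap<>(); // topic -> brokerNames
        topicQueueTable.put("TopicTest", List.of("broker-a"));

        Map<String, Map<Long, String>> brokerAddrTable = new HashMap<>(); // brokerName -> (brokerId -> addr)
        brokerAddrTable.put("broker-a", Map.of(0L, "172.16.62.75:10911"));

        // step 1: queue data for the topic
        List<String> queueBrokers = topicQueueTable.get("TopicTest");
        // step 2: dedupe broker names, then resolve each to its addresses
        Set<String> brokerNames = new HashSet<>(queueBrokers);
        Map<String, Map<Long, String>> brokerDatas = new HashMap<>();
        for (String name : brokerNames) {
            Map<Long, String> addrs = brokerAddrTable.get(name);
            if (addrs != null) {
                brokerDatas.put(name, addrs);
            }
        }
        System.out.println(brokerDatas); // {broker-a={0=172.16.62.75:10911}}
    }
}
```

The real method additionally clones the BrokerData it returns (so callers never hold references into the locked tables) and attaches the filter server list per broker address.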

NameServer actually provides many other functions and methods, such as:

  • getBrokerClusterInfo (get cluster information)

  • getAllTopicListFromNameServer (get all Topics)

But most of them revolve around the same few HashMaps: clusterAddrTable, brokerAddrTable, topicQueueTable, brokerLiveTable and filterServerTable.

4. Conclusion

To summarize from a functional perspective: as the "brain" of RocketMQ, NameServer stores the cluster's routing information, specifically recording and maintaining Topic and Broker information, monitoring Brokers' running status, and providing routing capability to clients. To summarize from a source-code perspective: NameServer maintains several HashMaps; Broker registration and client queries all revolve around operations on these Maps, with a ReentrantReadWriteLock (read-write lock) added to handle concurrency. This article only covers some of NameServer's key code; much more, such as the NameServer startup process, is worth analyzing and learning. Finally, let's review NameServer once more through a summary diagram!

5. Questions

Have you noticed the following defect in NameServer?

Suppose a Broker goes down abnormally. NameServer waits at least 120s before removing the Broker from its routing information. During that failure window, the routing information a Producer obtains for a Topic still contains the dead Broker, which causes message sends to fail for a short period. What should be done about this? Doesn't it break high availability of message sending? Does it affect message consumption?

Keep these questions in mind; they will be answered one by one in the upcoming Producer and Consumer articles.



Tags: Java Distribution Middleware message queue RocketMQ

Posted on Sun, 31 Oct 2021 19:43:07 -0400 by prawn_86