High-frequency interview topic: distributed transactions, a comprehensive analysis of theory and practice


Distributed transactions have always been a headache, but they are also a high-frequency interview topic. Many candidates stumble on them and miss out on offers. This article takes you through distributed transactions from shallow to deep, combining theory and practice to help you fully master the subject and impress your interviewer!

Through this article, you can master the following contents:

  • Understand what distributed transactions are and why they arise

  • Master several distributed transaction solutions: XA, TCC, message transactions and AT

  • Master the advantages, disadvantages and usage scenarios of each solution

  • Learn to use Seata to solve distributed transactions

Now let's get to the point!

What is a distributed transaction

To understand distributed transactions, you must first understand local transactions.

1 Local transactions

A local transaction is a traditional single-machine database transaction, which must satisfy the ACID properties:

1.1 Atomicity (A)

Atomicity means that all operations in a transaction form a single unit: either all complete or none do, with no intermediate state. If an error occurs during execution, all operations are rolled back and the transaction behaves as if it had never run.

1.2 Consistency (C)

Consistency means that data integrity is preserved from the start of the transaction to its end. For example, A has 100 yuan and B has 100 yuan. If A transfers 50 yuan to B in a transaction, then no matter what happens, the sum of A's and B's balances must remain 200 yuan; otherwise consistency is violated. If A loses 50 yuan but B does not gain 50 yuan, that is clearly unacceptable!

1.3 Isolation (I)

Isolation means that transactions do not affect each other: the intermediate state of one transaction cannot be perceived by other transactions. Databases provide four isolation levels:

Read Uncommitted

Read Committed

Repeatable Read

Serializable


1.4 Durability (D)

Once a transaction is committed, its changes to the data are permanently saved in the database, even in the event of power failure or system crash. This is durability.

In traditional projects, the system is typically deployed as a single node: one server and one database. In this case, the database's own transaction mechanism guarantees ACID; such a transaction is a local transaction.

Taking MySQL as an example of local transactions: atomicity and durability are implemented via the undo log and redo log.

1.5 undo log and redo log

In a database system, there are two kinds of transaction-related logs: the undo log and the redo log.

1.5.1 undo log

Transactions are atomic: if execution fails, the data must be rolled back. Atomicity is implemented with the undo log.

To guarantee atomicity, the database backs up data to the undo log before modifying it, then performs the modification. If an error occurs or the user executes a rollback statement, the system uses the undo log backup to restore the data to its state before the transaction began.

1.5.2 redo log

In contrast to the undo log, the redo log records a backup of the new data.

The undo+redo transaction execution flow:

Suppose there are two records, A and B, with values 0 and 1 respectively, to be changed to 1 and 2:

  1. Start transaction

  2. Record A=0 to undo log buffer

  3. Modify A=1 to memory

  4. Record A=1 to redo log buffer

  5. Record B=1 to undo log buffer

  6. Modify B=2 to memory

  7. Record B=2 to redo log buffer

  8. Write undo log to disk

  9. Write redo log to disk

  10. Commit transaction

In the steps above, if any step fails, the undo log can be used to roll back, guaranteeing atomicity. Meanwhile, writing the logs to disk guarantees durability!

Before the transaction commits, only the redo log needs to be persisted; the data itself does not have to be written to disk, which reduces the number of IOs.

Because the data has already been written to memory, the data we read is correct even if it has not yet been flushed to disk after the commit. After the commit, the database asynchronously flushes the in-memory data to disk (a fixed flush frequency can also be configured).

To sum up: the undo log records the pre-update data to guarantee atomicity, and the redo log records the post-update data to guarantee durability.
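The 10-step flow above can be sketched in a few lines. This is a language-agnostic illustration, not MySQL internals: plain dicts and lists stand in for the data pages and log buffers, and the `disk` dict stands in for flushing the logs.

```python
def run_transaction(data, updates):
    """Apply updates atomically, rolling back via the undo log on failure."""
    undo_log, redo_log = [], []                 # in-memory log buffers
    try:
        for key, new_value in updates.items():
            undo_log.append((key, data[key]))   # back up old value to undo buffer
            data[key] = new_value               # modify the in-memory data
            redo_log.append((key, new_value))   # record new value to redo buffer
        disk = {"undo": list(undo_log), "redo": list(redo_log)}  # "flush" logs
        return True                             # commit
    except KeyError:
        for key, old_value in reversed(undo_log):
            data[key] = old_value               # roll back using the undo log
        return False

db = {"A": 0, "B": 1}
run_transaction(db, {"A": 1, "B": 2})   # db is now {"A": 1, "B": 2}
```

If any step fails partway through (here, a missing key), the undo entries already written are replayed in reverse, restoring the original state, which is exactly the atomicity guarantee described above.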

2 distributed transaction

A distributed transaction is a transaction that arises outside the single-service, single-database architecture.

When a single database and table can no longer support the business data volume, we shard into multiple databases and tables, producing distributed transactions across data sources.

As the business grows, a monolithic architecture gradually becomes a bottleneck. The need to reduce coupling and improve scalability grows stronger, so the monolith is split into multiple business systems. Each system focuses on its own business with lower coupling between them, which in turn produces cross-service distributed transactions.

In real production environments, distributed transactions often span both data sources and services. Take the most common flow in e-commerce systems, order payment, which includes the following actions:

  • Create order

  • Deduct commodity inventory

  • Deduct amount from account balance

To complete the above operations, you need to access three different services and databases:

In a distributed environment, some operations may succeed while others fail. For example, the order is created and the inventory is deducted, but the user's account balance is insufficient, leaving the data inconsistent.

Order creation, inventory deduction and account deduction are each local transactions within their own service and database, so each can guarantee ACID on its own.

But when we treat the three as a whole, we must guarantee the atomicity of the "business": either all operations succeed or all fail; partial success is not allowed. This is a transaction in a distributed system.

At this point, ACID is hard to satisfy. This is the problem distributed transactions must solve!

Several distributed transaction solutions

Before discussing the solutions, we must mention the CAP theorem and BASE theory, which explain why it is difficult to satisfy ACID in a distributed system.

1 CAP theorem

In 1998, Eric Brewer, a computer scientist at the University of California, Berkeley, pointed out that a distributed system has three properties:

  • Consistency

  • Availability

  • Partition tolerance

Their initials are C, A and P. Brewer stated that these three cannot all be satisfied at the same time; at most two can hold simultaneously. This conclusion is known as the CAP theorem:

1.1 Partition tolerance

Most distributed systems are deployed across multiple subnetworks, each called a partition. Partition tolerance means the system keeps working even when communication between partitions fails. For example, one server in Beijing and another in Shanghai belong to two partitions and may be unable to communicate due to network problems.

In general, partition tolerance is mandatory for a distributed system, so P in CAP always holds, and we can only choose one of the remaining C and A.

1.2 Consistency

Consistency means that a read following a write must return the newly written value. For example, suppose a record's value is v0 and a user initiates a write on node G1 changing it to v1. Afterwards, no matter which node the user reads the record from, the returned value should be v1; otherwise consistency is violated.

1.3 Availability

Availability means that as long as the service receives a user's request, it must respond, right or wrong. A user may send a read request to node G1 or G2; G1 or G2 must tell the user whether the value is v0 or v1, otherwise availability is violated.

1.4 contradiction between consistency and availability

Why can't consistency and availability hold at the same time? The answer is simple: communication may fail or be delayed.

To guarantee G2's consistency, G1 must lock G2's reads and writes during its write operation; reads and writes reopen only after data synchronization completes. While locked, G2 can neither read nor write, so it is not available.

Conversely, to guarantee G2's availability, G2 cannot be locked, so consistency does not hold.

In short, G2 cannot achieve consistency and availability at the same time. A system design can pursue only one goal: pursuing consistency means the availability of all nodes cannot be guaranteed; pursuing the availability of all nodes means consistency cannot be achieved.

2 BASE theory

BASE here is the abbreviation of three English words:

  • Basically Available

  • Soft state

  • Eventually consistent

BASE theory guides how we solve distributed transactions. Take the order service, inventory service and user service, together with their databases, as the three parts of a distributed application.

  • CP mode: to achieve strong transaction consistency, the order service must lock the data resources of the inventory service and user service databases at the same time as its own, releasing them only after all three services have finished processing. Other requests that touch the locked resources are blocked in the meantime. This satisfies CP: strong consistency, weak availability.

  • AP mode: the three services' databases each execute their own business and local transactions independently, without locking each other's resources. In the intermediate state we may read inconsistent data, so compensating measures are needed to make the data consistent after some time. This is high availability with weak consistency (eventual consistency).

From the above two ideas, many distributed transaction solutions have been extended:

  • XA

  • TCC

  • Reliable message eventual consistency

  • AT

2.1 XA (two-phase commit)

One distributed transaction solution is the Two-Phase Commit protocol (2PC). So what is it? In 1994, the X/Open organization (now the Open Group) defined the DTP model for distributed transaction processing. The model includes the following roles:

  • Application (AP): our microservices

  • Transaction manager (TM): global transaction manager

  • Resource Manager (RM): typically a database

  • Communication resource manager (CRM): it is the communication middleware between TM and RM

In this model, a distributed transaction (global transaction) is split into many local transactions running on different APs and RMs. Each local transaction implements ACID well, but the global transaction must ensure that all its local transactions succeed together: if any one fails, all the others must roll back. The problem is that a local transaction does not know how the other transactions are doing, so the CRM must notify each local transaction to synchronize execution status.

Therefore, communication between local transactions needs a unified standard; otherwise databases from different vendors could not interoperate. XA is the interface specification between the communication middleware and the TM in X/Open DTP. It defines interfaces for transaction start, commit, termination and rollback, which all database vendors must implement.

Thus the two-phase commit protocol came into being: the global transaction is executed in two phases:

  • Phase 1 (prepare): each local transaction completes its preparation work.

  • Phase 2 (commit/rollback): each local transaction commits or rolls back according to the results of the previous phase.

The whole process requires a coordinator and voters (the participants).

Normal case:

Voting phase: the coordinator asks each participant whether the transaction can be executed. Each participant executes the transaction, writes the redo and undo logs, and reports that execution succeeded (agree).

Commit phase: the coordinator sees that every participant can execute the transaction (agree), so it sends a commit instruction to each participant, and each participant commits its transaction.

Abnormal case:

Voting phase: the coordinator asks each participant whether the transaction can be executed. Each participant executes the transaction, writes the redo and undo logs, and reports its result. If even one participant returns disagree, execution fails.

Commit phase: the coordinator sees that one or more participants returned disagree and concludes execution has failed, so it sends an abort instruction to each participant, and each participant rolls back its transaction.
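The voting and commit phases above can be sketched as a toy simulation. The `Participant` class and the "agree"/"disagree" strings are illustrative stand-ins for real prepare/vote messages, not any actual XA API.

```python
class Participant:
    """A transaction participant (voter) in the 2PC protocol."""
    def __init__(self, name, can_commit):
        self.name, self.can_commit = name, can_commit
        self.state = "init"

    def prepare(self):
        # Phase 1: execute the transaction, write redo/undo logs, then vote.
        self.state = "prepared"
        return "agree" if self.can_commit else "disagree"

    def commit(self):
        self.state = "committed"

    def rollback(self):
        self.state = "rolled_back"

def two_phase_commit(participants):
    """The coordinator's view of the protocol."""
    votes = [p.prepare() for p in participants]       # voting phase
    if all(v == "agree" for v in votes):
        for p in participants:                        # commit phase: all agreed
            p.commit()
        return "committed"
    for p in participants:                            # abort: someone disagreed
        p.rollback()
    return "rolled_back"
```

Note how a single "disagree" vote forces every participant, including the ones that voted "agree", to roll back; this is exactly the all-or-nothing behavior the protocol exists to provide.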

Drawbacks of two-phase commit:

  • Single point of failure: suppose the coordinator and voter3 both crash during the commit phase while voter1 and voter2 have not yet received the commit message. voter1 and voter2 are now stuck: they cannot tell whether the last round of voting passed unanimously (voter3 received the commit first and crashed after committing) or whether voter3's vote failed.

  • Blocking: during the prepare and commit phases, each participant locks local resources and waits for the results of the other transactions. Blocking time is long and resources stay locked too long, so execution efficiency is relatively low.

Usage scenarios:

Strong consistency requirements, insensitivity to execution efficiency, and a desire to avoid code intrusion. However, because of 2PC's blocking and resource-locking problems, and because XA support is mature in commercial databases but not ideal in MySQL, it is rarely used in practice.

2.2 TCC

TCC mode solves the resource locking and blocking problems of 2PC and shortens resource lock time. It is essentially a compensation-based approach.

The transaction flow involves three methods:

  • Try: detect and reserve resources

  • Confirm: commit the business operation; Try and Confirm are expected to succeed

  • Cancel: on failure, release the reserved resources

Execution is divided into two phases:

  • try: resource detection and reservation

  • confirm/cancel: chosen according to the results of the previous step: if every participant's try succeeded, execute confirm; otherwise execute cancel

Take the balance deduction from the earlier order business as an example to see how the three methods would be written. Suppose account A's original balance is 100 and 30 yuan must be deducted. As shown in the figure:
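A minimal sketch of the three methods for this example. The account record is a plain dict with a `frozen` field holding the reserved amount; the method names and storage layout are illustrative assumptions, not the Seata TCC API.

```python
def try_deduct(acct, amount):
    """Try: check the resource and reserve it; nothing is deducted yet."""
    if acct["balance"] - acct["frozen"] < amount:
        raise ValueError("insufficient balance")
    acct["frozen"] += amount          # reserve 30 out of the available balance

def confirm_deduct(acct, amount):
    """Confirm: actually deduct and release the reservation."""
    acct["balance"] -= amount
    acct["frozen"] -= amount

def cancel_deduct(acct, amount):
    """Cancel: compensation, just release the reserved amount."""
    acct["frozen"] -= amount
```

After `try_deduct` the account is `{"balance": 100, "frozen": 30}`: the local transaction has committed and its lock is released, yet the 30 yuan cannot be spent twice. `confirm_deduct` leaves `{"balance": 70, "frozen": 0}`, while `cancel_deduct` restores `{"balance": 100, "frozen": 0}`.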

Advantages and disadvantages:

Advantages: at each phase of TCC, local transactions commit and locks are released without waiting for the results of other transactions. If another transaction fails, a compensation operation is performed instead of a rollback. This avoids long resource locking and blocking waits, so execution efficiency is relatively high; TCC is one of the better-performing distributed transaction modes.

Disadvantages: code intrusion: try, confirm and cancel must be written by hand, which is quite intrusive; high development cost: one business action must be split into three implementations, which is complex; safety concerns: if cancel fails, resources cannot be released, so a retry mechanism is needed, and retries may execute repeatedly, which requires idempotency handling.

Usage scenarios: a certain level of consistency is required (eventual consistency); performance requirements are high; developers have strong coding skills and experience with idempotent processing. In practice, it is not commonly used.

2.3 reliable message service

This approach originated at eBay. The basic idea is to split a remote distributed transaction into a series of local transactions.

It is generally divided into initiator A of the transaction and other participants B of the transaction:

  • Transaction initiator A executes A local transaction

  • Transaction initiator A sends the transaction information to be executed to transaction participant B through MQ

  • Transaction participant B executes the local transaction after receiving the message

Several precautions:

  • Transaction initiator A must ensure that the message will be sent successfully after the local transaction is successful

  • MQ must ensure that messages are delivered correctly and persisted

  • Transaction participant B must ensure that the message can eventually be consumed. If it fails, it needs to retry multiple times

  • If transaction B fails to execute, it will retry, but it will not cause transaction A to roll back

So how do we ensure the message is sent successfully, and that consumers will receive it? The answer is a local message table that persists messages to the database:

Transaction initiator:

  • Start local transaction

  • Execute transaction related business

  • Send message to MQ

  • Persist the message to the database and mark it as sent

  • Commit local transaction

Transaction recipient:

  • receive messages

  • Start local transaction

  • Handle transaction related business

  • Modify database message status to consumed

  • Commit local transaction

An additional scheduled task:

  • Periodically scan the table for messages that have timed out without being consumed, and resend them
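The initiator, recipient and scheduled task above can be sketched with in-memory stand-ins: a dict for the local message table, a list for MQ. In a real system the message row is written in the same local transaction as the business data; the names here are illustrative.

```python
import time

db_messages = {}     # local message table: id -> {"status", "sent_at", "payload"}
mq = []              # stands in for the message broker

def initiator_execute(msg_id, payload):
    # Business write and message row committed together (one local transaction),
    # then the message is handed to MQ and marked as sent.
    db_messages[msg_id] = {"status": "sent", "sent_at": time.time(),
                           "payload": payload}
    mq.append((msg_id, payload))

def participant_consume():
    msg_id, payload = mq.pop(0)
    # ... handle the transaction-related business here ...
    db_messages[msg_id]["status"] = "consumed"   # mark as consumed

def rescan_and_resend(timeout_seconds):
    # Scheduled task: resend messages not consumed within the timeout.
    now = time.time()
    for msg_id, row in db_messages.items():
        if row["status"] == "sent" and now - row["sent_at"] > timeout_seconds:
            mq.append((msg_id, row["payload"]))  # resend
            row["sent_at"] = now
```

Because the resend may deliver the same message twice, the consumer side must be idempotent, which is exactly the disadvantage noted below.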

Advantages and disadvantages:

Advantages: compared with TCC, the implementation is relatively simple and development cost is low.

Disadvantages: data consistency depends entirely on the message service, which must therefore be reliable; the passive party must handle idempotency; a failure on the passive side does not roll back the active side, it only retries the passive side; the transaction business is coupled with message sending, and the business data and the message table must live together.

To solve these problems, we introduce an independent message service to handle message persistence, sending, confirmation, failure retry and so on. The general model is as follows:

The sequence diagram is as follows:

Basic execution steps of transaction initiator A:

  • Start local transaction

  • Notify the message service to prepare to send the message (the message service persists the message and marks it as ready to send)

  • Execute the local business: on failure, terminate and notify the message service to cancel sending (the message service updates the message status); on success, continue and notify the message service to confirm sending (the message service sends the message and updates the message status)

  • Commit local transaction

The message service itself provides the following interfaces:

  • Ready to send: persist the message to the database and mark the status as ready to send

  • Cancel sending: change the database message status to cancel

  • Confirm sending: change the message status in the database to confirmed, attempt to send the message, and change the status to sent on success

  • Confirm consumption: the consumer has received and processed the message and changed the database message status to consumed

  • Scheduled task: periodically scan the database for messages whose status is confirmed, then ask the corresponding transaction initiator whether its transaction succeeded. If the business succeeded: try to send the message, changing the status to sent on success; if the business failed: change the message status to cancelled
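The message service's interfaces amount to a small state machine over each message's status. This sketch encodes those transitions; the state names and method names are illustrative, mirroring the list above rather than any real product's API.

```python
class MessageService:
    """Tracks each message's status through the interfaces listed above."""
    TRANSITIONS = {
        "ready_to_send": {"cancelled", "confirmed"},
        "confirmed":     {"sent"},
        "sent":          {"consumed"},
    }

    def __init__(self):
        self.store = {}                          # msg_id -> status (the database)

    def _move(self, msg_id, new_status):
        old = self.store[msg_id]
        if new_status not in self.TRANSITIONS.get(old, set()):
            raise ValueError(f"illegal transition: {old} -> {new_status}")
        self.store[msg_id] = new_status

    def prepare_send(self, msg_id):
        self.store[msg_id] = "ready_to_send"     # persist and mark ready to send

    def cancel_send(self, msg_id):
        self._move(msg_id, "cancelled")

    def confirm_send(self, msg_id):
        self._move(msg_id, "confirmed")          # initiator confirmed the business
        self._move(msg_id, "sent")               # ...so actually send the message

    def confirm_consume(self, msg_id):
        self._move(msg_id, "consumed")
```

Making illegal transitions raise loudly is what lets the scheduled task safely rescan: a message stuck in "confirmed" can only move forward to "sent", never silently back to "ready_to_send".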

Basic steps of transaction participant B:

  • receive messages

  • Start local transaction

  • Execute business

  • Notify the message service that the message has been received and processed

  • Commit transaction

Advantages and disadvantages:

Advantages: it decouples transaction services from message related services.

Disadvantages: complex implementation.

Message confirmation of RabbitMQ:

RabbitMQ takes a different approach to ensuring messages are not lost: instead of a traditional local message table, it relies on its message confirmation mechanisms.

Producer confirmation mechanism: ensure that there is no problem when messages arrive at MQ from the producer:

  • When message producers send messages to RabbitMQ, they can set up an asynchronous listener to listen for ACK from MQ

  • After MQ receives the message, it returns a receipt to the producer: if the message reaches the exchange but routing fails, a failure ack is returned; if routing succeeds but persistence fails, a failure ack is returned; if routing and persistence both succeed, a success ack is returned

  • The producer prepares handlers for the different receipts in advance: on a failure receipt, wait a while and resend; on a success receipt, log it, etc.

Consumer confirmation mechanism: ensure that messages can be correctly consumed by consumers:

  • The consumer needs to specify the manual ACK mode when listening to the queue

  • After RabbitMQ delivers a message to a consumer, it waits for the consumer's ACK and deletes the message only after receiving it. Without an ACK, the message stays on the server; if the consumer disconnects or fails, the message is delivered to another consumer.

  • After the consumer has processed the message and committed its transaction, it manually ACKs. If an exception is thrown during processing, no ACK is sent; the business processing has failed and the message waits for the next delivery.
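The producer-confirm idea (publish, receive an ack/nack receipt, resend on failure) can be shown without a broker. `FlakyBroker` is a hypothetical stand-in for RabbitMQ; real code would use a client library's confirm API rather than this simulation.

```python
class FlakyBroker:
    """Stand-in broker that nacks the first `fail_times` publishes."""
    def __init__(self, fail_times):
        self.fail_times = fail_times
        self.delivered = []

    def publish(self, message):
        if self.fail_times > 0:            # routing/persistence failed -> nack
            self.fail_times -= 1
            return False
        self.delivered.append(message)     # routed and persisted -> ack
        return True

def publish_with_confirm(broker, message, max_retries=3):
    """Producer side: resend on a failure receipt, up to max_retries times."""
    for attempt in range(max_retries + 1):
        if broker.publish(message):        # success receipt: log and return
            return True
    return False                           # give up after exhausting retries
```

Note that the retry loop means the broker may eventually hold duplicate copies in real systems, which is why the consumer side must still be idempotent.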

Through the above two confirmation mechanisms, the message security from the message producer to the consumer can be ensured. Combined with the local transactions at both ends of the producer and consumer, the final consistency of a distributed transaction can be guaranteed.

Advantages and disadvantages of message transaction:

Summarizing the above models, the advantages and disadvantages of message transaction are as follows:

Advantages: the business is relatively simple, with no three-phase business code to write; it is a combination of multiple local transactions, so resource lock periods are short and performance is good.

Disadvantages: code intrusion; reliability depends on MQ; the message initiator can roll back, but the participants cannot cause the whole transaction to roll back; transaction timeliness is poor, depending on whether MQ delivers promptly and on the participants' execution.

As for the problem that participants cannot trigger a rollback, some have proposed that after a participant fails, MQ could notify the message service again, which then tells the other participants to roll back. Congratulations: you have just reimplemented the 2PC model with MQ and a custom message service, reinventing a very big wheel!

Usage scenarios: a certain level of consistency (eventual consistency) is required, and the message service's stability and reliability can be guaranteed.

2.4 AT

In January 2019, Seata open-sourced the AT mode. AT is a non-intrusive distributed transaction solution; it can be seen as an optimization of TCC or the two-phase commit model that solves TCC's code intrusion and coding complexity.

In AT mode, users only need to care about their own "business SQL". The business SQL serves as phase one, and the Seata framework automatically generates the phase-two commit and rollback operations.

Basic principles

Let's take a look at a flow chart:

Doesn't it look like TCC? Execution is also divided into two phases:

  • Phase 1: execute the local transaction and return the result

  • Phase 2: commit or roll back based on the phase-1 results

But under the hood, AT mode does something completely different, and we do not need to write the second phase at all: Seata implements it for us. In other words, the code we write is the same as ordinary local-transaction code, with no manual distributed transaction handling.

Then, how does the AT mode realize no code intrusion and how can it help us automatically realize the two-stage code?

In phase one, Seata intercepts the "business SQL". It first parses the SQL semantics to find the business data the SQL will update, saves it as a "before image" prior to the update, executes the business SQL to update the data, then saves the updated data as an "after image". Finally it acquires a global row lock and commits the transaction. All of these operations happen inside a single database transaction, ensuring the atomicity of phase one.

The before image and after image here are similar to the database's undo and redo logs, but are actually simulated by Seata and stored in a database table:

If phase two is a rollback, Seata must undo the business SQL executed in phase one to restore the business data, using the "before image". But before restoring, it first checks for dirty writes by comparing the current business data in the database with the "after image". If the two are identical, there is no dirty write and the data can be restored; if not, a dirty write has occurred and the case must be handed off for manual processing:

However, the global lock mechanism reduces the probability of dirty writes.

Phase one, phase-two commit and phase-two rollback in AT mode are all generated automatically by the Seata framework; the user only writes the "business SQL" to get distributed transactions. AT mode is a distributed transaction solution with no intrusion into the business.

Some basic concepts in Seata:

  • TC (Transaction Coordinator): maintains the status of global and branch transactions and drives global commit or rollback (coordinates between the TM and RMs).

  • TM (Transaction Manager) - Transaction Manager: defines the scope of global transactions: start global transactions, commit or roll back global transactions.

  • RM (Resource Manager) - Resource Manager: manages the resources of branch transaction processing, talks with TC to register branch transactions and report the status of branch transactions, and drives branch transaction submission or rollback.

Phase I:

  • TM starts the global transaction and registers it with the TC, including the global transaction XID

  • The TM service calls the other microservices

  • The microservices' work is mainly performed by the RM: query the before_image; execute the local transaction; query the after_image; generate the undo_log and write it to the database; register the branch transaction with the TC and report its execution result; acquire the global lock (preventing other global transactions from concurrently modifying the data); release the local lock (so other services' operations on the data are not affected)

  • After all services have executed, the transaction initiator (TM) asks the TC to commit the global transaction

Phase II:

  • The TC tallies the branch transaction results and decides the next action: all branches succeeded: notify the branch transactions to commit; some branch failed: notify the successful branch transactions to roll back their data

  • The branch transactions' RM: on commit: directly clear the before_image and after_image information and release the global lock; on rollback: validate the after_image to check for dirty writes; if there is no dirty write, restore the data to the before_image and clear both images; if there is a dirty write, request manual intervention
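The rollback-side dirty-write check can be sketched in a few lines: compare the current row with the after image; if identical, restore the before image, otherwise escalate to manual handling. Rows are plain dicts here; this illustrates the idea, not Seata's actual undo_log format.

```python
def at_rollback(current_row, before_image, after_image):
    """Phase-two rollback for one row in AT mode (illustrative sketch)."""
    if current_row == after_image:
        # No dirty write: nobody changed the row after phase one,
        # so it is safe to restore the before image.
        return dict(before_image)
    # The row was modified by someone else: dirty write, escalate.
    raise RuntimeError("dirty write detected, manual intervention required")
```

For the earlier balance example (100 deducted to 70): if the row still holds 70, rollback restores 100; if some other transaction already changed it to 50, the automatic rollback refuses to act.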

Advantages and disadvantages

Advantages: compared with 2PC (XA), each branch transaction commits independently instead of holding database locks until the whole transaction ends; phase one commits by default, reducing resource blocking time. If another transaction then queries data belonging to an unfinished global transaction, it may read dirty data (the data may still be rolled back). In Seata's implementation, a proxied select for update statement asks the TC whether the data is currently being operated on: if not, it can be read immediately; if it is, the query blocks as in XA, achieving strong consistency. Compared with TCC, the phase-two operations are generated automatically, with no code intrusion and low development cost.

Disadvantages: compared with TCC, the reverse compensation operations for phase two must be generated dynamically, so execution performance is slightly lower than TCC's.

Usage scenario

It is applicable to most scenarios and is recommended when there are no extreme performance requirements. AT's performance sits between XA and TCC; it is a solution that combines the strengths of both.

Seata in practice

AT is the most commonly used mode in Seata, so we will use it to demonstrate how to integrate Seata into Spring Cloud microservices.

Suppose the business logic is a user purchasing goods, supported by three microservices:

  • Storage service: deduct inventory for a given commodity

  • Order service: create an order according to the purchase request

  • Account service: deduct the balance from the user's account

Flow chart:

When the order service places an order, it calls the storage service and the account service at the same time; this creates a distributed transaction spanning services and data sources.

1 data preparation

Create the database seata_demo and execute the following sql:

SET NAMES utf8mb4;
-- ----------------------------
-- Table structure for account_tbl
-- ----------------------------
DROP TABLE IF EXISTS `account_tbl`;
CREATE TABLE `account_tbl`  (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `user_id` varchar(255) CHARACTER SET utf8 COLLATE utf8_general_ci NULL DEFAULT NULL,
  `money` int(11) UNSIGNED NULL DEFAULT 0,
  PRIMARY KEY (`id`) USING BTREE
) ENGINE = InnoDB CHARACTER SET = utf8 COLLATE = utf8_general_ci ROW_FORMAT = Compact;
-- ----------------------------
-- Records of account_tbl
-- ----------------------------
INSERT INTO `account_tbl` VALUES (1, 'user202003032042012', 1000);
-- ----------------------------
-- Table structure for order_tbl
-- ----------------------------
DROP TABLE IF EXISTS `order_tbl`;
CREATE TABLE `order_tbl`  (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `user_id` varchar(255) CHARACTER SET utf8 COLLATE utf8_general_ci NULL DEFAULT NULL,
  `commodity_code` varchar(255) CHARACTER SET utf8 COLLATE utf8_general_ci NULL DEFAULT NULL,
  `count` int(11) NULL DEFAULT 0,
  `money` int(11) NULL DEFAULT 0,
  PRIMARY KEY (`id`) USING BTREE
) ENGINE = InnoDB CHARACTER SET = utf8 COLLATE = utf8_general_ci ROW_FORMAT = Compact;
-- ----------------------------
-- Table structure for storage_tbl
-- ----------------------------
DROP TABLE IF EXISTS `storage_tbl`;
CREATE TABLE `storage_tbl`  (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `commodity_code` varchar(255) CHARACTER SET utf8 COLLATE utf8_general_ci NULL DEFAULT NULL,
  `count` int(11) UNSIGNED NULL DEFAULT 0,
  PRIMARY KEY (`id`) USING BTREE,
  UNIQUE INDEX `commodity_code`(`commodity_code`) USING BTREE
) ENGINE = InnoDB CHARACTER SET = utf8 COLLATE = utf8_general_ci ROW_FORMAT = Compact;
-- ----------------------------
-- Records of storage_tbl
-- ----------------------------
INSERT INTO `storage_tbl` VALUES (1, '100202003032041', 10);
-- ----------------------------
-- Table structure for undo_log
-- ----------------------------
DROP TABLE IF EXISTS `undo_log`;
CREATE TABLE `undo_log`  (
  `id` bigint(20) NOT NULL AUTO_INCREMENT,
  `branch_id` bigint(20) NOT NULL,
  `xid` varchar(100) CHARACTER SET utf8 COLLATE utf8_general_ci NOT NULL,
  `context` varchar(128) CHARACTER SET utf8 COLLATE utf8_general_ci NOT NULL,
  `rollback_info` longblob NOT NULL,
  `log_status` int(11) NOT NULL,
  `log_created` datetime(0) NOT NULL,
  `log_modified` datetime(0) NOT NULL,
  `ext` varchar(100) CHARACTER SET utf8 COLLATE utf8_general_ci NULL DEFAULT NULL,
  PRIMARY KEY (`id`) USING BTREE,
  UNIQUE INDEX `ux_undo_log`(`xid`, `branch_id`) USING BTREE
) ENGINE = InnoDB CHARACTER SET = utf8 COLLATE = utf8_general_ci ROW_FORMAT = Compact;

This includes the order_tbl, storage_tbl and account_tbl business tables, plus undo_log, Seata's transaction log table, which stores the before_image and after_image data used for rollback.
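To make the role of the undo log concrete, here is a minimal, self-contained sketch in plain Java (hypothetical names, no Seata dependency; this is an illustration of the idea, not Seata's real implementation): before a branch updates a row, a snapshot of the old values (the before image) is saved; a global commit discards it, a global rollback restores it.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical illustration of before-image based compensation -- not Seata internals.
public class UndoLogDemo {
    private final Map<String, Integer> table = new HashMap<>(); // simulated row: column -> value
    private Map<String, Integer> beforeImage;                   // snapshot taken before the update

    public UndoLogDemo(int money) { table.put("money", money); }

    // Phase one: snapshot the row (before image), then apply the local update and commit it.
    public void update(int newMoney) {
        beforeImage = new HashMap<>(table);
        table.put("money", newMoney);
    }

    // Global commit: the change stands, the undo record can be deleted.
    public void commit() { beforeImage = null; }

    // Global rollback: restore the row from the before image.
    public void rollback() {
        if (beforeImage != null) {
            table.clear();
            table.putAll(beforeImage);
            beforeImage = null;
        }
    }

    public int money() { return table.get("money"); }
}
```

Because the local change is committed immediately in phase one, the compensation on rollback is exactly "write the before image back", which is why undo_log must live in each business database.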

Open the Seata demo microservice project with IDEA. The source code can be obtained at the end of the article. The project structure is as follows:

Structure description:

  • Account service: user service, which provides the function of operating user account balance, port 8083

  • Eureka server: registry, port 8761

  • Order service: order service, which provides the function of creating orders according to purchase requests, port 8082

  • Storage service: storage service, which provides the function of deducting commodity inventory, port 8081

2 Prepare the TC service

We discussed the principle of Seata earlier; it includes three important roles:

  • TC: Transaction Coordinator

  • TM: Transaction Manager

  • RM: Resource Manager

Among them, TC is an independent service responsible for coordinating various branch transactions, while TM and RM are integrated in various transaction participants through jar packages.
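The coordination these roles perform can be sketched in a few lines of plain Java (hypothetical names, no Seata dependency; a simplified model, not Seata's real API): the TC asks every registered branch to do its local work, then resolves the global transaction by committing all branches only if every one of them succeeded.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical, simplified model of TC/RM coordination -- not Seata's real API.
public class MiniCoordinator {
    // Each branch (the RM side) does local work and later confirms or compensates.
    interface Branch {
        boolean execute();   // phase one: do local work, report success or failure
        void commit();       // phase two: confirm
        void rollback();     // phase two: compensate
    }

    private final List<Branch> branches = new ArrayList<>();

    public void register(Branch b) { branches.add(b); }

    // Resolve the global transaction: commit only if every branch succeeded,
    // otherwise roll every branch back.
    public boolean resolve() {
        boolean allOk = true;
        for (Branch b : branches) {
            if (!b.execute()) { allOk = false; break; }
        }
        for (Branch b : branches) {
            if (allOk) b.commit(); else b.rollback();
        }
        return allOk;
    }
}
```

In real Seata the TM opens the global transaction and the TC tracks branch status durably, but the decision rule is the same all-or-nothing resolution shown here.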

Therefore, first of all, we need to build an independent TC service.

First, download the TC server installation package from the official website (it can also be obtained at the end of the article). Here we use version 1.1.0:

Unzip. The directory structure is as follows:

The core configuration of Seata is mainly composed of two parts:

  • Registry configuration: in the conf directory, it is usually the registry.conf file

  • Configuration center: the TC's own configuration can be loaded in two ways: from a unified distributed configuration center (such as Nacos), or from a local file

Open registry.conf to configure:

registry {
  # Specify the registry type; eureka is used here
  type = "eureka"
  # Configuration for each registry type; the eureka and zk entries are kept here
  eureka {
    serviceUrl = ""
    application = "seata_tc_server"
    weight = "1"
  }
  zk {
    cluster = "default"
    serverAddr = ""
    session.timeout = 6000
    connect.timeout = 2000
  }
}

config {
  # The configuration mode supports file, nacos, apollo, zk, consul and etcd3
  type = "file"
  nacos {
    serverAddr = "localhost"
    namespace = ""
    group = "SEATA_GROUP"
  }
  zk {
    serverAddr = ""
    session.timeout = 6000
    connect.timeout = 2000
  }
  file {
    name = "file.conf"
  }
}

This file is mainly configured with two contents:

  • The type and address of the registration center. In this example, we choose eureka as the registration center

  • The type and address of the configuration center. In this example, we select the local file for configuration and configure it in the file.conf file in the current directory

Look at the file.conf file again:

## transaction log store, only used in seata-server
store {
  ## store mode: file, db
  mode = "file"
  ## file store properties
  file {
    ## store location dir
    dir = "sessionStore"
    # branch session size; if exceeded, first try to compress lockkey, if still exceeded throw an exception
    maxBranchSessionSize = 16384
    # global session size; if exceeded, throw an exception
    maxGlobalSessionSize = 512
    # file buffer size; if exceeded, allocate a new buffer
    fileWriteBufferCacheSize = 16384
    # batch read size on recovery
    sessionReloadReadSize = 100
    # async, sync
    flushDiskMode = async
  }
  ## database store properties
  db {
    ## the implementation of javax.sql.DataSource, such as DruidDataSource(druid)/BasicDataSource(dbcp) etc.
    datasource = "dbcp"
    ## mysql/oracle/h2/oceanbase etc.
    dbType = "mysql"
    driverClassName = "com.mysql.jdbc.Driver"
    url = "jdbc:mysql://"
    user = "root"
    password = "123"
    minConn = 1
    maxConn = 10
    globalTable = "global_table"
    branchTable = "branch_table"
    lockTable = "lock_table"
    queryLimit = 100
  }
}

Key configurations:

store: server data storage configuration of TC

Mode: data storage mode, which supports two types: file and db

file: data is stored in a local file; performance is good, but horizontal scaling is not supported. db: data is stored in a specified database, which requires the database connection information to be configured. With file as the storage medium, the server runs directly without further configuration. With db as the storage medium, three tables must also be created in the database:

-- the table to store GlobalSession data
CREATE TABLE IF NOT EXISTS `global_table`
(
    `xid`                       VARCHAR(128) NOT NULL,
    `transaction_id`            BIGINT,
    `status`                    TINYINT      NOT NULL,
    `application_id`            VARCHAR(32),
    `transaction_service_group` VARCHAR(32),
    `transaction_name`          VARCHAR(128),
    `timeout`                   INT,
    `begin_time`                BIGINT,
    `application_data`          VARCHAR(2000),
    `gmt_create`                DATETIME,
    `gmt_modified`              DATETIME,
    PRIMARY KEY (`xid`),
    KEY `idx_gmt_modified_status` (`gmt_modified`, `status`),
    KEY `idx_transaction_id` (`transaction_id`)
);

-- the table to store BranchSession data
CREATE TABLE IF NOT EXISTS `branch_table`
(
    `branch_id`         BIGINT       NOT NULL,
    `xid`               VARCHAR(128) NOT NULL,
    `transaction_id`    BIGINT,
    `resource_group_id` VARCHAR(32),
    `resource_id`       VARCHAR(256),
    `branch_type`       VARCHAR(8),
    `status`            TINYINT,
    `client_id`         VARCHAR(64),
    `application_data`  VARCHAR(2000),
    `gmt_create`        DATETIME,
    `gmt_modified`      DATETIME,
    PRIMARY KEY (`branch_id`),
    KEY `idx_xid` (`xid`)
);

-- the table to store lock data
CREATE TABLE IF NOT EXISTS `lock_table`
(
    `row_key`        VARCHAR(128) NOT NULL,
    `xid`            VARCHAR(96),
    `transaction_id` BIGINT,
    `branch_id`      BIGINT       NOT NULL,
    `resource_id`    VARCHAR(256),
    `table_name`     VARCHAR(32),
    `pk`             VARCHAR(36),
    `gmt_create`     DATETIME,
    `gmt_modified`   DATETIME,
    PRIMARY KEY (`row_key`),
    KEY `idx_branch_id` (`branch_id`)
);

Next, start the Eureka service in IDEA, then enter the bin directory and start the TC: on Linux execute seata-server.sh, on Windows execute seata-server.bat (the JDK must be installed in the environment). As shown below, the TC starts successfully:

3. Transform micro services

The next step is the transformation of micro services. No matter which micro service, as long as it is a participant in the transaction, the steps are basically the same.

First, transform the order service:

We have managed dependencies in the parent project Seata Demo:


Therefore, we can introduce dependent coordinates into the pom file of the project order service:


Add a line of configuration in application.yml:

        tx-service-group: test_tx_group # Defines the name of the transaction group

Here is the name of the defined transaction group, which will be used next.
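For context, this property sits under a Seata configuration prefix in application.yml. A sketch of the surrounding fragment, assuming the spring-cloud-alibaba-seata integration (the exact prefix depends on which Seata starter and version the project uses):

```yaml
spring:
  cloud:
    alibaba:
      seata:
        tx-service-group: test_tx_group # Defines the name of the transaction group
```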

Then put two configuration files in the resources directory: file.conf and registry.conf. The registry.conf file is the same as the TC server's, so it is not repeated here.

Let's look at file.conf:

transport {
  # tcp udt unix-domain-socket
  type = "TCP"
  server = "NIO"
  #enable heartbeat
  heartbeat = true
  # the client batch send request enable
  enableClientBatchSendRequest = true
  #thread factory for netty
  threadFactory {
    bossThreadPrefix = "NettyBoss"
    workerThreadPrefix = "NettyServerNIOWorker"
    serverExecutorThread-prefix = "NettyServerBizHandler"
    shareBossWorker = false
    clientSelectorThreadPrefix = "NettyClientSelector"
    clientSelectorThreadSize = 1
    clientWorkerThreadPrefix = "NettyClientWorkerThread"
    # netty boss thread size, will not be used for UDT
    bossThreadSize = 1
    # auto, default pin or 8
    workerThreadSize = "default"
  }
  shutdown {
    # when destroying the server, wait this many seconds
    wait = 3
  }
  serialization = "seata"
  compressor = "none"
}

service {
  vgroupMapping.test_tx_group = "seata_tc_server"
  #only support when registry.type=file, please don't set multiple addresses
  seata_tc_server.grouplist = ""
  #degrade, currently not supported
  enableDegrade = false
  #disable seata
  disableGlobalTransaction = false
}

client {
  rm {
    asyncCommitBufferLimit = 10000
    lock {
      retryInterval = 10
      retryTimes = 30
      retryPolicyBranchRollbackOnConflict = true
    }
    reportRetryCount = 5
    tableMetaCheckEnable = false
    reportSuccessEnable = false
  }
  tm {
    commitRetryCount = 5
    rollbackRetryCount = 5
  }
  undo {
    dataValidation = true
    logSerialization = "jackson"
    logTable = "undo_log"
  }
  log {
    exceptionRate = 100
  }
}

Configuration interpretation:

  • transport: configuration for communicating with the TC

      • heartbeat: heartbeat detection switch for client/server communication

      • enableClientBatchSendRequest: whether client transaction message requests are sent in batches

  • service: TC address configuration, used to locate the TC

      • test_tx_group: the transaction group name, which must match the configuration in application.yml

      • seata_tc_server: the id of the TC server in the registry; the TC address is obtained through the registry

      • seata_tc_server.grouplist: only used when the registry type is file

      • enableDegrade: service degradation switch, off by default; if enabled, the global transaction is abandoned after repeated business retries fail

      • disableGlobalTransaction: global transaction switch, default false; false means enabled, true means disabled

  • client: client-side configuration

      • rm: resource manager configuration

          • asyncCommitBufferLimit: phase-two commit is asynchronous by default; this sets the size of the asynchronous queue

          • lock: global lock configuration

              • retryInterval: retry interval for checking or acquiring the global lock, default 10, in milliseconds

              • retryTimes: number of retries for checking or acquiring the global lock, default 30

              • retryPolicyBranchRollbackOnConflict: lock policy when a branch transaction conflicts with another global transaction being rolled back; the default true releases the local lock first so the rollback can succeed

          • reportRetryCount: number of retries after failing to report phase-one results to the TC, default 5

      • tm: transaction manager configuration

          • commitRetryCount: number of retries for reporting the phase-one global commit result to the TC, default 1

          • rollbackRetryCount: number of retries for reporting the phase-one global rollback result to the TC, default 1

      • undo: undo_log configuration

          • dataValidation: whether to enable phase-two rollback image validation, default true

          • logSerialization: undo log serialization method, jackson by default

          • logTable: custom undo table name, default undo_log

      • log: log configuration

          • exceptionRate: logging frequency when rollback exceptions occur, default 100, i.e. a 1% probability; rollback failures are mostly dirty data, so there is no need to fill disk space with stack traces

In phase two, Seata determines the rollback strategy by intercepting SQL statements and analyzing their semantics, so the DataSource must be proxied. In the demo.order.config package of the project, we add a configuration class:

package demo.order.config;

import com.baomidou.mybatisplus.extension.spring.MybatisSqlSessionFactoryBean;
import io.seata.rm.datasource.DataSourceProxy;
import org.apache.ibatis.session.SqlSessionFactory;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import javax.sql.DataSource;

@Configuration
public class DataSourceProxyConfig {
    @Bean
    public SqlSessionFactory sqlSessionFactoryBean(DataSource dataSource) throws Exception {
        // Mybatis Plus is used in the order service, so its dedicated MybatisSqlSessionFactoryBean is needed
        MybatisSqlSessionFactoryBean sqlSessionFactoryBean = new MybatisSqlSessionFactoryBean();
        // Wrap the original data source in Seata's DataSourceProxy
        sqlSessionFactoryBean.setDataSource(new DataSourceProxy(dataSource));
        // Build the SqlSessionFactory
        return sqlSessionFactoryBean.getObject();
    }
}

Then, in the transaction initiator order-service, add the @GlobalTransactional annotation to the create() method in OrderServiceImpl to start the global transaction:

// Add the @GlobalTransactional annotation to start the global transaction
@GlobalTransactional
public Long create(Order order) {
    // Create the order
    orderMapper.insert(order); // mapper name assumed; persists the order locally
    try {
        // Deduct inventory
        storageClient.deduct(order.getCommodityCode(), order.getCount());
        // Deduct the account balance
        accountClient.debit(order.getUserId(), order.getMoney());
    } catch (FeignException e) {
        log.error("Order failed, reason: {}", e.contentUTF8());
        throw new RuntimeException(e.contentUTF8());
    }
    return order.getId();
}

Just restart.

Then transform the storage and account services

Similar to OrderService, the following steps are required:

  • Import dependency: consistent with order service, omitted

  • Add configuration file: consistent with order service, omitted

  • Proxy DataSource, consistent with order service, omitted

On these branch services the transaction annotation is the ordinary @Transactional rather than @GlobalTransactional; only the transaction initiator needs @GlobalTransactional.

After configuration, restart all microservices. It is best to restart the TC as well, since its support on Windows is not great.

4 test

Current data: user balance 1000, inventory 10:

Normal condition test

We use the swagger request order interface of the order service and click send as follows:

Look at the TC console and you will be prompted that the distributed transaction has ended successfully:

Take a look at the database. The data has been successfully updated. The user balance is 900. Inventory 9:

Abnormal condition test

Note that the balance column in the user table is of type int unsigned, so if the balance is insufficient and the deduction would make it negative, MySQL reports an error and the transaction rollback is triggered. We use this to test rollback when a distributed transaction throws an exception:
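The failure condition can be reproduced in miniature: MySQL refuses an update that would push an unsigned column below zero, which is what a guard like this plain-Java sketch mimics (hypothetical names; in the demo the real check happens inside MySQL, not in application code):

```java
// Hypothetical stand-in for the account deduction: an "int unsigned" column
// cannot go negative, so an over-draw raises an error -- the same error that
// Seata turns into a global rollback in the demo.
public class AccountGuard {
    private int balance;

    public AccountGuard(int balance) { this.balance = balance; }

    public void debit(int amount) {
        if (amount > balance) {
            // Corresponds to the SQL error on the unsigned column
            throw new IllegalStateException("insufficient balance");
        }
        balance -= amount;
    }

    public int balance() { return balance; }
}
```

Any branch throwing such an exception propagates it to the initiator, whose @GlobalTransactional method then causes the TC to roll back every branch.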

We use the swagger of the order service to request the order interface, adjust the value of the money field to 1200, and click send. As follows, the interface reports an error and prompts that the balance is insufficient:

Take a look at the TC console to show that the transaction has been rolled back:

Take a look at the database, user balance 900, inventory 9, transaction rollback successful:

To sum up, the distributed transaction test is successful! We used Seata to implement a distributed transaction across services; the cross-database case works the same way, and you can try it yourself. Note that when testing, the Eureka service must be started first, then the TC and the other services, to avoid problems.


If you have persisted to this point, I believe you have gained something and now have a deeper understanding of distributed transactions. If you have not practiced yet, do try it yourself; after all, practice is the only test of truth. In a real project, judge from the business requirements whether distributed transactions are really necessary, because no matter how good the framework, performance drops as soon as distributed transactions are introduced. Also weigh strong consistency against eventual consistency: most business can live with eventual consistency, but money-related business often requires strong consistency. MQ-based transactions and Seata are currently the two mainstream ways to implement distributed transactions. Seata is no panacea: for systems with strict performance requirements it can fall short, and some business-specific adaptation and optimization is then needed.

Well, this article ends here. I'll see you next time!

Follow the official account "spiral programming geeks" to get the demo source code and other materials on distributed transactions.

Tags: Java Database Spring Boot Interview Distribution

Posted on Mon, 01 Nov 2021 19:35:33 -0400 by l3asturd