2 - Seata deployment and integration

1. Deploying the TC server of Seata
   1. Download
   2. Decompression
   3. Modify configuration
   4. Add configuration in nacos
   5. Create database tables
   6. Start TC service
2. Integrating Seata into microservices
   1. Introduce dependency
   2. Modify the configuration file
3. High availability of TC services and remote disaster recovery
   1. TC cluster simulating remote disaster recovery
   2. Configure the transaction group mapping to nacos
   3. The microservice reads the nacos configuration
1. Deploying the TC server of Seata

1. Download

First, we need to download the Seata server package from http://seata.io/zh-cn/blog/download.html

Of course, the package is also provided in the pre-class materials:

![](assets/image-20210622202357640.png)

2. Decompression

Unzip the package into a directory whose path contains no Chinese characters. Its directory structure is as follows:

![](assets/image-20210622202515014.png)

3. Modify configuration

Modify the registry.conf file in the conf directory:

![](assets/image-20210622202622874.png)

The contents are as follows:

```
registry {
  # The registry type of the tc service; nacos is selected here, but eureka, zookeeper, etc. are also possible
  type = "nacos"

  nacos {
    # The service name the seata tc service registers with in nacos; it can be customized
    application = "seata-tc-server"
    serverAddr = "127.0.0.1:8848"
    group = "DEFAULT_GROUP"
    namespace = ""
    cluster = "SH"
    username = "nacos"
    password = "nacos"
  }
}

config {
  # The tc server reads its configuration file from the nacos configuration center,
  # so that a tc cluster can share the same configuration
  type = "nacos"

  # nacos address and other information
  nacos {
    serverAddr = "127.0.0.1:8848"
    namespace = ""
    group = "SEATA_GROUP"
    username = "nacos"
    password = "nacos"
    dataId = "seataServer.properties"
  }
}
```

4. Add configuration in nacos

Note that to let a cluster of TC services share configuration, we chose nacos as the unified configuration center. Therefore, the server configuration file seataServer.properties needs to be added to nacos.

The format is as follows:

![](assets/image-20210622203609227.png)

The configuration contents are as follows:

```properties
# Data storage mode; db means database
store.mode=db
store.db.datasource=druid
store.db.dbType=mysql
store.db.driverClassName=com.mysql.jdbc.Driver
store.db.url=jdbc:mysql://127.0.0.1:3306/seata?useUnicode=true&rewriteBatchedStatements=true
store.db.user=root
store.db.password=123
store.db.minConn=5
store.db.maxConn=30
store.db.globalTable=global_table
store.db.branchTable=branch_table
store.db.queryLimit=100
store.db.lockTable=lock_table
store.db.maxWait=5000
# Transaction and log configuration
server.recovery.committingRetryPeriod=1000
server.recovery.asynCommittingRetryPeriod=1000
server.recovery.rollbackingRetryPeriod=1000
server.recovery.timeoutRetryPeriod=1000
server.maxCommitRetryTimeout=-1
server.maxRollbackRetryTimeout=-1
server.rollbackRetryTimeoutUnlockEnable=false
server.undo.logSaveDays=7
server.undo.logDeletePeriod=86400000
# Transmission mode between client and server
transport.serialization=seata
transport.compressor=none
# Turn off the metrics function to improve performance
metrics.enabled=false
metrics.registryType=compact
metrics.exporterList=prometheus
metrics.exporterPrometheusPort=9898
```

Change the database address, user name, and password to match your own database.
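Also note that the configuration above uses the MySQL 5.x driver class. If your seata database runs on MySQL 8, the driver class name (and usually a timezone parameter on the url) typically needs adjusting; the values below are an assumption about such an environment, not part of the original materials:

```properties
# Assumed adjustment for MySQL 8 (Connector/J 8.x); keep the 5.x values above otherwise
store.db.driverClassName=com.mysql.cj.jdbc.Driver
store.db.url=jdbc:mysql://127.0.0.1:3306/seata?useUnicode=true&rewriteBatchedStatements=true&serverTimezone=UTC
```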

5. Create database tables

Special note: when managing distributed transactions, the TC service records transaction-related data in the database, so you need to create these tables in advance.

Create a new database named seata and run the sql file provided in the pre-class materials:

![](assets/image-20210622204145159.png)
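If the seata database does not exist yet, a minimal statement to create it looks like the following (the utf8 character set is an assumption, chosen to match the table definitions below):

```sql
-- Create the database that the TC service stores its transaction data in
CREATE DATABASE IF NOT EXISTS `seata` DEFAULT CHARACTER SET utf8 COLLATE utf8_general_ci;
```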

These tables mainly record global transactions, branch transactions and global lock information:

```sql
SET NAMES utf8mb4;
SET FOREIGN_KEY_CHECKS = 0;

-- ----------------------------
-- Branch transaction table
-- ----------------------------
DROP TABLE IF EXISTS `branch_table`;
CREATE TABLE `branch_table`  (
  `branch_id` bigint(20) NOT NULL,
  `xid` varchar(128) CHARACTER SET utf8 COLLATE utf8_general_ci NOT NULL,
  `transaction_id` bigint(20) NULL DEFAULT NULL,
  `resource_group_id` varchar(32) CHARACTER SET utf8 COLLATE utf8_general_ci NULL DEFAULT NULL,
  `resource_id` varchar(256) CHARACTER SET utf8 COLLATE utf8_general_ci NULL DEFAULT NULL,
  `branch_type` varchar(8) CHARACTER SET utf8 COLLATE utf8_general_ci NULL DEFAULT NULL,
  `status` tinyint(4) NULL DEFAULT NULL,
  `client_id` varchar(64) CHARACTER SET utf8 COLLATE utf8_general_ci NULL DEFAULT NULL,
  `application_data` varchar(2000) CHARACTER SET utf8 COLLATE utf8_general_ci NULL DEFAULT NULL,
  `gmt_create` datetime(6) NULL DEFAULT NULL,
  `gmt_modified` datetime(6) NULL DEFAULT NULL,
  PRIMARY KEY (`branch_id`) USING BTREE,
  INDEX `idx_xid`(`xid`) USING BTREE
) ENGINE = InnoDB CHARACTER SET = utf8 COLLATE = utf8_general_ci ROW_FORMAT = Compact;

-- ----------------------------
-- Global transaction table
-- ----------------------------
DROP TABLE IF EXISTS `global_table`;
CREATE TABLE `global_table`  (
  `xid` varchar(128) CHARACTER SET utf8 COLLATE utf8_general_ci NOT NULL,
  `transaction_id` bigint(20) NULL DEFAULT NULL,
  `status` tinyint(4) NOT NULL,
  `application_id` varchar(32) CHARACTER SET utf8 COLLATE utf8_general_ci NULL DEFAULT NULL,
  `transaction_service_group` varchar(32) CHARACTER SET utf8 COLLATE utf8_general_ci NULL DEFAULT NULL,
  `transaction_name` varchar(128) CHARACTER SET utf8 COLLATE utf8_general_ci NULL DEFAULT NULL,
  `timeout` int(11) NULL DEFAULT NULL,
  `begin_time` bigint(20) NULL DEFAULT NULL,
  `application_data` varchar(2000) CHARACTER SET utf8 COLLATE utf8_general_ci NULL DEFAULT NULL,
  `gmt_create` datetime NULL DEFAULT NULL,
  `gmt_modified` datetime NULL DEFAULT NULL,
  PRIMARY KEY (`xid`) USING BTREE,
  INDEX `idx_gmt_modified_status`(`gmt_modified`, `status`) USING BTREE,
  INDEX `idx_transaction_id`(`transaction_id`) USING BTREE
) ENGINE = InnoDB CHARACTER SET = utf8 COLLATE = utf8_general_ci ROW_FORMAT = Compact;

SET FOREIGN_KEY_CHECKS = 1;
```
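Note that the snippet above only shows global_table and branch_table, while seataServer.properties also references a lock table (store.db.lockTable=lock_table). The sql file in the pre-class materials should create it as well; as a reference, here is a sketch that follows the standard Seata 1.4.x MySQL script, so verify it against the provided file:

```sql
-- ----------------------------
-- Global lock table (referenced by store.db.lockTable=lock_table)
-- ----------------------------
DROP TABLE IF EXISTS `lock_table`;
CREATE TABLE `lock_table`  (
  `row_key` varchar(128) CHARACTER SET utf8 COLLATE utf8_general_ci NOT NULL,
  `xid` varchar(128) CHARACTER SET utf8 COLLATE utf8_general_ci NULL DEFAULT NULL,
  `transaction_id` bigint(20) NULL DEFAULT NULL,
  `branch_id` bigint(20) NOT NULL,
  `resource_id` varchar(256) CHARACTER SET utf8 COLLATE utf8_general_ci NULL DEFAULT NULL,
  `table_name` varchar(32) CHARACTER SET utf8 COLLATE utf8_general_ci NULL DEFAULT NULL,
  `pk` varchar(36) CHARACTER SET utf8 COLLATE utf8_general_ci NULL DEFAULT NULL,
  `gmt_create` datetime NULL DEFAULT NULL,
  `gmt_modified` datetime NULL DEFAULT NULL,
  PRIMARY KEY (`row_key`) USING BTREE,
  INDEX `idx_branch_id`(`branch_id`) USING BTREE
) ENGINE = InnoDB CHARACTER SET = utf8 COLLATE = utf8_general_ci ROW_FORMAT = Compact;
```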

6. Start TC service

Enter the bin directory and run seata-server.bat:

![](assets/image-20210622205427318.png)

After successful startup, Seata server should have registered with the nacos registry.

Open a browser and access the nacos console at http://localhost:8848, then open the service list page; you can see the Seata TC service there:

![](assets/image-20210622205901450.png)

2. Integrating Seata into microservices

1. Introduce dependency

First, we need to introduce the seata dependency into each microservice:

```xml
<dependency>
    <groupId>com.alibaba.cloud</groupId>
    <artifactId>spring-cloud-starter-alibaba-seata</artifactId>
    <exclusions>
        <!--The bundled seata-spring-boot-starter is a lower version (1.3.0), so exclude it-->
        <exclusion>
            <artifactId>seata-spring-boot-starter</artifactId>
            <groupId>io.seata</groupId>
        </exclusion>
    </exclusions>
</dependency>
<!--Use version 1.4.2 of the seata starter instead-->
<dependency>
    <groupId>io.seata</groupId>
    <artifactId>seata-spring-boot-starter</artifactId>
    <version>1.4.2</version>
</dependency>
```

2. Modify the configuration file

You need to modify the application.yml file and add some configurations:

```yaml
seata:
  registry: # TC service registry configuration; the microservice obtains the TC service address through this registry
    # Refer to the configuration in the registry.conf of the tc service
    type: nacos
    nacos: # tc
      server-addr: 127.0.0.1:8848
      namespace: ""
      group: DEFAULT_GROUP
      application: seata-tc-server # Service name of the tc service in nacos
      cluster: SH
  tx-service-group: seata-demo # Transaction group, which is used to obtain the cluster name of the tc service
  service:
    vgroup-mapping: # Mapping relationship between the transaction group and the TC service cluster
      seata-demo: SH
```
3. High availability of TC services and remote disaster recovery

1. TC cluster simulating remote disaster recovery

We plan to start two seata TC service nodes:

| Node name | IP address | Port | Cluster name |
| --------- | ---------- | ---- | ------------ |
| seata     | 127.0.0.1  | 8091 | SH           |
| seata2    | 127.0.0.1  | 8092 | HZ           |

We have already started one seata service, on port 8091 with cluster name SH.

Now make a copy of the seata directory and name it seata2.

Modify seata2/conf/registry.conf as follows:

```
registry {
  # The registry type of the tc service; nacos is selected here, but eureka, zookeeper, etc. are also possible
  type = "nacos"

  nacos {
    # The service name the seata tc service registers with in nacos; it can be customized
    application = "seata-tc-server"
    serverAddr = "127.0.0.1:8848"
    group = "DEFAULT_GROUP"
    namespace = ""
    cluster = "HZ"
    username = "nacos"
    password = "nacos"
  }
}

config {
  # The tc server reads its configuration file from the nacos configuration center,
  # so that a tc cluster can share the same configuration
  type = "nacos"

  # nacos address and other information
  nacos {
    serverAddr = "127.0.0.1:8848"
    namespace = ""
    group = "SEATA_GROUP"
    username = "nacos"
    password = "nacos"
    dataId = "seataServer.properties"
  }
}
```

Enter the seata2/bin directory and run the command:

```
seata-server.bat -p 8092
```

Open the nacos console to view the list of services:

![](assets/image-20210624151150840.png)

Click into the details to view:

![](assets/image-20210624151221747.png)

2. Configure the transaction group mapping to nacos

Next, we need to publish the mapping between the tx-service-group and the TC cluster to the nacos configuration center.

Create a new configuration in nacos, with data id client.properties and group SEATA_GROUP (matching the values the microservices will read below):

![](assets/image-20210624151507072.png)

The configuration is as follows:

```properties
# Transaction group mapping relationship
service.vgroupMapping.seata-demo=SH

service.enableDegrade=false
service.disableGlobalTransaction=false
# Communication configuration with the TC service
transport.type=TCP
transport.server=NIO
transport.heartbeat=true
transport.enableClientBatchSendRequest=false
transport.threadFactory.bossThreadPrefix=NettyBoss
transport.threadFactory.workerThreadPrefix=NettyServerNIOWorker
transport.threadFactory.serverExecutorThreadPrefix=NettyServerBizHandler
transport.threadFactory.shareBossWorker=false
transport.threadFactory.clientSelectorThreadPrefix=NettyClientSelector
transport.threadFactory.clientSelectorThreadSize=1
transport.threadFactory.clientWorkerThreadPrefix=NettyClientWorkerThread
transport.threadFactory.bossThreadSize=1
transport.threadFactory.workerThreadSize=default
transport.shutdown.wait=3
# RM configuration
client.rm.asyncCommitBufferLimit=10000
client.rm.lock.retryInterval=10
client.rm.lock.retryTimes=30
client.rm.lock.retryPolicyBranchRollbackOnConflict=true
client.rm.reportRetryCount=5
client.rm.tableMetaCheckEnable=false
client.rm.tableMetaCheckerInterval=60000
client.rm.sqlParserType=druid
client.rm.reportSuccessEnable=false
client.rm.sagaBranchRegisterEnable=false
# TM configuration
client.tm.commitRetryCount=5
client.tm.rollbackRetryCount=5
client.tm.defaultGlobalTransactionTimeout=60000
client.tm.degradeCheck=false
client.tm.degradeCheckAllowTimes=10
client.tm.degradeCheckPeriod=2000
# undo log configuration
client.undo.dataValidation=true
client.undo.logSerialization=jackson
client.undo.onlyCareUpdateColumns=true
client.undo.logTable=undo_log
client.undo.compress.enable=true
client.undo.compress.type=zip
client.undo.compress.threshold=64k
client.log.exceptionRate=100
```

3. The microservice reads the nacos configuration

Next, you need to modify the application.yml file of each microservice to let the microservice read the client.properties file in nacos:

```yaml
seata:
  config:
    type: nacos
    nacos:
      server-addr: 127.0.0.1:8848
      username: nacos
      password: nacos
      group: SEATA_GROUP
      data-id: client.properties
```

Restart the microservices. Whether a microservice connects to the SH cluster or the HZ cluster of TC is now determined by the client.properties configuration in nacos.
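For example, failing the microservices over from the SH cluster to the HZ disaster recovery cluster only requires changing the mapping line in the client.properties configuration published to nacos (a sketch; the rest of the file stays as published above):

```properties
# Point the seata-demo transaction group at the HZ cluster instead of SH
service.vgroupMapping.seata-demo=HZ
```

Because the mapping lives in the nacos configuration center rather than in each service's application.yml, switching clusters becomes a configuration change rather than a code change.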
