Building on the MySQL master-slave setup deployed with Docker in the previous article, this article explains how to integrate Sharding-JDBC into a Spring Boot project to implement database/table sharding and read-write splitting.
1, Introduction to Sharding-JDBC
1. Quoting the introduction from the official website:
Positioned as a lightweight Java framework, it provides extra services at the JDBC layer. The client connects directly to the database and the service is delivered as a plain jar package, with no extra deployment or dependencies. It can be understood as an enhanced JDBC driver, fully compatible with JDBC and all kinds of ORM frameworks.
Applicable to any JDBC-based ORM framework, such as JPA, Hibernate, MyBatis or Spring JDBC Template, or to direct use of JDBC.
Supports any third-party database connection pool, such as DBCP, C3P0, BoneCP, Druid, HikariCP, etc.
Supports any database that implements the JDBC specification; at present MySQL, Oracle, SQL Server, PostgreSQL and any database that complies with the SQL92 standard.
2. My own understanding:
As an enhanced JDBC driver, it is used by the client just like a normal JDBC driver: you add the Sharding-JDBC dependency, connect to the database, and configure the sharding and read-write splitting rules. For every SQL statement the client issues, Sharding-JDBC then performs the database/table routing and read-write splitting automatically according to that configuration.
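To make that concrete, here is a minimal sketch (my own illustration, not code from this project) of what "using it like a normal JDBC driver" looks like: the application simply injects the DataSource that Sharding-JDBC assembles from the configuration shown later and writes ordinary SQL against the logical table; the routing happens behind the scenes.

import java.sql.Connection;
import java.sql.PreparedStatement;

import javax.annotation.Resource;
import javax.sql.DataSource;

import org.springframework.stereotype.Component;

@Component
public class PlainJdbcExample {

    // Spring injects the sharding DataSource built by Sharding-JDBC from application.yml
    @Resource
    private DataSource dataSource;

    public void insert(long id, String name) throws Exception {
        // SQL is written against the logical table "user";
        // Sharding-JDBC rewrites it to app1/app2 and user1/user2 at execution time
        try (Connection conn = dataSource.getConnection();
             PreparedStatement ps = conn.prepareStatement("INSERT INTO user (id, name) VALUES (?, ?)")) {
            ps.setLong(1, id);
            ps.setString(2, name);
            ps.executeUpdate();
        }
    }
}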
2, What we want to achieve
1. The following shows the effect of database/table sharding and read-write splitting achieved through Sharding-JDBC:
Database/table sharding: we reuse the master-slave databases built in the previous article. The app1 database on 3307 is the master and the app1 database on 3308 is the corresponding slave. In addition, we create an app2 database on 3307 with the same structure as app1, and a user2 table with the same structure as user1; master-slave replication automatically creates and synchronizes these to 3308. We then configure the corresponding sharding strategy with Sharding-JDBC in the project, so that inserted data lands in the right database and table according to the sharding rules on the configured column. Concretely, the database sharding rule decides whether a row goes into the app1 or app2 database on 3307, and the table sharding rule then decides whether it goes into the user1 or user2 table.
Read-write splitting: according to the master-slave rules configured in Sharding-JDBC, DQL (query) statements from our project are routed to the 3308 (slave) data sources, while DML (write) statements are routed to the 3307 (master) data sources.
3, Integrating Sharding-JDBC into a Spring Boot project for database/table sharding and read-write splitting
1. Create a Maven project and add the dependencies first. The pom.xml file is as follows.
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <parent>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-parent</artifactId>
        <version>2.3.3.RELEASE</version>
    </parent>
    <groupId>com.cgg</groupId>
    <artifactId>sharding-jdbc-test</artifactId>
    <version>1.0-SNAPSHOT</version>

    <properties>
        <java.version>1.8</java.version>
    </properties>

    <dependencies>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-web</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-test</artifactId>
        </dependency>
        <dependency>
            <groupId>mysql</groupId>
            <artifactId>mysql-connector-java</artifactId>
            <version>8.0.15</version>
        </dependency>
        <dependency>
            <groupId>com.alibaba</groupId>
            <artifactId>druid</artifactId>
            <version>1.1.21</version>
        </dependency>
        <dependency>
            <groupId>com.baomidou</groupId>
            <artifactId>mybatis-plus-boot-starter</artifactId>
            <version>3.1.1</version>
        </dependency>
        <dependency>
            <groupId>com.baomidou</groupId>
            <artifactId>mybatis-plus-extension</artifactId>
            <version>3.1.1</version>
        </dependency>
        <dependency>
            <groupId>org.apache.shardingsphere</groupId>
            <artifactId>sharding-jdbc-spring-boot-starter</artifactId>
            <version>4.0.0-RC1</version>
        </dependency>
        <dependency>
            <groupId>org.projectlombok</groupId>
            <artifactId>lombok</artifactId>
        </dependency>
    </dependencies>

    <build>
        <plugins>
            <plugin>
                <groupId>org.springframework.boot</groupId>
                <artifactId>spring-boot-maven-plugin</artifactId>
            </plugin>
        </plugins>
    </build>
</project>
Note: Sharding-JDBC 4.0 is used here with Spring Boot 2.x. Quite a few problems came up during integration; the errors and how to resolve them are covered later in this article.
2. The application.yml file is as follows
spring:
  jpa:
    properties:
      hibernate:
        hbm2ddl:
          auto: create
        dialect: org.hibernate.dialect.MySQL5Dialect
        show_sql: true
  shardingsphere:
    props:
      sql:
        show: true
    datasource:
      names: master0,master0slave0,master1,master1slave0
      master0:
        type: com.alibaba.druid.pool.DruidDataSource
        driver-class-name: com.mysql.cj.jdbc.Driver
        url: jdbc:mysql://127.0.0.1:3307/app1?useUnicode=true&characterEncoding=utf8&tinyInt1isBit=false&useSSL=false&serverTimezone=GMT
        username: root
        password: 654321
      master1:
        type: com.alibaba.druid.pool.DruidDataSource
        driver-class-name: com.mysql.cj.jdbc.Driver
        url: jdbc:mysql://127.0.0.1:3307/app2?useUnicode=true&characterEncoding=utf8&tinyInt1isBit=false&useSSL=false&serverTimezone=GMT
        username: root
        password: 654321
      master0slave0:
        type: com.alibaba.druid.pool.DruidDataSource
        driver-class-name: com.mysql.cj.jdbc.Driver
        url: jdbc:mysql://127.0.0.1:3308/app1?useUnicode=true&characterEncoding=UTF-8&allowMultiQueries=true&useSSL=false&serverTimezone=GMT
        username: root
        password: 654321
      master1slave0:
        type: com.alibaba.druid.pool.DruidDataSource
        driver-class-name: com.mysql.cj.jdbc.Driver
        url: jdbc:mysql://127.0.0.1:3308/app2?useUnicode=true&characterEncoding=UTF-8&allowMultiQueries=true&useSSL=false&serverTimezone=GMT
        username: root
        password: 654321
    sharding:
      default-database-strategy:
        inline:
          sharding-column: id
          algorithm-expression: app$->{(id % 2)+1}
      tables:
        user:
          actual-data-nodes: app$->{1..2}.user$->{1..2}
          table-strategy:
            inline:
              sharding-column: id
              algorithm-expression: user$->{((""+id)[2..10].toInteger() % 2)+1}
          key-generator:
            column: id
            type: SNOWFLAKE
      master-slave-rules:
        app1:
          master-data-source-name: master0
          slave-data-source-names: master0slave0
        app2:
          master-data-source-name: master1
          slave-data-source-names: master1slave0
sharding:
  jdbc:
    config:
      masterslave:
        load-balance-algorithm-type: random
3. application.properties file
spring.main.allow-bean-definition-overriding=true
mybatis-plus.mapper-locations=classpath:/mapper/*.xml
mybatis-plus.configuration.log-impl=org.apache.ibatis.logging.stdout.StdOutImpl
4. Implementing database and table sharding
4.1. Let's start with the data sources. Following the earlier MySQL master-slave article, 127.0.0.1:3307 on my machine is the master and 127.0.0.1:3308 is the slave.
Two databases, app1 and app2, are created on 3307, and each database contains two tables, user1 and user2, which gives us the database and table sharding layout.
The following is the SQL script for the app1 database:
SET NAMES utf8mb4;
SET FOREIGN_KEY_CHECKS = 0;

-- ----------------------------
-- Table structure for user1
-- ----------------------------
DROP TABLE IF EXISTS `user1`;
CREATE TABLE `user1` (
  `id` bigint(11) NOT NULL COMMENT 'Primary key id',
  `name` varchar(255) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT NULL,
  PRIMARY KEY (`id`) USING BTREE
) ENGINE = InnoDB CHARACTER SET = utf8mb4 COLLATE = utf8mb4_general_ci ROW_FORMAT = Dynamic;

-- ----------------------------
-- Table structure for user2
-- ----------------------------
DROP TABLE IF EXISTS `user2`;
CREATE TABLE `user2` (
  `id` bigint(11) NOT NULL COMMENT 'Primary key id',
  `name` varchar(255) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT NULL,
  PRIMARY KEY (`id`) USING BTREE
) ENGINE = InnoDB CHARACTER SET = utf8mb4 COLLATE = utf8mb4_general_ci ROW_FORMAT = DYNAMIC;

SET FOREIGN_KEY_CHECKS = 1;
The following is the SQL script for the app2 database:
SET NAMES utf8mb4;
SET FOREIGN_KEY_CHECKS = 0;

-- ----------------------------
-- Table structure for user1
-- ----------------------------
DROP TABLE IF EXISTS `user1`;
CREATE TABLE `user1` (
  `id` bigint(11) NOT NULL COMMENT 'Primary key id',
  `name` varchar(255) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT NULL,
  PRIMARY KEY (`id`) USING BTREE
) ENGINE = InnoDB CHARACTER SET = utf8mb4 COLLATE = utf8mb4_general_ci ROW_FORMAT = DYNAMIC;

-- ----------------------------
-- Table structure for user2
-- ----------------------------
DROP TABLE IF EXISTS `user2`;
CREATE TABLE `user2` (
  `id` bigint(11) NOT NULL COMMENT 'Primary key id',
  `name` varchar(255) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT NULL,
  PRIMARY KEY (`id`) USING BTREE
) ENGINE = InnoDB CHARACTER SET = utf8mb4 COLLATE = utf8mb4_general_ci ROW_FORMAT = Dynamic;

SET FOREIGN_KEY_CHECKS = 1;
4.2. Now let's explain the configured sharding rules, which decide how data is distributed across the app1 and app2 databases and the user1 and user2 tables.
sharding:
  default-database-strategy:
    inline:
      sharding-column: id                        # the sharding column is the primary key id
      algorithm-expression: app$->{(id % 2)+1}   # database sharding algorithm: id modulo 2, plus 1
  tables:
    user:
      actual-data-nodes: app$->{1..2}.user$->{1..2}   # actual data nodes: (app1/app2).(user1/user2)
      table-strategy:
        inline:
          sharding-column: id                    # the table sharding column is the primary key id
          algorithm-expression: user$->{((""+id)[2..10].toInteger() % 2)+1}   # table sharding algorithm: take characters 2-10 of the id, modulo 2, plus 1
      key-generator:
        column: id       # auto-generated primary key column
        type: SNOWFLAKE  # primary keys are generated with the snowflake algorithm
The rules above mean that whenever data is inserted or queried, Sharding-JDBC evaluates the configured algorithm against the value of the sharding column id. For example, the database rule takes id modulo 2 and adds 1; whatever the id is, the result is always 1 or 2, so app$->{(id % 2)+1} always resolves to the app1 or app2 database. The table sharding rule works the same way.
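As a quick worked illustration (my own example, using the id that the unit test further down queries), the two expressions can be reproduced in plain Java:

public class ShardingRouteDemo {
    public static void main(String[] args) {
        long id = 1452619866473324545L;                       // a sample snowflake id
        int dbSuffix = (int) (id % 2) + 1;                    // database rule app$->{(id % 2)+1}
        String digits = String.valueOf(id).substring(2, 11);  // Groovy (""+id)[2..10]: characters at index 2..10
        int tableSuffix = Integer.parseInt(digits) % 2 + 1;   // table rule: those digits modulo 2, plus 1
        System.out.println("id " + id + " routes to app" + dbSuffix + ".user" + tableSuffix);  // app2.user1
    }
}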
Note: only a simple sharding rule is configured here to demonstrate how Sharding-JDBC splits databases and tables; more complex sharding strategies can also be implemented in code, as sketched below. Finally, the $->{...} part of app$->{(id % 2)+1} is actually a Groovy expression: Sharding-JDBC parses inline sharding strategies with Groovy, so if you want to configure more complex strategies it is worth learning some Groovy syntax.
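For reference, here is a sketch of that "in code" option (my own example, not part of the original setup, assuming the ShardingSphere 4.x API): a table sharding algorithm can implement PreciseShardingAlgorithm and be wired in through the standard table strategy (sharding column plus the precise algorithm class name) instead of the inline expression.

import java.util.Collection;

import org.apache.shardingsphere.api.sharding.standard.PreciseShardingAlgorithm;
import org.apache.shardingsphere.api.sharding.standard.PreciseShardingValue;

/**
 * Hypothetical code-based replacement for the inline table expression:
 * routes by id to user1 or user2, but since it is Java any logic (ranges, lookups, ...) is possible.
 */
public class UserTableShardingAlgorithm implements PreciseShardingAlgorithm<Long> {

    @Override
    public String doSharding(Collection<String> availableTargetNames, PreciseShardingValue<Long> shardingValue) {
        long id = shardingValue.getValue();
        String target = "user" + (id % 2 + 1);           // same modulo idea as the inline rule
        for (String name : availableTargetNames) {        // availableTargetNames = [user1, user2]
            if (name.equals(target)) {
                return name;
            }
        }
        throw new IllegalStateException("no table found for id " + id);
    }
}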
4.3. Next, let's look at the configured read-write splitting rules and how read-write splitting is achieved.
master-slave-rules:
  app1:                                      # shard app1
    master-data-source-name: master0         # master data source for shard app1
    slave-data-source-names: master0slave0   # slave data source for shard app1
  app2:                                      # shard app2
    master-data-source-name: master1         # master data source for shard app2
    slave-data-source-names: master1slave0   # slave data source for shard app2
The read-write splitting rules above declare the master and slave data sources for shard app1 and shard app2. Why must the rule names be app1 and app2? When I first configured this I failed several times: following the official manual, I assumed the name could be anything, so I configured ds0 and ds1, but the application failed to start with the following error:
Cannot find data source in sharding rule, invalid actual data node is: 'app1.user1'
At first I suspected the Sharding-JDBC version, but switching versions made no difference, so I started debugging the source code:
The code being debugged checks whether the set of configured shard names, in my case ds0 and ds1, contains the data source name referenced by the actual data nodes, that is, the database name (app1/app2). So the master-slave rule names here must match the database names used in the sharding strategy configured above.
4.4. Verification
Next, we verify the actual effect. Here is the unit test code.
/**
 * @author cgg
 * @version 1.0.0
 * @date 2021/10/25
 */
@SpringBootTest(classes = ShardingJdbcApp.class)
@RunWith(SpringRunner.class)
public class AppTest {

    @Resource
    private IUserService userService;

    /**
     * Test inserting data through Sharding-JDBC
     */
    @Test
    public void testShardingJdbcInsert() {
        userService.InsertUser();
    }

    /**
     * Test querying data through Sharding-JDBC
     */
    @Test
    public void testShardingJdbcQuery() {
        // Query all
        userService.queryUserList();
        // Query according to specified criteria
        userService.queryUserById(1452619866473324545L);
    }
}

/**
 * @author cgg
 * @version 1.0.0
 * @date 2021/10/25
 */
@Service
@Slf4j
public class UserServiceImpl implements IUserService {

    @Resource
    private UserMapper userMapper;

    @Resource
    private DataSource dataSource;

    @Override
    public List<User> queryUserList() {
        List<User> userList = userMapper.queryUserList();
        userList.forEach(user -> System.out.println(user.toString()));
        return userList;
    }

    @Override
    public User queryUserById(Long id) {
        User user = userMapper.selectOne(Wrappers.<User>lambdaQuery().eq(User::getId, id));
        System.out.println(user.toString());
        return user;
    }

    @Override
    public void InsertUser() {
        for (int i = 20; i < 40; i++) {
            User user = new User();
            user.setName("XX-" + i);
            int count = userMapper.insert(user);
            System.out.println(count);
        }
    }
}
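The User entity, UserMapper and IUserService used above are not listed in the original post; a minimal sketch of what they could look like with MyBatis-Plus follows (the class names match the code above, but the annotations and method signatures are my assumption):

// --- User.java (sketch) ---
import com.baomidou.mybatisplus.annotation.TableName;
import lombok.Data;

@Data
@TableName("user")                 // logical table name; Sharding-JDBC routes it to user1/user2
public class User {
    private Long id;               // filled by the SNOWFLAKE key-generator configured above
    private String name;
}

// --- UserMapper.java (sketch) ---
import java.util.List;
import com.baomidou.mybatisplus.core.mapper.BaseMapper;

public interface UserMapper extends BaseMapper<User> {
    // insert()/selectOne() are inherited from BaseMapper;
    // queryUserList() would be backed by resources/mapper/UserMapper.xml (e.g. SELECT id, name FROM user)
    List<User> queryUserList();
}

// --- IUserService.java (sketch) ---
import java.util.List;

public interface IUserService {
    List<User> queryUserList();
    User queryUserById(Long id);
    void InsertUser();
}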
4.4.1. First look at all query results
4.4.2. Look at the results of a single query
4.4.3. Look at the results of the inserts (the rows are actually written to the master data sources, for example the user1 table of the app1 database; every insert goes to a master data source and never to a slave)
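One extra note that is not in the original article: reads normally go to the slave, so right after a write the slave may not have caught up yet. If a specific query must read from the master, ShardingSphere's hint API can force master routing for the current thread. A sketch of a method that could be added to UserServiceImpl above, assuming the 4.x HintManager API:

import org.apache.shardingsphere.api.hint.HintManager;
// (other imports as in UserServiceImpl above)

public User queryUserByIdFromMaster(Long id) {
    // HintManager is thread-bound; closing it releases the hint
    try (HintManager hintManager = HintManager.getInstance()) {
        hintManager.setMasterRouteOnly();  // route this thread's queries to the master data source
        return userMapper.selectOne(Wrappers.<User>lambdaQuery().eq(User::getId, id));
    }
}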
4, Problems and summary
At this point the basic usage of Sharding-JDBC works without problems. Beyond this simplest usage, we still have to consider how to handle transactions once data is sharded (i.e. distributed transactions), and the data inconsistency and synchronization delay that come with read-write splitting. These topics require learning the underlying concepts and then choosing a scheme that fits the actual business. The next article will focus on the use of Sharding-Proxy, and a separate post will cover distributed transactions and master-slave replication lag.