A Mycat distributed database architecture solution: sharding databases and tables

Edited by Echo. Reprints are welcome; please credit the source of the article. You are also welcome to add Echo on WeChat (ID: t2421499075) for discussion and learning. Win a hundred battles without boasting, lose a hundred without yielding, and keep striving forward. ——This is really powerful!!!

Preparation:

If all the previous preparation is done and Mycat is already configured for read-write separation, you can move straight on to sharding databases and tables with Mycat. For setting up read-write separation in Mycat, see: https://blog.csdn.net/xlecho/article/details/102897050

Because we already configured the login user name and password when setting up read-write separation in Mycat, we do not need to modify server.xml to implement sharding. However, the sharding rules involve two new configuration files:

  • rule.xml
  • autopartition-long.txt

Configure schema.xml

The biggest difference between sharding and read-write separation lies in the dataHost configuration. Read-write separation needs only one dataHost, but with a single dataHost there is no way to read from and write to multiple databases. To split a table across shards, every library backing the table must be both readable and writable, so in the table configuration each dataNode maps to its own writable dataHost. Another difference from read-write separation is the sharding rule itself: since splitting databases and tables requires a rule, we add the rule auto-sharding-long.

<?xml version="1.0"?>
<!DOCTYPE mycat:schema SYSTEM "schema.dtd">
<mycat:schema xmlns:mycat="http://io.mycat/">

    <schema name="userDB" checkSQLschema="true" sqlMaxLimit="100">
        <table name="user" dataNode="dn1,dn2,dn3" primaryKey="id" rule="auto-sharding-long"/>
    </schema>
    <dataNode name="dn1" dataHost="testNode1" database="test"/>
    <dataNode name="dn2" dataHost="testNode2" database="test"/>
    <dataNode name="dn3" dataHost="testNode3" database="test"/>

    <dataHost name="testNode1" maxCon="1000" minCon="10" balance="1" writeType="0" dbType="mysql" dbDriver="native" switchType="-1" slaveThreshold="100">
        <heartbeat>select user()</heartbeat>
        <writeHost host="hostM1" url="192.168.222.132:3306" user="root" password="123456" />
    </dataHost>

    <dataHost name="testNode2" maxCon="1000" minCon="10" balance="1" writeType="0" dbType="mysql" dbDriver="native" switchType="-1" slaveThreshold="100">
        <heartbeat>select user()</heartbeat>
        <writeHost host="hostM2" url="192.168.222.133:3306" user="root" password="123456" />
    </dataHost>

    <dataHost name="testNode3" maxCon="1000" minCon="10" balance="1" writeType="0" dbType="mysql" dbDriver="native" switchType="-1" slaveThreshold="100">
        <heartbeat>select user()</heartbeat>
        <writeHost host="hostM3" url="192.168.222.134:3306" user="root" password="123456" />
    </dataHost>

</mycat:schema>

Configure rule.xml

Since the schema references the auto-sharding-long rule, the corresponding rule must be defined in rule.xml. (Mycat's stock rule.xml already contains the rule we need, so nothing has to change there; however, the autopartition-long.txt file that auto-sharding-long points to does need to be changed for this demonstration.)

<?xml version="1.0" encoding="UTF-8"?>
<!-- - - Licensed under the Apache License, Version 2.0 (the "License");
        - you may not use this file except in compliance with the License. - You
        may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0
        - - Unless required by applicable law or agreed to in writing, software -
        distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT
        WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the
        License for the specific language governing permissions and - limitations
        under the License. -->
<!DOCTYPE mycat:rule SYSTEM "rule.dtd">
<mycat:rule xmlns:mycat="http://io.mycat/">
    <tableRule name="rule1">
        <rule>
            <columns>id</columns>
            <algorithm>func1</algorithm>
        </rule>
    </tableRule>
    <tableRule name="rule2">
        <rule>
            <columns>user_id</columns>
            <algorithm>func1</algorithm>
        </rule>
    </tableRule>

    <tableRule name="sharding-by-intfile">
        <rule>
            <columns>sharding_id</columns>
            <algorithm>hash-int</algorithm>
        </rule>
    </tableRule>
    <tableRule name="auto-sharding-long">
        <rule>
            <columns>id</columns>
            <algorithm>rang-long</algorithm>
        </rule>
    </tableRule>
    <tableRule name="mod-long">
        <rule>
            <columns>id</columns>
            <algorithm>mod-long</algorithm>
        </rule>
    </tableRule>
    <tableRule name="sharding-by-murmur">
        <rule>
            <columns>id</columns>
            <algorithm>murmur</algorithm>
        </rule>
    </tableRule>
    <tableRule name="crc32slot">
        <rule>
            <columns>id</columns>
            <algorithm>crc32slot</algorithm>
        </rule>
    </tableRule>
    <tableRule name="sharding-by-month">
        <rule>
            <columns>create_time</columns>
            <algorithm>partbymonth</algorithm>
        </rule>
    </tableRule>
    <tableRule name="latest-month-calldate">
        <rule>
            <columns>calldate</columns>
            <algorithm>latestMonth</algorithm>
        </rule>
    </tableRule>

    <tableRule name="auto-sharding-rang-mod">
        <rule>
            <columns>id</columns>
            <algorithm>rang-mod</algorithm>
        </rule>
    </tableRule>
    <tableRule name="jch">
        <rule>
            <columns>id</columns>
            <algorithm>jump-consistent-hash</algorithm>
        </rule>
    </tableRule>

    <function name="murmur" class="io.mycat.route.function.PartitionByMurmurHash">
        <property name="seed">0</property><!-- The default is 0. -->
        <property name="count">2</property><!-- The number of database nodes to be partitioned must be specified, otherwise it cannot be partitioned -->
        <property name="virtualBucketTimes">160</property><!-- An actual database node is mapped to so many virtual nodes, which is 160 times by default, that is, the number of virtual nodes is 160 times that of physical nodes -->
        <!-- <property name="weightMapFile">weightMapFile</property>
             Node weights. A node without a specified weight defaults to 1. In the properties file, use the integer values from 0 to count-1 as keys and the node weight values as values. Each weight must be a positive integer; otherwise 1 is used instead. -->
        <!-- <property name="bucketMapPath">/etc/mycat/bucketMapPath</property>
             Used during testing to observe how physical and virtual nodes are distributed. If this property is specified, the mapping from each virtual node's murmur hash value to its physical node is written to this file line by line. There is no default value; if unspecified, nothing is output. -->
    </function>
    <function name="crc32slot" class="io.mycat.route.function.PartitionByCRC32PreSlot">
        <property name="count">2</property><!-- The number of database nodes to be partitioned must be specified, otherwise it cannot be partitioned -->
    </function>
    <function name="hash-int"
        class="io.mycat.route.function.PartitionByFileMap">
        <property name="mapFile">partition-hash-int.txt</property>
    </function>
    <function name="rang-long"
        class="io.mycat.route.function.AutoPartitionByLong">
        <property name="mapFile">autopartition-long.txt</property>
    </function>
    <function name="mod-long" class="io.mycat.route.function.PartitionByMod">
        <!-- how many data nodes -->
        <property name="count">3</property>
    </function>
    <function name="func1" class="io.mycat.route.function.PartitionByLong">
        <property name="partitionCount">8</property>
        <property name="partitionLength">128</property>
    </function>
    <function name="latestMonth" class="io.mycat.route.function.LatestMonthPartion">
        <property name="splitOneDay">24</property>
    </function>
    <function name="partbymonth"
        class="io.mycat.route.function.PartitionByMonth">
        <property name="dateFormat">yyyy-MM-dd</property>
        <property name="sBeginDate">2015-01-01</property>
    </function>

    <function name="rang-mod" class="io.mycat.route.function.PartitionByRangeMod">
        <property name="mapFile">partition-range-mod.txt</property>
    </function>

    <function name="jump-consistent-hash" class="io.mycat.route.function.PartitionByJumpConsistentHash">
        <property name="totalBuckets">3</property>
    </function>
</mycat:rule>

Configure autopartition-long.txt

The values shipped in this file are expressed in units of M (in Mycat's mapFile notation, K = 1000 and M = 10000), which makes the ranges too large for a convenient demonstration. We change the configuration as follows:

# id in range 0-2000 -> index 0, i.e. the dn1 node
0-2000=0
# id in range 2000-4000 -> index 1, i.e. the dn2 node
2000-4000=1
# id in range 4000-8000 -> index 2, i.e. the dn3 node
4000-8000=2

We have three dataNodes, which amounts to a set of three servers. The value after the equals sign is the zero-based index of the dataNode, so 0 represents our dn1.
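The range lookup that auto-sharding-long performs can be sketched in a few lines. This is a hypothetical illustration of the routing logic, not Mycat's actual AutoPartitionByLong implementation:

```python
# Sketch of range-based routing as described by autopartition-long.txt.
# Illustration only -- not Mycat's actual AutoPartitionByLong code.

# (start, end, dataNode index), mirroring the mapFile above
RANGES = [
    (0, 2000, 0),      # -> dn1
    (2000, 4000, 1),   # -> dn2
    (4000, 8000, 2),   # -> dn3
]

def route(shard_id):
    """Return the zero-based dataNode index for a sharding-column value."""
    for start, end, node in RANGES:
        if start <= shard_id <= end:
            return node
    raise ValueError("id %d is outside every configured range" % shard_id)

print(route(1000))  # 0 -> dn1
print(route(3000))  # 1 -> dn2
print(route(6000))  # 2 -> dn3
```

An id that falls outside every configured range cannot be routed; the ValueError above plays the role of the routing error Mycat reports in that case.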

Test:

Insert three rows of data that exercise the configured rules:

INSERT INTO `user`(`id`, `user_name`, `pass_word`) VALUES (1000, 'a', '123456');
INSERT INTO `user`(`id`, `user_name`, `pass_word`) VALUES (3000, 'b', '123456');
INSERT INTO `user`(`id`, `user_name`, `pass_word`) VALUES (6000, 'c', '123456');

After inserting, we connect to Mycat and query the table to view the data:

Here we see three copies of the id = 1000 row. Because the id equals 1000, the row was routed to dn1; dn1's write host is server 132, the master in our MySQL master-slave replication setup, so after the insert the two slave servers, 133 and 134, copy the data from it directly.

Verification

Each row should land on its own shard: the row with id = 1000 on dn1 (server 132), the row with id = 3000 on server 133, and the row with id = 6000 on server 134.

  • Master server 132
  • Slave server 133
  • Slave server 134
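To check where each row actually landed, you can log in to each backend MySQL server and query the physical table directly. This is a sketch based on the addresses and database name configured in schema.xml above:

```sql
-- Run against the test database on each backend in turn:
-- 192.168.222.132 (dn1), 192.168.222.133 (dn2), 192.168.222.134 (dn3).
SELECT id, user_name FROM test.user;
-- Expected per the ranges above: 132 holds id = 1000, 133 holds id = 3000,
-- and 134 holds id = 6000 (plus any rows arriving via replication).
```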

Be a blogger with a bottom line


Posted on Mon, 04 Nov 2019 21:49:27 -0500 by Ausx