Building a MongoDB cluster on CentOS 7, in detail

1. MongoDB (NoSQL database) cluster construction

1. First, a brief look at the topology of the Mongo cluster: a sharded cluster consists of config servers (which hold cluster metadata), shard replica sets (which hold the data), and mongos routers (which route client queries).
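For reference, here is the process and port layout used throughout this article (the example IPs 192.168.28.201/202/203 correspond to node1/node2/node3):

Process            Port    Replica set    Runs on
mongos (router)    23000   -              node1, node2, node3
config server      24000   config         node1, node2, node3
shard1 server      25001   shard1         node1, node2, node3
shard2 server      25002   shard2         node1, node2, node3
shard3 server      25003   shard3         node1, node2, node3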

2. Install MongoDB on node1, node2, and node3 (perform the following operations on all three nodes; Xshell is recommended for running them in parallel)
(1) Configure the MongoDB yum repository
Command: vi /etc/yum.repos.d/mongodb-org.repo

Press i to enter insert mode, paste the following, then save and exit (:wq)

[mongodb-org]
name=MongoDB Repository
baseurl=http://mirrors.aliyun.com/mongodb/yum/redhat/7Server/mongodb-org/3.2/x86_64/
gpgcheck=0
enabled=1
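After saving, you can optionally confirm that yum sees the new repository (the repo id mongodb-org comes from the bracketed name in the file above):

Command: yum repolist enabled | grep -i mongodb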


(2) Edit /etc/sysconfig/selinux to disable SELinux (optional)

Command: vi /etc/sysconfig/selinux
What is SELinux?
Security-Enhanced Linux (SELinux) is a Linux kernel module and a security subsystem of Linux.

Change SELINUX to disabled

SELINUX=disabled

Note: a reboot is required for the change to take effect
Command: reboot
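If you do not want to reboot right away, SELinux can also be switched to permissive mode for the current session (this is not a full disable; only the file change plus a reboot achieves that):

Command: setenforce 0
Verify with: getenforce (it should now print Permissive)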

(3) After the reboot, you can optionally update the package index (this may take a while), then install MongoDB online.
Update command: yum update

The command to install MongoDB online: yum install -y mongodb-org

(4) Modify the configuration file. Command: vi /etc/mongod.conf
Comment out bindIp, or change it to the current machine's IP address, so that other machines can connect to the MongoDB instance running in the virtual machine.
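For reference, a minimal sketch of how the net section of /etc/mongod.conf might look after the change (0.0.0.0 listens on all interfaces; in MongoDB 3.2, removing or commenting out bindIp has the same effect):

net:
  port: 27017
  bindIp: 0.0.0.0   # or the node's own address, e.g. 192.168.28.201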

3. Create the mongo directories on node1, node2, and node3 (perform the following on all three nodes)

(1) Create the base mongo directory
Command: mkdir /opt/mongo

(2) Create mongos, config, shard1, shard2 and shard3 under the mongo folder. The commands are as follows:

mkdir /opt/mongo/mongos
mkdir /opt/mongo/config
mkdir /opt/mongo/shard1
mkdir /opt/mongo/shard2
mkdir /opt/mongo/shard3

(3) Under each of the five folders, create three subdirectories for data, logs, and the pid file (run); a single-command equivalent is shown after these lists:

mkdir /opt/mongo/mongos/data
mkdir /opt/mongo/config/data
mkdir /opt/mongo/shard1/data
mkdir /opt/mongo/shard2/data
mkdir /opt/mongo/shard3/data

mkdir /opt/mongo/mongos/log
mkdir /opt/mongo/config/log
mkdir /opt/mongo/shard1/log
mkdir /opt/mongo/shard2/log
mkdir /opt/mongo/shard3/log

mkdir /opt/mongo/mongos/run 
mkdir /opt/mongo/config/run 
mkdir /opt/mongo/shard1/run 
mkdir /opt/mongo/shard2/run 
mkdir /opt/mongo/shard3/run 
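Equivalently, all of the directories from (1), (2), and (3) can be created with one brace-expansion command:

Command: mkdir -p /opt/mongo/{mongos,config,shard1,shard2,shard3}/{data,log,run}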

4. Create the config server on node1, node2, and node3 (perform the following on all three nodes)

(1) Create the config server directories on each machine (a no-op if you already created them in step 3)
Command: mkdir -p /opt/mongo/config/{log,data,run}

(2) Create the config server configuration file on each machine; paste the following and save it
Command: vi /opt/mongo/config/mongod.conf

systemLog:
  destination: file
  logAppend: true
  path: /opt/mongo/config/log/mongod.log
storage:
  dbPath: /opt/mongo/config/data
  journal:
    enabled: true
processManagement:
  fork: true
  pidFilePath: /opt/mongo/config/run/mongod.pid
net:
  port: 24000
replication:
  replSetName: config
sharding:
  clusterRole: configsvr

(3) Start all mongo config server services
Command: mongod --config /opt/mongo/config/mongod.conf

If the command fails with a locale-related error, the fix is as follows:

Set the locale variable manually by adding a line to the environment variable configuration file.

Command: vi /etc/profile

export LC_ALL=C

Reload the environment variables. Command: source /etc/profile
Then run mongod --config /opt/mongo/config/mongod.conf again.
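To confirm that the config server is running and listening on port 24000, checks like the following should work (netstat is provided by the net-tools package on CentOS 7):

ps -ef | grep [m]ongod
netstat -lntp | grep 24000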

5. Log in to the config server on one of node1, node2, or node3 to create and activate the replica set configuration.
For example, logging in to the mongo config server on node1:
Command: mongo --port 24000

6. Apply the configuration (this can be executed on a single node)
(1) Execute the following in the shell you just logged in to.
Note: change the hosts in the members array to your own nodes' IPs; the _id: "config" value must match replSetName: config in the config server configuration file above.

config = {
   _id : "config",
    members : [
        {_id : 0, host : "192.168.28.201:24000" },
        {_id : 1, host : "192.168.28.202:24000" },
        {_id : 2, host : "192.168.28.203:24000" }
    ]
}


(2) Initialize replica set configuration
Command: rs.initiate(config)


(3) View the replica set status
Command: rs.status()

7. Create the first shard and its replica set on node1, node2, and node3 (perform the following on all three nodes)
(1) Create the directories for the mongo shard1 server
Command: mkdir -p /opt/mongo/shard1/{log,data,run}

(2) Create the shard1 server configuration file on each machine; paste the following and save it
Command: vi /opt/mongo/shard1/mongod.conf

systemLog:
  destination: file
  logAppend: true
  path: /opt/mongo/shard1/log/mongod.log
storage:
  dbPath: /opt/mongo/shard1/data
  journal:
    enabled: true
processManagement:
  fork: true
  pidFilePath: /opt/mongo/shard1/run/mongod.pid
net:
  port: 25001
replication:
  replSetName: shard1
sharding:
  clusterRole: shardsvr

(3) Start the shard1 server on all nodes
Command: mongod --config /opt/mongo/shard1/mongod.conf

(4) Log in to any shard1 server (log in on the machine you want to become the primary; here node1 is used as shard1's primary) and initialize the replica set (execute the following on that one node)
Command: mongo --port 25001

(5) Switch to the admin database on the node you logged in to in (4)
Command: use admin

(6) Define the replica set configuration as follows.
Note: change the IPs in the members array to your own nodes'; the "shard1" name must match replSetName above. The first node listed in the members array typically becomes the primary.

config = {
   _id : "shard1",
    members : [
        {_id : 0, host : "192.168.28.201:25001" },
        {_id : 1, host : "192.168.28.202:25001" },
        {_id : 2, host : "192.168.28.203:25001" }
    ]
}

(7) Initialize replica set configuration
Command: rs.initiate(config)

(8) View the replica set status
Command: rs.status()

8. Create the second shard and its replica set on node1, node2, and node3 (perform the following on all three nodes)
(1) Create the directories for the mongo shard2 server
Command: mkdir -p /opt/mongo/shard2/{log,data,run}

(2) Create the shard2 server configuration file on each machine; paste the following and save it
Command: vi /opt/mongo/shard2/mongod.conf

systemLog:
  destination: file
  logAppend: true
  path: /opt/mongo/shard2/log/mongod.log
storage:
  dbPath: /opt/mongo/shard2/data
  journal:
    enabled: true
processManagement:
  fork: true
  pidFilePath: /opt/mongo/shard2/run/mongod.pid
net:
  port: 25002
replication:
  replSetName: shard2
sharding:
  clusterRole: shardsvr

(3) Start the shard2 server on all nodes
Command: mongod --config /opt/mongo/shard2/mongod.conf

(4) Log in to any shard2 server (log in on the machine you want to become the primary; here node2 is used as shard2's primary) and initialize the replica set (execute the following on that one node)
Command: mongo --port 25002

(5) Switch to the admin database
Command: use admin

(6) Define the replica set configuration, changing the IPs to your own nodes'

config = {
   _id : "shard2",
    members : [
        {_id : 0, host : "192.168.28.202:25002" },
        {_id : 1, host : "192.168.28.201:25002" },
        {_id : 2, host : "192.168.28.203:25002" }
    ]
}

(7) Initialize replica set configuration
Command: rs.initiate(config)

(8) View the replica set status
Command: rs.status()

9. Create the third shard and its replica set on node1, node2, and node3 (perform the following on all three nodes)
(1) Create the directories for the mongo shard3 server
Command: mkdir -p /opt/mongo/shard3/{log,data,run}

(2) Create the shard3 server configuration file on each machine; paste the following and save it
Command: vi /opt/mongo/shard3/mongod.conf

systemLog:
  destination: file
  logAppend: true
  path: /opt/mongo/shard3/log/mongod.log
storage:
  dbPath: /opt/mongo/shard3/data
  journal:
    enabled: true
processManagement:
  fork: true
  pidFilePath: /opt/mongo/shard3/run/mongod.pid
net:
  port: 25003
replication:
  replSetName: shard3
sharding:
  clusterRole: shardsvr

(3) Start the shard3 server on all nodes
Command: mongod --config /opt/mongo/shard3/mongod.conf

(4) Log in to any shard3 server (log in on the machine you want to become the primary; here node3 is used as shard3's primary) and initialize the replica set (execute the following on that one node)
Command: mongo --port 25003

(5) Switch to the admin database
Command: use admin

(6) Define the replica set configuration, changing the IPs to your own nodes'

config = {
   _id : "shard3",
    members : [
        {_id : 0, host : "192.168.28.203:25003" },
        {_id : 1, host : "192.168.28.201:25003" },
        {_id : 2, host : "192.168.28.202:25003" }
    ]
}

(7) Initialize replica set configuration
Command: rs.initiate(config)

(8) View the replica set status
Command: rs.status()

10. Configure the mongos process and create its directories on node1, node2, and node3 (perform the following on all three nodes)
(1) Create the directory where the mongos process is located
Command: mkdir -p /opt/mongo/mongos/{log,data,run}

(2) Create the mongos configuration file; paste the following and save it (change the IPs to your own nodes'). Note that the configDB value has the form <config-replica-set-name>/<host:port,...> and must use the replica set name config defined earlier.
Command: vi /opt/mongo/mongos/mongod.conf

systemLog:
  destination: file
  logAppend: true
  path: /opt/mongo/mongos/log/mongod.log
processManagement:
  fork: true
  pidFilePath: /opt/mongo/mongos/run/mongod.pid
net:
  port: 23000
sharding:
  configDB: config/192.168.28.201:24000,192.168.28.202:24000,192.168.28.203:24000

(3) Start the routing server on all nodes
Command: mongos --config /opt/mongo/mongos/mongod.conf

(4) Log in to one of the routing nodes and register the shards manually (pick any node for the following operations; I use node3)
Command: mongo --port 23000

(5) Add the shards to mongos (remember to change the IPs to your own nodes')

sh.addShard("shard1/192.168.28.201:25001,192.168.28.202:25001,192.168.28.203:25001")
sh.addShard("shard2/192.168.28.202:25002,192.168.28.201:25002,192.168.28.203:25002")
sh.addShard("shard3/192.168.28.203:25003,192.168.28.201:25003,192.168.28.202:25003")
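After the three sh.addShard() calls succeed, you can verify from the same shell that mongos knows about all the shards:

sh.status()

Under the shards section of the output, shard1, shard2, and shard3 should all be listed.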

11. Allow reads from secondaries (this takes effect only for the current shell session; to read from secondaries permanently, specify a read preference on the client connection instead)
Command (execute once, e.g. on node3): rs.slaveOk()
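If you want secondary reads driven by the client side instead, the mongo shell can set a per-connection read preference (drivers accept an equivalent option); a minimal sketch:

db.getMongo().setReadPref("secondaryPreferred")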

12. Connect to the cluster through the mongos client on node1, node2, or node3, create a database, and enable sharding on it
(1) Create the database
Command: use mybike
(2) Create bikes collection
Command: db.createCollection("bikes")
(3) Switch to the admin database (sharding is enabled per database from admin)
Command: use admin
(4) Enable sharding for the mybike database
Command: db.runCommand({"enablesharding":"mybike"})
(5) Shard the bikes collection in the mybike database by a hashed _id key
Command: db.runCommand({"shardcollection":"mybike.bikes","key":{_id:'hashed'}})
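The two db.runCommand() calls in (4) and (5) also have sh.* shell helpers that do the same thing, if you prefer them:

sh.enableSharding("mybike")
sh.shardCollection("mybike.bikes", { _id: "hashed" })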
(6) Switch back to the mybike database
Command: use mybike
(7) Insert data into the bikes collection

db.bikes.insert( { "status": 1, "loc": [28.189153, 112.960318], "qrcode": "" } )
db.bikes.insert( { "status": 1, "loc": [28.189155, 112.960318], "qrcode": "" } )
db.bikes.insert( { "status": 1, "loc": [28.189159, 112.960318], "qrcode": "" } )
db.bikes.insert( { "status": 1, "loc": [28.189163, 112.960318], "qrcode": "" } )
db.bikes.insert( { "status": 1, "loc": [28.189170, 112.960318], "qrcode": "" } )
db.bikes.insert( { "status": 1, "loc": [28.189393, 112.943868], "qrcode": "" } )
db.bikes.insert( { "status": 1, "loc": [28.197871, 112.957641], "qrcode": "" } )
db.bikes.insert( { "status": 1, "loc": [28.201437, 112.960336], "qrcode": "" } )
db.bikes.insert( { "status": 1, "loc": [28.201487, 112.960336], "qrcode": "" } )
db.bikes.insert( { "status": 1, "loc": [28.20392, 112.958953], "qrcode": "" } )
db.bikes.insert( { "status": 1, "loc": [28.204381, 112.959887], "qrcode": "" } )
db.bikes.insert( { "status": 1, "loc": [28.204391, 112.959885], "qrcode": "" } )
db.bikes.insert( { "status": 1, "loc": [28.204481, 112.959789], "qrcode": "" } )
db.bikes.insert( { "status": 1, "loc": [28.204181, 112.959671], "qrcode": "" } )
db.bikes.insert( { "status": 1, "loc": [28.204881, 112.959556], "qrcode": "" } )
db.bikes.insert( { "status": 1, "loc": [28.204681, 112.959874], "qrcode": "" } )
db.bikes.insert( { "status": 1, "loc": [28.204201, 112.959867], "qrcode": "" } )
db.bikes.insert( { "status": 1, "loc": [28.2043941, 112.959854], "qrcode": "" } )

(8) Queries issued through the mongos process return the merged results from all shards that match the condition
Login command: mongo --port 23000
List all databases: show dbs
Switch to the mybike database: use mybike
List the collections in the mybike database: show collections
Query the data in the collection: db.bikes.find()
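Because bikes is sharded on a hashed _id, the inserted documents should be spread across shard1, shard2, and shard3. From the mongos shell (in the mybike database), the per-shard distribution can be checked with:

db.bikes.getShardDistribution()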
