MongoDB Replica Sets

Overview of Replica Set Features

A replica set is a group of mongod processes that maintain the same data set. Replica sets provide the basis for all production deployments: data redundancy and high availability. MongoDB achieves high availability through automatic failover, and this section describes how that part of MongoDB works.

How a Replica Set Works

The journaling feature provides data recovery, but only for a single node; a replica set spans a group of processes, usually on multiple nodes. Journaling guarantees data integrity on each node, while the replica set provides automatic failover across the whole group, ensuring high availability of the database.
In a production environment, a replica set should contain at least three nodes: a primary, one or more secondaries, and possibly an arbiter.
The primary receives all write operations. A replica set has exactly one primary, which accepts writes with a configurable write concern (described later) and records every change to its data set in its oplog.

Secondary nodes replicate the primary's data, and there can be more than one of them. If the primary becomes unavailable, an election chooses a new primary from among the secondaries.

Besides primary and secondary nodes, you can add a mongod instance to the replica set as an arbiter. An arbiter does not maintain a copy of the data set; its main function is to exchange heartbeats with the other members of the set and vote, ensuring the number of voters needed for an election. Because it stores no data, an arbiter provides a cheaper way to obtain a quorum than a full data-bearing member. If the replica set has an even number of voting nodes, adding an arbiter lets a primary obtain a majority of votes. Arbiters require no special hardware.

An arbiter is always an arbiter, whereas the primary and secondary nodes may swap roles through an election (triggered when the primary fails).

When the primary becomes unavailable, the remaining members detect the missing heartbeats, hold an election, and one of the secondaries takes over as the new primary.

Creating a replica set:

Open a CMD window in the folder D:\MongoDB\Server\3.2\bin and start three mongod processes:

mongod --dbpath=D:\MongoDB\Server\3.2\data\rs0_0 --logpath=D:\MongoDB\Server\3.2\logs\rs0_0.log --port=40000 --replSet=rs0
mongod --dbpath=D:\MongoDB\Server\3.2\data\rs0_1 --logpath=D:\MongoDB\Server\3.2\logs\rs0_1.log --port=40001 --replSet=rs0
mongod --dbpath=D:\MongoDB\Server\3.2\data\rs0_2 --logpath=D:\MongoDB\Server\3.2\logs\rs0_2.log --port=40002 --replSet=rs0

Note: you need to create the subfolders rs0_0, rs0_1, and rs0_2 under the data folder first.
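The same directory preparation can be scripted; here is a small sketch using illustrative relative paths that stand in for the Windows paths above (D:\MongoDB\Server\3.2\data\rs0_0 and so on):

```python
import os

# Create one data directory per replica set member, plus a log directory.
# Relative paths are illustrative stand-ins for the Windows layout above.
for name in ("rs0_0", "rs0_1", "rs0_2"):
    os.makedirs(os.path.join("data", name), exist_ok=True)
os.makedirs("logs", exist_ok=True)

print(sorted(os.listdir("data")))
```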

Then start a mongo client against the first instance:

mongo --port 40000

Run the replica set initialization command:

> rs.initiate()

The result is as follows:

{
"info2" : "no configuration specified. Using a default configuration for the set",
"me" : "linfl-PC:40000",
"ok" : 1
}

Check the configuration:

rs0:OTHER> rs.conf()
{
"_id" : "rs0",
"version" : 1,
"protocolVersion" : NumberLong(1),
"members" : [
{
"_id" : 0,
"host" : "linfl-PC:40000",
"arbiterOnly" : false,
"buildIndexes" : true,
"hidden" : false,
"priority" : 1,
"tags" : {

},
"slaveDelay" : NumberLong(0),
"votes" : 1
}
],
"settings" : {
"chainingAllowed" : true,
"heartbeatIntervalMillis" : 2000,
"heartbeatTimeoutSecs" : 10,
"electionTimeoutMillis" : 10000,
"getLastErrorModes" : {

},
"getLastErrorDefaults" : {
"w" : 1,
"wtimeout" : 0
},
"replicaSetId" : ObjectId("58a2e2f5c2e580f7b1c85b18")
}
}

Add the other two nodes:

rs0:PRIMARY> rs.add("linfl-PC:40001")
{ "ok" : 1 }
rs0:PRIMARY> rs.addArb("linfl-PC:40002")
{ "ok" : 1 }

Note: at this point the command-line prompt has changed to rs0:PRIMARY.

Observe the replica set's status information:

rs0:PRIMARY> rs.status()

The output is as follows:

{
"set" : "rs0", // replica set name
"date" : ISODate("2017-02-14T11:00:36.634Z"),
"myState" : 1, // 1: primary; 2: secondary
"term" : NumberLong(1),
"heartbeatIntervalMillis" : NumberLong(2000),
"members" : [
{
"_id" : 0,
"name" : "linfl-PC:40000",
"health" : 1, // 1: running; 0: failed
"state" : 1,
"stateStr" : "PRIMARY",
"uptime" : 216, // member uptime in seconds
"optime" : {
"ts" : Timestamp(1487070006, 1),
"t" : NumberLong(1)
},
"optimeDate" : ISODate("2017-02-14T11:00:06Z"),
"infoMessage" : "could not find member to sync from",
"electionTime" : Timestamp(1487069941, 2),
"electionDate" : ISODate("2017-02-14T10:59:01Z"),
"configVersion" : 3,
"self" : true
},
{
"_id" : 1,
"name" : "linfl-PC:40001",
"health" : 1,
"state" : 2,
"stateStr" : "SECONDARY",
"uptime" : 40,
"optime" : {
"ts" : Timestamp(1487070006, 1),
"t" : NumberLong(1)
},
"optimeDate" : ISODate("2017-02-14T11:00:06Z"),
"lastHeartbeat" : ISODate("2017-02-14T11:00:36.075Z"),
"lastHeartbeatRecv" : ISODate("2017-02-14T11:00:35.082Z"),
"pingMs" : NumberLong(0), // round-trip time between the remote member and this instance
"syncingTo" : "linfl-PC:40000", // the member this node syncs data from
"configVersion" : 3
},
{
"_id" : 2,
"name" : "linfl-PC:40002",
"health" : 1,
"state" : 7,
"stateStr" : "ARBITER",
"uptime" : 5,
"lastHeartbeat" : ISODate("2017-02-15T02:01:11.170Z"),
"lastHeartbeatRecv" : ISODate("2017-02-15T02:01:10.172Z"),
"pingMs" : NumberLong(0),
"syncingTo" : "linfl-PC:40001",
"configVersion" : 5
}
],
"ok" : 1
}

Since the arbiter instance does not synchronize data and only arbitrates, helping elect a new primary from the remaining secondaries when the primary fails, the machine running the arbiter does not need much storage space.

Data synchronization

Let's now demonstrate the data synchronization process with a few commands and explain what happens.
First, let's look at all the databases in this replica set:

rs0:PRIMARY> show dbs
local 0.000GB

There is only one database, local. Look at the collections in it:

rs0:PRIMARY> use local
switched to db local
rs0:PRIMARY> show collections
me
oplog.rs
replset.election
startup_log
system.replset

Note: MongoDB synchronizes data between replica set members through oplog.rs.
Let's watch oplog.rs change by inserting a record into the cms database:

rs0:PRIMARY> use cms
switched to db cms
rs0:PRIMARY> db.customers.insert({id:11,name:'lisi',orders:[{orders_id:1,create_time:'2017-02-06',products:[{product_name:'MiPad',price:'$100.00'},{product_name:'iphone',price:'$399.00'}]}],mobile:'13161020110',address:{city:'beijing',street:'taiyanggong'}})
WriteResult({ "nInserted" : 1 })
rs0:PRIMARY> db.customers.find()
{ "_id" : ObjectId("58a3bb2ca0bd576baa4763de"), "id" : 11, "name" : "lisi", "orders" : [ { "orders_id" : 1, "create_time" : "2017-02-06", "products" : [ { "product_name" : "MiPad", "price" : "$100.00" }, { "product_name" : "iphone", "price" : "$399.00" } ] } ], "mobile" : "13161020110", "address" : { "city" :"beijing", "street" : "taiyanggong" } }
rs0:PRIMARY> use local
switched to db local
rs0:PRIMARY> db.oplog.rs.find()
{ "ts" : Timestamp(1487069941, 1), "h" : NumberLong("-6355743292210446009"), "v" : 2, "op" : "n", "ns" : "", "o" : { "msg" : "initiating set" } }
{ "ts" : Timestamp(1487069942, 1), "t" : NumberLong(1), "h" : NumberLong("-1263029456710822127"), "v" : 2, "op" : "n", "ns" : "", "o" : { "msg" : "new primary"} }
{ "ts" : Timestamp(1487069995, 1), "t" : NumberLong(1), "h" : NumberLong("6502719191955655967"), "v" : 2, "op" : "n", "ns" : "", "o" : { "msg" : "Reconfig set","version" : 2 } }
{ "ts" : Timestamp(1487070006, 1), "t" : NumberLong(1), "h" : NumberLong("-2415405716599170931"), "v" : 2, "op" : "n", "ns" : "", "o" : { "msg" : "Reconfig set", "version" : 3 } }
{ "ts" : Timestamp(1487124022, 1), "t" : NumberLong(1), "h" : NumberLong("-478589502849657245"), "v" : 2, "op" : "n", "ns" : "", "o" : { "msg" : "Reconfig set", "version" : 4 } }
{ "ts" : Timestamp(1487124063, 1), "t" : NumberLong(1), "h" : NumberLong("653734030548343548"), "v" : 2, "op" : "n", "ns" : "", "o" : { "msg" : "Reconfig set","version" : 5 } }
{ "ts" : Timestamp(1487125292, 1), "t" : NumberLong(1), "h" : NumberLong("4089071333042150540"), "v" : 2, "op" : "c", "ns" : "cms.$cmd", "o" : { "create" :"customers" } }
{ "ts" : Timestamp(1487125292, 2), "t" : NumberLong(1), "h" : NumberLong("-682469243777763072"), "v" : 2, "op" : "i", "ns" : "cms.customers", "o" : { "_id" : ObjectId("58a3bb2ca0bd576baa4763de"), "id" : 11, "name" : "lisi", "orders" : [ { "orders_id" : 1, "create_time" : "2017-02-06", "products" : [ { "product_name" :"MiPad", "price" : "$100.00" }, { "product_name" : "iphone", "price" : "$399.00" } ] } ], "mobile" : "13161020110", "address" : { "city" : "beijing", "street" : "taiyanggong" } } }
rs0:PRIMARY>

We find that oplog.rs already contains the record we just created:

{ "ts" : Timestamp(1487125292, 2), "t" : NumberLong(1), "h" : NumberLong("-682469243777763072"), "v" : 2, "op" : "i", "ns" : "cms.customers", "o" : { "_id" : ObjectId("58a3bb2ca0bd576baa4763de"), "id" : 11, "name" : "lisi", "orders" : [ { "orders_id" : 1, "create_time" : "2017-02-06", "products" : [ { "product_name" :"MiPad", "price" : "$100.00" }, { "product_name" : "iphone", "price" : "$399.00" } ] } ], "mobile" : "13161020110", "address" : { "city" : "beijing", "street" : "taiyanggong" } } }

The op field is the operation code (i means insert); ns is the namespace the operation applies to; o is the document involved in the operation.

When the primary completes an insert, the secondaries perform a series of actions to keep the data in sync. Each secondary checks whether its local oplog.rs has changed, finds the timestamp of its newest entry, queries the primary's oplog.rs for all entries newer than that timestamp, appends those entries to its own oplog.rs collection, and then applies the operations they represent, completing data synchronization.
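The pull-and-apply loop just described can be sketched as follows. This is a minimal, self-contained simulation rather than MongoDB's actual implementation: oplog entries are plain dicts shaped after the oplog.rs documents shown above (ts, op, ns, o), and an in-memory dict stands in for the data files.

```python
# Sketch of a secondary's oplog pull-and-apply loop (toy model).

def last_applied_ts(oplog):
    """Timestamp of the newest entry in a node's local oplog.rs."""
    return oplog[-1]["ts"] if oplog else (0, 0)

def pull_new_entries(primary_oplog, since_ts):
    """Query the primary's oplog.rs for entries newer than since_ts."""
    return [e for e in primary_oplog if e["ts"] > since_ts]

def apply_entry(db, entry):
    """Apply one oplog entry to the local data set ('i' = insert).
    'n' (no-op) and 'c' (command) entries are ignored in this sketch."""
    if entry["op"] == "i":
        db.setdefault(entry["ns"], []).append(entry["o"])

def sync_once(primary_oplog, secondary_oplog, secondary_db):
    """One replication round: pull, append to the local oplog, apply."""
    for entry in pull_new_entries(primary_oplog, last_applied_ts(secondary_oplog)):
        secondary_oplog.append(entry)
        apply_entry(secondary_db, entry)

primary_oplog = [
    {"ts": (1487069941, 1), "op": "n", "ns": "", "o": {"msg": "initiating set"}},
    {"ts": (1487125292, 2), "op": "i", "ns": "cms.customers",
     "o": {"id": 11, "name": "lisi"}},
]
secondary_oplog, secondary_db = [], {}
sync_once(primary_oplog, secondary_oplog, secondary_db)
print(secondary_db["cms.customers"])  # the inserted document has arrived
```

Because each round only pulls entries newer than the local high-water mark, running sync_once again with an unchanged primary oplog applies nothing, which mirrors why oplog application is safe to repeat.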

View the secondary node database information at this time:

D:\MongoDB\Server\3.2\bin>mongo --port 40001
2017-02-15T10:39:36.688+0800 I CONTROL [main] Hotfix KB2731284 or later update is not installed, will zero-out data files
MongoDB shell version: 3.2.9
connecting to: 127.0.0.1:40001/test
rs0:SECONDARY> show dbs
2017-02-15T10:40:07.204+0800 E QUERY [thread1] Error: listDatabases failed:{ "ok" : 0, "errmsg" : "not master and slaveOk=false", "code" : 13435 } :
_getErrorWithCode@src/mongo/shell/utils.js:25:13
Mongo.prototype.getDBs@src/mongo/shell/mongo.js:62:1
shellHelper.show@src/mongo/shell/utils.js:761:19
shellHelper@src/mongo/shell/utils.js:651:15
@(shellhelp2):1:1

rs0:SECONDARY> rs.slaveOk() // by default, secondaries do not accept reads; this enables them for the current connection
rs0:SECONDARY> show dbs
cms 0.000GB
local 0.000GB

Also note that oplog.rs has a fixed size: the default is 50 MB on 32-bit systems and 5% of free disk space on 64-bit systems, and it can be set at startup with the --oplogSize option.
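As a rough illustration of that sizing rule (a sketch of the defaults stated above only; the real server also applies minimum and maximum bounds):

```python
def default_oplog_size_mb(is_64bit, free_disk_mb):
    """Approximate the default oplog.rs size described above:
    50 MB on 32-bit systems, 5% of free disk space on 64-bit systems."""
    return free_disk_mb * 0.05 if is_64bit else 50

print(default_oplog_size_mb(True, 100_000))   # 64-bit with ~100 GB free
print(default_oplog_size_mb(False, 100_000))  # 32-bit, fixed default
```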

Failover

MongoDB's automatic failover relies on heartbeats, which appear in the lastHeartbeat fields shown earlier. Every two seconds each mongod sends a heartbeat to the other members, and member status can be judged from the "health" field returned by rs.status(). If the primary of the replica set becomes unavailable, the secondaries trigger an election to choose a new primary. An arbiter only votes for other members; it can never be elected itself. If there are multiple secondaries, the one with the newest oplog entry (or a higher priority) is elected primary. If a secondary fails, no re-election takes place.
Now we simulate both cases and examine the data: a secondary node going down, and the primary node going down.
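The selection rule just described (newest oplog entry wins, with priority as a tie-breaker) can be sketched like this. It is a toy model, not MongoDB's actual election protocol, with member dicts shaped loosely after the rs.status() output above:

```python
# Toy election: among healthy data-bearing members, pick the one with the
# newest oplog timestamp; break ties with the configured priority.

def elect_primary(members):
    candidates = [m for m in members
                  if m["health"] == 1 and not m.get("arbiterOnly")]
    if not candidates:
        return None  # no eligible member; the set stays without a primary
    return max(candidates, key=lambda m: (m["optime_ts"], m["priority"]))

members = [
    {"name": "linfl-PC:40000", "health": 0,  # the failed primary
     "optime_ts": (0, 0), "priority": 1},
    {"name": "linfl-PC:40001", "health": 1,
     "optime_ts": (1487923023, 1), "priority": 1},
    {"name": "linfl-PC:40002", "health": 1, "arbiterOnly": True,
     "optime_ts": (0, 0), "priority": 0},  # arbiter: votes, never elected
]
print(elect_primary(members)["name"])  # linfl-PC:40001
```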

First, take a secondary node down and observe the failover:

rs0:PRIMARY> rs.status()
{
"set" : "rs0",
"date" : ISODate("2017-02-24T07:39:17.573Z"),
"myState" : 1,
"term" : NumberLong(2),
"heartbeatIntervalMillis" : NumberLong(2000),
"members" : [
{
"_id" : 0,
"name" : "linfl-PC:40000",
"health" : 1,
"state" : 1,
"stateStr" : "PRIMARY",
"uptime" : 295,
"optime" : {
"ts" : Timestamp(1487921807, 1),
"t" : NumberLong(2)
},
"optimeDate" : ISODate("2017-02-24T07:36:47Z"),
"electionTime" : Timestamp(1487921806, 1),
"electionDate" : ISODate("2017-02-24T07:36:46Z"),
"configVersion" : 5,
"self" : true
},
{
"_id" : 1,
"name" : "linfl-PC:40001",
"health" : 0,
"state" : 8,
"stateStr" : "(not reachable/healthy)",
"uptime" : 0,
"optime" : {
"ts" : Timestamp(0, 0),
"t" : NumberLong(-1)
},
"optimeDate" : ISODate("1970-01-01T00:00:00Z"),
"lastHeartbeat" : ISODate("2017-02-24T07:39:15.017Z"),
"lastHeartbeatRecv" : ISODate("2017-02-24T07:38:33.501Z"),
"pingMs" : NumberLong(0),
"lastHeartbeatMessage" : "Couldn't get a connection within the time limit",
"configVersion" : -1
},
{
"_id" : 2,
"name" : "linfl-PC:40002",
"health" : 1,
"state" : 7,
"stateStr" : "ARBITER",
"uptime" : 156,
"lastHeartbeat" : ISODate("2017-02-24T07:39:16.961Z"),
"lastHeartbeatRecv" : ISODate("2017-02-24T07:39:15.647Z"),
"pingMs" : NumberLong(0),
"configVersion" : 5
}
],
"ok" : 1
}

You can see that the secondary's state has changed to 8 (the member is down) and lastHeartbeatMessage shows: Couldn't get a connection within the time limit.

Insert a record into the primary node and view the status information:

rs0:PRIMARY> use cms
switched to db cms
rs0:PRIMARY> db.customers.insert({id:12,name:'zhangsan'})
WriteResult({ "nInserted" : 1 })
rs0:PRIMARY> rs.status()
{
"set" : "rs0",
"date" : ISODate("2017-02-24T07:46:58.458Z"),
"myState" : 1,
"term" : NumberLong(2),
"heartbeatIntervalMillis" : NumberLong(2000),
"members" : [
{
"_id" : 0,
"name" : "linfl-PC:40000",
"health" : 1,
"state" : 1,
"stateStr" : "PRIMARY",
"uptime" : 756,
"optime" : {
"ts" : Timestamp(1487922414, 1),
"t" : NumberLong(2)
},
"optimeDate" : ISODate("2017-02-24T07:46:54Z"),
"electionTime" : Timestamp(1487921806, 1),
"electionDate" : ISODate("2017-02-24T07:36:46Z"),
"configVersion" : 5,
"self" : true
},
{
"_id" : 1,
"name" : "linfl-PC:40001",
"health" : 0,
"state" : 8,
"stateStr" : "(not reachable/healthy)",
"uptime" : 0,
"optime" : {
"ts" : Timestamp(0, 0),
"t" : NumberLong(-1)
},
"optimeDate" : ISODate("1970-01-01T00:00:00Z"),
"lastHeartbeat" : ISODate("2017-02-24T07:46:57.445Z"),
"lastHeartbeatRecv" : ISODate("2017-02-24T07:38:33.501Z"),
"pingMs" : NumberLong(0),
"lastHeartbeatMessage" : "No connection could be made because the target machine actively refused it.", // Windows connection-refused error, mis-encoded Chinese in the original capture
"configVersion" : -1
},
{
"_id" : 2,
"name" : "linfl-PC:40002",
"health" : 1,
"state" : 7,
"stateStr" : "ARBITER",
"uptime" : 617,
"lastHeartbeat" : ISODate("2017-02-24T07:46:57.005Z"),
"lastHeartbeatRecv" : ISODate("2017-02-24T07:46:55.655Z"),
"pingMs" : NumberLong(0),
"configVersion" : 5
}
],
"ok" : 1
}
rs0:PRIMARY>

The optime has advanced (the garbled lastHeartbeatMessage on Windows can be ignored). Now restart the secondary node and check again:

rs0:PRIMARY> rs.status()
{
"set" : "rs0",
"date" : ISODate("2017-02-24T07:49:51.633Z"),
"myState" : 1,
"term" : NumberLong(2),
"heartbeatIntervalMillis" : NumberLong(2000),
"members" : [
{
"_id" : 0,
"name" : "linfl-PC:40000",
"health" : 1,
"state" : 1,
"stateStr" : "PRIMARY",
"uptime" : 929,
"optime" : {
"ts" : Timestamp(1487922414, 1),
"t" : NumberLong(2)
},
"optimeDate" : ISODate("2017-02-24T07:46:54Z"),
"electionTime" : Timestamp(1487921806, 1),
"electionDate" : ISODate("2017-02-24T07:36:46Z"),
"configVersion" : 5,
"self" : true
},
{
"_id" : 1,
"name" : "linfl-PC:40001",
"health" : 1,
"state" : 2,
"stateStr" : "SECONDARY",
"uptime" : 6,
"optime" : {
"ts" : Timestamp(1487921807, 1),
"t" : NumberLong(2)
},
"optimeDate" : ISODate("2017-02-24T07:36:47Z"),
"lastHeartbeat" : ISODate("2017-02-24T07:49:51.570Z"),
"lastHeartbeatRecv" : ISODate("2017-02-24T07:49:47.386Z"),
"pingMs" : NumberLong(0),
"configVersion" : 5
},
{
"_id" : 2,
"name" : "linfl-PC:40002",
"health" : 1,
"state" : 7,
"stateStr" : "ARBITER",
"uptime" : 790,
"lastHeartbeat" : ISODate("2017-02-24T07:49:51.016Z"),
"lastHeartbeatRecv" : ISODate("2017-02-24T07:49:50.656Z"),
"pingMs" : NumberLong(0),
"configVersion" : 5
}
],
"ok" : 1
}

You can see that the term t in the secondary's optime now matches the primary's (note: due to a misoperation, two records were just inserted).

Now let's test failure of the primary node: shut down the primary and check the replica set's status:

D:\MongoDB\Server\3.2\bin>mongo --port 40001
2017-02-24T15:57:09.654+0800 I CONTROL [main] Hotfix KB2731284 or later update is not installed, will zero-out data files
MongoDB shell version: 3.2.9
connecting to: 127.0.0.1:40001/test
rs0:PRIMARY> rs.status()
{
"set" : "rs0",
"date" : ISODate("2017-02-24T07:57:12.710Z"),
"myState" : 1,
"term" : NumberLong(3),
"heartbeatIntervalMillis" : NumberLong(2000),
"members" : [
{
"_id" : 0,
"name" : "linfl-PC:40000",
"health" : 0,
"state" : 8,
"stateStr" : "(not reachable/healthy)",
"uptime" : 0,
"optime" : {
"ts" : Timestamp(0, 0),
"t" : NumberLong(-1)
},
"optimeDate" : ISODate("1970-01-01T00:00:00Z"),
"lastHeartbeat" : ISODate("2017-02-24T07:57:12.562Z"),
"lastHeartbeatRecv" : ISODate("2017-02-24T07:56:51.767Z"),
"pingMs" : NumberLong(0),
"lastHeartbeatMessage" : "no response within election timeout period",
"configVersion" : -1
},
{
"_id" : 1,
"name" : "linfl-PC:40001",
"health" : 1,
"state" : 1,
"stateStr" : "PRIMARY",
"uptime" : 448,
"optime" : {
"ts" : Timestamp(1487923023, 1),
"t" : NumberLong(3)
},
"optimeDate" : ISODate("2017-02-24T07:57:03Z"),
"infoMessage" : "could not find member to sync from",
"electionTime" : Timestamp(1487923022, 1),
"electionDate" : ISODate("2017-02-24T07:57:02Z"),
"configVersion" : 5,
"self" : true
},
{
"_id" : 2,
"name" : "linfl-PC:40002",
"health" : 1,
"state" : 7,
"stateStr" : "ARBITER",
"uptime" : 445,
"lastHeartbeat" : ISODate("2017-02-24T07:57:12.565Z"),
"lastHeartbeatRecv" : ISODate("2017-02-24T07:57:10.743Z"),
"pingMs" : NumberLong(0),
"configVersion" : 5
}
],
"ok" : 1
}

You can see that, with the arbiter's vote, the node on port 40001 has become the primary, while the node on port 40000 is reported unreachable. Now insert a record, restart the node on port 40000, and check the replica set status again:

rs0:PRIMARY> use cms
switched to db cms
rs0:PRIMARY> db.customers.insert({id:13,name:'wangwu'})
WriteResult({ "nInserted" : 1 })
rs0:PRIMARY> rs.status()
{
"set" : "rs0",
"date" : ISODate("2017-02-24T08:23:17.763Z"),
"myState" : 1,
"term" : NumberLong(3),
"heartbeatIntervalMillis" : NumberLong(2000),
"members" : [
{
"_id" : 0,
"name" : "linfl-PC:40000",
"health" : 1,
"state" : 2,
"stateStr" : "SECONDARY",
"uptime" : 1085,
"optime" : {
"ts" : Timestamp(1487923321, 1),
"t" : NumberLong(3)
},
"optimeDate" : ISODate("2017-02-24T08:02:01Z"),
"lastHeartbeat" : ISODate("2017-02-24T08:23:16.122Z"),
"lastHeartbeatRecv" : ISODate("2017-02-24T08:23:16.122Z"),
"pingMs" : NumberLong(0),
"syncingTo" : "linfl-PC:40001",
"configVersion" : 5
},
{
"_id" : 1,
"name" : "linfl-PC:40001",
"health" : 1,
"state" : 1,
"stateStr" : "PRIMARY",
"uptime" : 2013,
"optime" : {
"ts" : Timestamp(1487923321, 1),
"t" : NumberLong(3)
},
"optimeDate" : ISODate("2017-02-24T08:02:01Z"),
"electionTime" : Timestamp(1487923022, 1),
"electionDate" : ISODate("2017-02-24T07:57:02Z"),
"configVersion" : 5,
"self" : true
},
{
"_id" : 2,
"name" : "linfl-PC:40002",
"health" : 1,
"state" : 7,
"stateStr" : "ARBITER",
"uptime" : 2010,
"lastHeartbeat" : ISODate("2017-02-24T08:23:17.132Z"),
"lastHeartbeatRecv" : ISODate("2017-02-24T08:23:15.847Z"),
"pingMs" : NumberLong(0),
"configVersion" : 5
}
],
"ok" : 1
}

At this point data synchronization has completed and the replica set is working normally.

Several points to note:
1. By default, MongoDB reads and writes only on the primary node.
2. If the primary fails while an application is connected to the replica set, the replica set closes all socket connections to the application during failover. Writes issued in unacknowledged mode are then in an uncertain state; with acknowledged writes, the driver learns through the getLastError command which writes succeeded and which failed, returns the failures to the application, and the application decides how to handle them.

Write concern

By default, write concern involves only the primary node.
When the application issues a write, the driver calls the getLastError command to return the result of the operation. getLastError behaves according to the configured write concern options.
Common configurations are as follows:

1. Option w: -1 disables write concern and ignores all network and socket errors; 0 disables write concern but still returns network and socket errors; 1 (the default for replica sets and standalone instances) enables write concern for the primary only; N > 1 applies write concern to N members of the replica set, and the client receives feedback only after all N have applied the write.
2. Option wtimeout: specifies how long to wait before the write concern returns; leaving it unset can cause writes to block.
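The option semantics above can be condensed into a small decision helper. This is a sketch of the rules only, not a driver API; the function and parameter names are invented for illustration:

```python
# Toy model of the w/wtimeout acknowledgement rules described above.

def write_acknowledged(w, acks, waited_ms=0, wtimeout_ms=0):
    """Return True if a write counts as acknowledged.
    w <= 0 (-1 or 0): write concern disabled, nothing to wait for.
    w >= 1: 'acks' members (the primary counts as one) must have
    applied the write, within wtimeout_ms if a timeout is set."""
    if w <= 0:
        return True  # fire-and-forget: no acknowledgement requested
    if wtimeout_ms and waited_ms > wtimeout_ms:
        return False  # gave up waiting for enough members
    return acks >= w

print(write_acknowledged(w=1, acks=1))  # primary-only acknowledgement
print(write_acknowledged(w=2, acks=1))  # still waiting for a second member
print(write_acknowledged(w=2, acks=2, waited_ms=500, wtimeout_ms=100))
```

Note how an unset wtimeout (0 here) never trips the timeout branch, which models the blocking risk mentioned in option 2.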

Read preference

Read preference routes client read requests to specified members of the replica set, such as secondaries. By default, read operations are routed to the primary, which guarantees the data read is up to date. Data read from a secondary may be stale, which is acceptable for applications with loose real-time requirements.
Read preference does not increase the system's read/write capacity, but it can route a client's reads to the most suitable secondary (for example, clients in South China read from a secondary in South China), improving the client's read efficiency.

Read preference modes:
1. primary: all reads go to the primary; if the primary is down, reads produce errors or exceptions.
2. primaryPreferred: reads normally go to the primary; if the primary is unavailable, they are routed to a secondary.
3. secondary: all reads go to secondaries; if no secondary is available, reads produce errors or exceptions.
4. secondaryPreferred: reads go to secondaries, falling back to the primary when it is the only member left in the replica set.
5. nearest: read from the member with the lowest network latency, whether primary or secondary.
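The five modes can be summarized as a routing sketch. This is a toy model of the rules listed above; real drivers also factor in latency windows and tag sets:

```python
# Toy read-preference router: decide which member class serves a read.

def route_read(mode, primary_up, secondaries_up):
    if mode == "primary":
        if not primary_up:
            raise RuntimeError("no primary available")
        return "primary"
    if mode == "primaryPreferred":
        return "primary" if primary_up else "secondary"
    if mode == "secondary":
        if not secondaries_up:
            raise RuntimeError("no secondary available")
        return "secondary"
    if mode == "secondaryPreferred":
        return "secondary" if secondaries_up else "primary"
    if mode == "nearest":
        return "nearest member (primary or secondary)"
    raise ValueError("unknown read preference: " + mode)

# During the failover simulated above, primaryPreferred keeps reads working:
print(route_read("primaryPreferred", primary_up=False, secondaries_up=True))
```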

Reference resources

https://docs.mongodb.com/manual/replication/


Posted on Thu, 04 Apr 2019 21:42:30 -0400 by errorCode30