I have a test setup: a sharded cluster with one shard.
The shard is a replica set (rs10032) with one primary and two secondaries.
My application uses the `secondaryPreferred` read preference, and at first the queries were balanced across the two secondaries. Then I stopped one secondary, `10.160.243.22`, to simulate a fault, and later rebooted it. The replica set status now looks fine:
rs10032:PRIMARY> rs.status()
{
    "set" : "rs10032",
    "date" : ISODate("2014-12-05T09:21:07Z"),
    "myState" : 1,
    "members" : [
        {
            "_id" : 0,
            "name" : "10.160.243.22:27017",
            "health" : 1,
            "state" : 2,
            "stateStr" : "SECONDARY",
            "uptime" : 2211,
            "optime" : Timestamp(1417771218, 3),
            "optimeDate" : ISODate("2014-12-05T09:20:18Z"),
            "lastHeartbeat" : ISODate("2014-12-05T09:21:05Z"),
            "lastHeartbeatRecv" : ISODate("2014-12-05T09:21:07Z"),
            "pingMs" : 0,
            "lastHeartbeatMessage" : "syncing to:10.160.188.52:27017",
            "syncingTo" : "10.160.188.52:27017"
        },
        {
            "_id" : 1,
            "name" : "10.160.188.52:27017",
            "health" : 1,
            "state" : 1,
            "stateStr" : "PRIMARY",
            "uptime" : 2211,
            "optime" : Timestamp(1417771218, 3),
            "optimeDate" : ISODate("2014-12-05T09:20:18Z"),
            "electionTime" : Timestamp(1417770837, 1),
            "electionDate" : ISODate("2014-12-05T09:13:57Z"),
            "self" : true
        },
        {
            "_id" : 2,
            "name" : "10.160.189.52:27017",
            "health" : 1,
            "state" : 2,
            "stateStr" : "SECONDARY",
            "uptime" : 2209,
            "optime" : Timestamp(1417771218, 3),
            "optimeDate" : ISODate("2014-12-05T09:20:18Z"),
            "lastHeartbeat" : ISODate("2014-12-05T09:21:07Z"),
            "lastHeartbeatRecv" : ISODate("2014-12-05T09:21:06Z"),
            "pingMs" : 0,
            "syncingTo" : "10.160.188.52:27017"
        }
    ],
    "ok" : 1
}
But all queries still go to the other secondary, `10.160.188.52`, while `10.160.243.22` sits idle.
Why are the queries not balanced across the two secondaries after recovery, and how can I fix it?
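For reference, this is roughly how the client side is set up. It is only a minimal sketch in the mongo shell, assuming a connection through the mongos; the hostname, database, and collection names are placeholders, since the application really uses a driver with the same read preference.

// Assumed client setup (placeholder names): connect through the mongos
// and request secondaryPreferred reads, which should normally be spread
// across all available secondaries.
var conn = new Mongo("mongos-host:27017");      // "mongos-host" is a placeholder
conn.setReadPref("secondaryPreferred");         // prefer secondaries, fall back to the primary
var db = conn.getDB("test");
db.mycollection.find({ user_id : 12345 });      // expected to hit either secondary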