Sam Helman wrote:
Is this what you are looking for? http://docs.mongodb.org/manual/core/replication-internals/#election-internals
Srikar A wrote:
An odd number of members is recommended. My doubt is: as members of an odd-sized set go down one by one, we are left with an even-sized set, so the number of available members fluctuates between even and odd. We don't always have an odd-member scenario. Can someone explain how MongoDB voting works?
It isn't an issue to be left with an even number of voting members. What is necessary is the ability to reach a majority. In other words, if you are left with 4 voting members, 3 of them can still vote for the same new primary.
Hi Megha, Scott,
To be clear: the number of nodes in a replica set does not fluctuate unless you change the configuration (e.g. adding or removing members). A 5-node replica set with 2 nodes down is still a 5-node replica set.
Although "always have an odd number of voting members" is a helpful general guideline, some things to keep in mind are:
- Voting works on a quorum basis, where a strict majority of voting nodes must be available in order to elect or maintain a primary
- A strict majority requires more than half of the voting nodes successfully communicating with each other (i.e. at least floor(n/2) + 1, where n is the number of voting nodes)
- An even number of voting nodes in a replica set is strongly discouraged if you want high availability (a sketch of the majority arithmetic follows this list)
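For instance, here is a minimal sketch of the strict-majority arithmetic (plain JavaScript; print() is the mongo shell's output helper, substitute console.log elsewhere — strictMajority is just an illustrative name, not a MongoDB function):

    // Strict majority: more than half of the voting nodes,
    // i.e. floor(n / 2) + 1 for n voting nodes.
    function strictMajority(n) {
        return Math.floor(n / 2) + 1;
    }

    [2, 3, 4, 5].forEach(function (n) {
        print(n + " voting nodes -> majority of " + strictMajority(n));
    });
    // 2 voting nodes -> majority of 2
    // 3 voting nodes -> majority of 2
    // 4 voting nodes -> majority of 3
    // 5 voting nodes -> majority of 3

Note how 4 voting nodes need one more vote for a majority than 3 do; that is where the fault-tolerance argument below comes from.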
Why are even numbers of voting nodes discouraged?
- The strict majority of a 2 node replica set is 2 (no automatic failover possible if either node is down for any reason)
- In the event of network problems, it's possible to have a tied vote (for example, with a replica set equally split across two data centres)
- Adding one extra node to a replica set with an even number of voting nodes will increase its fault tolerance. 4 and 5 node replica sets both have a strict majority of 3 votes, but the 5-node set can survive two node failures instead of one (see the sketch after this list): http://docs.mongodb.org/manual/core/replica-set-architectures/#consider-fault-tolerance.
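To make the fault-tolerance comparison concrete, here is a small sketch (the helper from above is repeated so the snippet stands alone; faultTolerance is also just an illustrative name):

    // Fault tolerance: how many voting nodes can be lost while the
    // remaining nodes still form a strict majority.
    function strictMajority(n) {
        return Math.floor(n / 2) + 1;
    }
    function faultTolerance(n) {
        return n - strictMajority(n);
    }

    [3, 4, 5, 6, 7].forEach(function (n) {
        print(n + " nodes: majority " + strictMajority(n) +
              ", tolerates " + faultTolerance(n) + " failure(s)");
    });
    // 3 nodes: majority 2, tolerates 1 failure(s)
    // 4 nodes: majority 3, tolerates 1 failure(s)
    // 5 nodes: majority 3, tolerates 2 failure(s)
    // 6 nodes: majority 4, tolerates 2 failure(s)
    // 7 nodes: majority 4, tolerates 3 failure(s)

Notice that going from 3 to 4 nodes (or 5 to 6) buys you no extra fault tolerance, which is the whole argument for odd-sized sets.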
The documentation goes into more detail about replication & election mechanics: http://docs.mongodb.org/manual/core/replica-set-elections/.
This always seems to be a nice point of confusion. And I am still confused. LOL!
If you have a 5-node replica set and two nodes go down, say for networking reasons, how can those two still vote? Who will vote? The remaining 3 nodes? So there you have a majority of two and all is OK?
Now let's say only one node goes down for the same reasons. Now you have 4 nodes. A majority of 3 is still possible and there are no problems.
Or am I totally off?
Hi Scott,
A common mistake is thinking that the replica set configuration or majority changes based on the current state of replica set members (particularly when members are down/recovering). The replica set configuration only changes when you explicitly reconfigure (add/remove nodes, update priorities, etc). In the mongo shell, you can check your configuration with rs.conf() and the current node state with rs.status().
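For example, in the mongo shell you can compare the configured membership against the live state along these lines (a sketch; members[i].name and members[i].stateStr are standard fields in the rs.status() output):

    // Configured members: fixed until you explicitly reconfigure.
    var cfg = rs.conf();
    print("configured members: " + cfg.members.length);

    // Live state: changes as members go down or recover.
    var status = rs.status();
    status.members.forEach(function (m) {
        print(m.name + " -> " + m.stateStr);  // e.g. PRIMARY, SECONDARY
    });

Even with two members down, the first number stays the same; only the per-member states change.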
It might help to think of this as analogous to a RAID configuration: if one or more nodes are down, your deployment will be in "degraded" mode but can still have full availability, depending on the level of fault tolerance in your deployment. When the nodes recover (or are replaced) your deployment will be "healthy" once again.
If you have a 5-node replica set and two nodes go down, say for networking reasons, how can those two still vote? Who will vote? The remaining 3 nodes? So there you have a majority of two and all is OK?
Now let's say only one node goes down for the same reasons. Now you have 4 nodes. A majority of 3 is still possible and there are no problems.
In both cases the replica set still has 5 nodes (majority vote is 3). If 1 or 2 nodes are down, the remaining nodes can still reach consensus.
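Put as a quick sketch (canElectPrimary is a hypothetical helper, not a MongoDB call; the point is that the denominator is the configured total, not the surviving count):

    // A primary can be elected (or maintained) as long as the reachable
    // voting nodes form a strict majority of the *configured* total.
    function canElectPrimary(configuredVotingNodes, reachableNodes) {
        return reachableNodes >= Math.floor(configuredVotingNodes / 2) + 1;
    }

    print(canElectPrimary(5, 3));  // true  -- 2 of 5 down, 3 >= 3
    print(canElectPrimary(5, 4));  // true  -- 1 of 5 down, 4 >= 3
    print(canElectPrimary(5, 2));  // false -- 3 of 5 down, no majority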
Interesting. Thanks for the explanation Stephen. I hope I understand now. LOL!