We are currently on version 2.4.8. I have a replica set consisting of a primary and two secondaries. There is a process which, when it runs, causes the oplog window to drop from 75 hours to 1.5 hours.
Replication lag stays under 1 minute. Should I increase the size of the oplog so the window never gets this low? If the data being processed keeps growing, do I have to keep monitoring the oplog window and increasing it from time to time? Or do I just make it enormous now and take the space away from the user databases?
I am not sure what to do here; any insight would be greatly appreciated. Thanks.
Hi folks, any thoughts on this? Should I really dramatically increase the size of the oplog for a batch process that runs every few days? I say dramatically because the oplog is 16 GB now and the window still drops to 1.5 hours, so I am thinking I need to double the size.
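For reference, the oplog window is just the time span between the oldest and newest entries in `local.oplog.rs` (the mongo shell reports it via `db.printReplicationInfo()`). A minimal sketch of the arithmetic, with the timestamps supplied as plain Unix seconds so it stands alone (the helper name is mine, not a MongoDB API):

```javascript
// Estimate the replication window from the oldest and newest oplog entries.
// In the mongo shell those entries come from:
//   db.getSiblingDB("local").oplog.rs.find().sort({$natural:  1}).limit(1)  // oldest
//   db.getSiblingDB("local").oplog.rs.find().sort({$natural: -1}).limit(1)  // newest
// Here the two "ts" values are plain Unix seconds.
function oplogWindowHours(oldestTsSeconds, newestTsSeconds) {
  return (newestTsSeconds - oldestTsSeconds) / 3600;
}

// Example: entries 75 hours apart, as in the healthy case above.
console.log(oplogWindowHours(0, 75 * 3600)); // prints 75
```

Tracking this number across a few runs of the batch job would show how fast the window is shrinking as the data grows.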
I don't know the answers to your questions, but out of curiosity, could you explain a bit about the batch process that causes the flood of updates or writes? What is it doing?
Yes, 100% yes, you should increase the size of your oplog so that during its most intensive writing periods the replication window never gets this short.
Imagine that your secondary or network fails right at the beginning of this intensive load. It comes back an hour and a half later and ... you have to do a full resync because the secondary fell off the oplog!
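The condition behind that warning is simple: a secondary can resume incremental replication only if its last applied operation is still present in the primary's oplog. A sketch of that check (hypothetical helper, hours on both sides):

```javascript
// A secondary offline for `downtimeHours` can catch up incrementally only
// if its last applied op is still in the oplog, i.e. downtime < window.
// Otherwise it must do a full initial sync (copy all data from scratch).
function needsFullResync(downtimeHours, windowHours) {
  return downtimeHours >= windowHours;
}

console.log(needsFullResync(1.5, 1.5)); // prints true: fell off the oplog
console.log(needsFullResync(1.0, 75));  // prints false: can catch up
```

This is why the window should be sized for the worst case (the batch run), not for the quiet 75-hour steady state.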
Scott, thank you for answering. The query is generally an update against a set of millions of records. I don't have the exact query yet; I will get it and post it if I can.
Asya, thank you for your response. I will increase it.