Can someone help me with this example?
Is this mongodump output from a replica set that was done with --oplog option?
Yes, I am trying to take a mongodump using the --oplog option from a single-node replica set, as this option doesn't work on a regular standalone instance. But I am not able to restore the dump to a particular point in time.
How do I do it?
How are you trying to restore to a point in time?
There's a description of something similar here:
http://stackoverflow.com/questions/15444920/modify-and-replay-mongodb-oplog
Let me know what's not working for you when you try the oplogReplay option with oplogLimit, as that's how you restore to a point in time.
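For reference, a minimal sketch of that invocation (the timestamp and directory name are placeholders):

$ mongorestore --oplogReplay --oplogLimit <seconds>:<ordinal> oplogR

where oplogR contains only an oplog.bson file, and only oplog entries with a timestamp strictly before <seconds>:<ordinal> get applied.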
I am using the steps below:-
1. Deleted collection emp - db.emp.drop()
2. mongodump -d local -c oplog.rs -o oplogD
3. mv oplogD/local/oplog.rs.bson oplogR/oplog.bson
4. bsondump oplogR/oplog.bson > oplog.txt
5. vim oplog.txt
{ "ts" : Timestamp( 1412162337, 1 ), "h" : NumberLong(0), "v" : 2, "op" : "n", "ns" : "", "o" : { "msg" : "initiating set" } }
{ "ts" : Timestamp( 1412164559, 1 ), "h" : NumberLong(- 7572054285177847791), "v" : 2, "op" : "d", "ns" : "opus.student", "b" : true, "o" : { "_id" : ObjectId( "542be25286717d363ee1a754" ) } }
{ "ts" : Timestamp( 1412569574, 1 ), "h" : NumberLong( 8263452013044470772), "v" : 2, "op" : "i", "ns" : "opus.emp", "o" : { "_id" : ObjectId( "543219e684a0a81da10b48f1" ), "eid" : 1, "name" : "user1", "ecode" : "e01" } }
{ "ts" : Timestamp( 1412569581, 1 ), "h" : NumberLong( 3761309732586329005), "v" : 2, "op" : "i", "ns" : "opus.emp", "o" : { "_id" : ObjectId( "543219ed84a0a81da10b48f2" ), "eid" : 2, "name" : "user2", "ecode" : "e02" } }
{ "ts" : Timestamp( 1412569588, 1 ), "h" : NumberLong(- 5927516312307750392), "v" : 2, "op" : "i", "ns" : "opus.emp", "o" : { "_id" : ObjectId( "543219f484a0a81da10b48f3" ), "eid" : 3, "name" : "user3", "ecode" : "e03" } }
{ "ts" : Timestamp( 1412569931, 1 ), "h" : NumberLong(- 252527007292873815), "v" : 2, "op" : "c", "ns" : "opus.$cmd", "o" : { "drop" : "emp" } }
6. mongorestore --oplogReplay --oplogLimit 1412569931:1 oplogR
The above command generates the error below, and I am not able to restore my collection emp:-
connected to: 127.0.0.1
2014-10-06T10:03:51.972+0530 Latest oplog entry on the server is 1412569931:1
2014-10-06T10:03:51.972+0530 Only applying oplog entries matching this criteria: { "ts" : { "$gt" : { "$timestamp" : { "t" : 1412569931, "i" : 1 } }, "$lt" : { "$timestamp" : { "t" : 1412569931, "i" : 1 } } } }
The oplogLimit option cannot be used if normal databases/collections exist in the dump directory.
I was doing the restore setup on the same node; that is why it was not working. Doing the same steps on a new node works fine.
But I have one doubt. Say a backup occurs at 10:00, writes continue on the collection until 10:30, and then at 11:30 somebody drops the collection. Now, my restore steps at 13:00 involve:-
1. Restore the 10:00 backup on a new node.
2. Take the oplog dump of the previous node.
3. Restore till 11:30 on the new node by replaying the oplog. But this gives the error "The oplogLimit is not newer than the last oplog entry on the server", because when I restored my first dump at 13:00, the new oplog's latest time became 13:00.
So the above steps (replaying the log up to 11:30) seem to require that the 10:00 backup be restored on the new node at that same time.
Or is there any way out?
I think you did something different from what you list, then.
When you restore the 10:00 backup on the new node, the latest timestamp in the oplog is 10:00.
If you got the message "The oplogLimit is not newer than the last oplog entry on the server", then you did something else: you say "because when I restored my first dump at 13:00 the new logs have a latest time of 13:00", but I don't see where you restore a 13:00 dump (nor should you be) - 13:00 is when you dump out the oplog, and only from the "bad" (old) node.
You replay that oplog on the new node only till 11:30.
Hope this makes sense.
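To make that concrete, a rough sketch (assuming the old node listens on 27017 and the new node on 27001 - both made-up ports):

# at 13:00, dump only the oplog from the old ("bad") node
$ mongodump --port 27017 -d local -c oplog.rs -o oplogD
$ mkdir oplogR
$ mv oplogD/local/oplog.rs.bson oplogR/oplog.bson
# on the new node, which already holds the 10:00 restore,
# replay only up to 11:30 (expressed as an epoch-seconds timestamp)
$ mongorestore --port 27001 --oplogReplay --oplogLimit <11:30-as-epoch-seconds>:1 oplogR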
At 13:00 I am restoring the 10:00 dump, which is why the latest timestamp on the new node becomes 13:00.
Ah, I see the problem: your backup from 10am is not a file system backup, it's a mongodump - and you're restoring it into a replica set.
The short of it is that's not a combination that will work for replaying the next oplog, because of the timestamps in the oplog - mongorestore will basically insert the records rather than recreating the previous state of the files.
Depending on what you need to end up with, there are several workarounds. You can do the restore into a standalone (non-replica) node and then convert it to a replica set. Or you can switch to using a file-system-based backup and restore, in which case after the restore you can replay the oplog as you are trying now.
I've been meaning to write up a blog entry on this ...
P.S. commercial pitch: MMS backup will do point-in-time for you automatically.
Thanks for your reply.
I did restore to a standalone node and then converted it to a replica set. Then I tried to replay the logs, but again it showed the same error, "The oplogLimit is not newer than the last oplog entry on the server", because converting it to a replica set wrote a new latest entry to the oplog.
Replaying the old log will not work here.
And the file system is huge in size, so taking a file system backup will not be a solution for me.
You should replay oplog into standalone *first* and then convert it to a replica set.
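So the order, as a rough sketch (directory names, port, and the timestamp are placeholders):

# 1. start a plain standalone mongod (no --replSet)
$ mongod --fork --logpath /data/standalone/mongod.log --port 27001 --dbpath /data/standalone
# 2. restore the scheduled dump
$ mongorestore --port 27001 dumpdir
# 3. replay the oplog up to the point in time
$ mongorestore --port 27001 --oplogReplay --oplogLimit <seconds>:<ordinal> oplogR
# 4. only after that, restart mongod with --replSet and run rs.initiate()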
Yes, that works. I was creating the replica set first before replaying the oplogs.
I am having the same problem. I am using the same approach, following the steps below, but getting an error:-
1. mkdir oplogR
2. mongodump --port 27017 -d local -c oplog.rs -o oplogR
3. mv oplogR/local/oplog.rs.bson oplogR/oplog.bson
4. bsondump oplogR/oplog.bson > oplogR/oplog.txt
Snip of the oplog.txt file
{ "ts" : Timestamp( 1412852217, 1 ), "h" : NumberLong(- 7940391011466430660), "v" : 2, "op" : "i", "ns" : "opus.employee", "o" : { "_id" : ObjectId( "543669f9a61f96e4e2e9f761" ), "eid" : 3, "name" : "emp3", "gender" : "f" } }
{ "ts" : Timestamp( 1412852222, 1 ), "h" : NumberLong(- 345060801832550651), "v" : 2, "op" : "c", "ns" : "opus.$cmd", "o" : { "drop" : "employee" } }
The timestamp at which employee collection got deleted is “1412852222”.
5. mkdir /data/snap1
6. mongod --fork --logpath /data/snap1/snap1.log --smallfiles --oplogSize 50 --port 27001 --dbpath /data/snap1
7. mongorestore --port 27001 dump-12 //dump-12 is the directory where the scheduled dump was taken.
8. mongorestore --port 27001 --oplogReplay --oplogLimit "1412852222:1" oplogR
connected to: 127.0.0.1:27001
The oplogLimit option cannot be used if normal databases/collections exist in the dump directory.
Please correct me - what am I doing wrong?
You need to remove the empty local directory under oplogR I believe.
If that doesn't fix it, let me know what version you are using.
I already deleted the local directory, but I still get the same error.
My MongoDB version is 2.6.3.
Move your oplog.bson file to an empty folder in a different directory and then run mongorestore; it will work :)
I see the step you skipped.
You dumped the oplog into the same directory you're trying to read it from - that tends to cause problems (as you saw) in that other things may be there.
You dump into the oplogD directory and then move the oplog.bson file into a brand new directory, oplogR.
Another way of getting there is make sure that when you do
$ ls oplogR
the *only* file you see there is `oplog.bson` and *nothing* else.
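For example (fresh-oplogR is a made-up name for a brand new directory; the port and timestamp are taken from the steps above):

$ mkdir fresh-oplogR
$ cp oplogR/oplog.bson fresh-oplogR/
$ ls fresh-oplogR
oplog.bson
$ mongorestore --port 27001 --oplogReplay --oplogLimit "1412852222:1" fresh-oplogR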