MongoDB Administration

This is a write-up of Mathias Stearn's talk at the MongoUK conference, covering useful things for administrators to know when running MongoDB in production.

You can find Mathias on Twitter: @mathias_mongo.

Starting the server

Start the server like this:

$ mongod --dbpath /path/to/data

Stop it with Ctrl-C or with kill (but don't use kill -9, which doesn't give the server a chance to shut down cleanly and flush data to disk).

You can get one directory per database with the --directoryperdb option to mongod. This comes in handy if you want to put one database on a fast SSD filesystem, for example.
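With --directoryperdb each database gets its own subdirectory under the dbpath, so you can move a hot database onto faster storage and symlink it back. A sketch (the paths are made up, and mongod must be stopped before you move files):

```
$ mongod --dbpath /path/to/data --directoryperdb
# each database now lives in its own directory, e.g. /path/to/data/mydb/
# with mongod stopped, move one database to the SSD and symlink it back:
$ mv /path/to/data/mydb /ssd/mydb
$ ln -s /ssd/mydb /path/to/data/mydb
```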

Checking Mongo's status

Server stats

Once you've connected to the server with the console you can ask Mongo how the server is getting on:

$ mongo
> db.serverStatus()

You'll see a JSON object with a bunch of information in it; Mathias talked us through various bits of it.

"globalLock" : {
      "totalTime" : 13014697,
      "lockTime" : 162,
      "ratio" : 0.00001244746612233846
},

The ["globalLock"]["lockTime"] value tells you the cumulative time the global lock has been held since the server started; the "ratio" field divides it by "totalTime", and if that ratio is high it could mean your database is overloaded.

Mongo uses memory mapped files, which means that a lot of the memory reported by tools such as top may not actually represent RAM usage. Check mem["resident"], which tells you how much RAM Mongo is actually using.

"mem" : {
    "resident" : 2,
    "virtual" : 2396,
    "supported" : true,
    "mapped" : 0
},
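From the console you can pull individual fields out of db.serverStatus() rather than scanning the whole document, which is handy for quick checks or scripted monitoring:

```
> db.serverStatus().globalLock.ratio
> db.serverStatus().mem.resident
```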

You should also see a section called "extra_info"; the contents vary by platform, so you might not have it, but you can monitor heap_usage_bytes to see if you've got a memory leak.

The ["indexCounters"]["btree"]["missRatio"] field tells you how often index lookups have to hit disk rather than being served from RAM; you want the missRatio to be as low as possible, as Mongo performs at its best when your indexes fit in RAM.

The ["backgroundFlushing"]["average_ms"] number tells you how long, on average, Mongo's background flushes to disk are taking. If it gets high it could be an indicator that your application is write bound.

The ["opcounters"] tell you how many operations you've run since the server started up. The counters only ever increase, so to see operations per unit time you need to sample them over an interval (mongostat, covered below, does this for you).
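A crude way to get a rate from the console is to sample the counters twice and diff them (a sketch; note that the shell's sleep() takes milliseconds):

```
> var a = db.serverStatus().opcounters; sleep(10000); var b = db.serverStatus().opcounters;
> (b.query - a.query) / 10   // queries per second over the 10 second window
```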

Database and collection stats

db.stats() lets you work out whether or not your indexes will fit in RAM.

> db.stats()
{
        "collections" : 2,
        "objects" : 2,
        "dataSize" : 92,
        "storageSize" : 5632,
        "numExtents" : 2,
        "indexes" : 1,
        "indexSize" : 8192,
        "ok" : 1
}

If your indexes are massive then db.collection.stats() will reveal which index is huge.

> db.collection.stats()  // replace "collection" with the name of yours
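If you're not sure which collection to look at, you can loop over them all from the console and print each one's index size (a sketch using the totalIndexSize field from the collection stats, reported in bytes):

```
> db.getCollectionNames().forEach(function(name) {
...     print(name + ": " + db.getCollection(name).stats().totalIndexSize);
... })
```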

Viewing stats without the mongo console

Visit http://localhost:28017 (the HTTP port is the port mongod is running on, plus 1000, so 27017 + 1000 by default) to get an overview of what's going on.

You can also query http://localhost:28017/_status to see the same data that db.serverStatus() returns, but in JSON format.
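That makes it easy to feed into scripts or monitoring tools; for example, from the command line (piping through python -m json.tool is just one way to pretty-print the response):

```
$ curl -s http://localhost:28017/_status | python -m json.tool
```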

Investigating performance issues

Run mongostat to get a rolling list of useful performance stats, reported once per second.

If you want to load test a database you'll probably find that mongod isn't the bottleneck; the tool you're using to load the database could well be. So run your load testing tool on a different machine.

Keep an eye on the output of iostat -x 2, which (amongst other things) will tell you the utilization (%util) for each device.

What's Mongo doing now?

Run db.currentOp() in the console to see the operations currently in progress. You'll only really see useful output if Mongo is in the middle of a long-running query (at which point it can become very useful).
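If currentOp() turns up a runaway query, you can kill it by its opid from the same console (the opid below is made up; be careful not to kill Mongo's internal operations):

```
> db.currentOp()     // note the "opid" field of the offending operation
> db.killOp(1234)    // 1234 being the opid you found above
```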


Backups

There are basically two approaches to backing up a Mongo database:

  1. mongodump and mongorestore are the classic approach. mongodump writes the contents of the database out to files in the same BSON format Mongo uses internally, so it's very efficient. But it's not a point-in-time snapshot.
  2. To get a point-in-time snapshot, shut the database down, copy the disk files (e.g. with cp) and then start mongod up again.

Alternatively, rather than shutting mongod down before making your point-in-time snapshot, you could just stop it from accepting writes:

> db._adminCommand({fsync: 1, lock: 1})
{
        "info" : "now locked against writes, use db.$cmd.sys.unlock.findOne() to unlock",
        "ok" : 1
}

To unlock the database again, you need to switch to the admin database and then unlock it:

> use admin
switched to db admin
> db.$cmd.sys.unlock.findOne()
{ "ok" : 1, "info" : "unlock requested" }
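Putting the lock, copy and unlock steps together, a snapshot script might look something like this (a sketch: the paths are made up, and plain cp assumes the data files fit comfortably; ideally run it against a slave):

```
$ mongo admin --eval "db.runCommand({fsync: 1, lock: 1})"   # block writes and flush to disk
$ cp -R /path/to/data /path/to/backup                       # copy the raw data files
$ mongo admin --eval "db.\$cmd.sys.unlock.findOne()"        # start accepting writes again
```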

If you don't switch to the admin database first you'll get an unauthorized error:

> db._adminCommand({fsync: 1, lock: 1})
{
        "info" : "now locked against writes, use db.$cmd.sys.unlock.findOne() to unlock",
        "ok" : 1
}
> db.$cmd.sys.unlock.findOne()
{ "err" : "unauthorized" }

You can take a point in time snapshot from a slave just as easily as your master database, which avoids downtime. This is one of the reasons that running a slave is so strongly recommended...


Replication

Do it (did you read the previous section?). Seriously.

Start your master and slave up like this:

$ mongod --master --oplogSize 500
$ mongod --slave --source localhost:27017 --port 3000 --dbpath /data/slave

When seeding a new slave from a copy of the master's data files, use the --fastsync option; it tells the slave that its data directory already contains a snapshot of the master's data, so it can skip the initial sync.

You can see what's going on with these two commands:

> db.printReplicationInfo()       // tells you how long your oplog will last
> db.printSlaveReplicationInfo()  // tells you how far behind the slave is

If the slave isn't keeping up, how do you find out what's going on?

  1. Check the mongo log for any recent errors.
  2. Try connecting with the mongo console.
  3. Try running queries from the console to see if everything is working.
  4. Run the status commands above to try and find out which database is taking up resources.

If you can't work it out, hop on the IRC channel; Mathias says they'll be very responsive.

Running MongoDB in production

10gen recommend that you use their packages, not the version that ships with your distribution. The latest production release (with an even version number) really is the greatest.

Make sure your init scripts shut your database down cleanly, and hang in a loop until the database has actually shut down. If you lose power or shut down uncleanly there are no guarantees about what's on disk, so it's best to restore from a backup. There's a --repair option, but it's a best effort at recovery rather than any guarantee of data integrity: if data hadn't been flushed to disk when you lost power (or ran kill -9) then it's gone.
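The "hang in a loop" part of an init script might look something like this (a sketch; the pidfile location is an assumption):

```
pid=$(cat /var/run/mongod.pid)
kill $pid                           # polite SIGTERM -- never kill -9
while kill -0 $pid 2>/dev/null; do
    sleep 1                         # wait until the process has really exited
done
```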

Use --fork and --logpath in your init scripts. mongod can rotate its log without a restart: send it SIGUSR1 (or run the logRotate command from the console) and it moves the current log aside and opens a fresh one.
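For example, starting mongod daemonized and then rotating its log in place (using pidof here is an assumption; use your pidfile if you have one):

```
$ mongod --fork --logpath /var/log/mongod.log --dbpath /path/to/data
$ kill -SIGUSR1 $(pidof mongod)   # current log is renamed aside, a fresh one is opened
```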

And use replication to ensure that your data is stored on multiple machines.

If you're running a critical system 10gen provide support contracts.

I love feedback and questions — please feel free to get in touch on Mastodon or Twitter, or leave a comment.