
MongoDB is a good answer to the NoSQL problem, and if we are using it in production we should deploy it in a Replica Set. Here is a quick rundown on how to do that on AWS EC2.

What is a Replica Set

A replica set is a group of MongoDB instances distributed over multiple nodes, all holding the same data. This means that if one of the nodes becomes unresponsive, another node can take over and you will still have access to your data.

Deploying a Replica Set

Replica sets need an odd number of nodes. When a replica set is created or loses its primary node, it holds an election to select a new primary; an odd number of voting members ensures that elections cannot tie and failover to a new primary goes smoothly.
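The arithmetic behind the odd-number rule can be sketched in a couple of lines of shell (the `majority` helper below is purely illustrative, not a MongoDB tool):

```shell
# A replica set elects a primary only when a strict majority of voting
# members agree: floor(n/2) + 1 votes.
majority() { echo $(( $1 / 2 + 1 )); }

majority 3   # prints 2 -> a 3-node set survives one node failure
majority 4   # prints 3 -> a 4th node adds no extra failure tolerance
majority 5   # prints 3 -> a 5-node set survives two failures
```

This is why even-sized sets are wasteful: going from 3 to 4 nodes raises the votes required without raising the number of failures you can tolerate.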
For this example we will be deploying to three EC2 instances.

First step is to create our instances.


The size of the nodes will depend on your application, but keep in mind that each node needs to be large enough to handle all of the application’s traffic on its own. A t2.medium for each node is a good place to start, with an m4.large being a good step up for busy production instances.

MongoDB’s production notes contain further recommendations for maximising performance.

Creating the Nodes

Now that we know the size of the nodes, we need to create three EC2 instances to host our MongoDB cluster.

For this install we are using Ubuntu 16.04.
Select Launch Instance from the AWS EC2 console.


Select the Ubuntu 16.04 AMI

Select the desired instance type

Select the network and subnet that the instance should be deployed in, making sure the instance will be reachable from the applications that need it.

Now it is time to choose storage options, where we have a lot of decisions to consider. For this tutorial I will just separate MongoDB’s data directory onto its own disk, but if you follow MongoDB’s recommendations you should also create separate volumes for the logs and the database’s journal.

For now, create a second volume where we are going to store data. At least 60 GB is a reasonable start, but the size really depends on your application. If we are deploying for production we should choose Provisioned IOPS SSD (io1) as the volume type and provision 1000 IOPS for that drive. You will most likely also want to increase the root device to about 20 GB.
It is also a good idea to encrypt the data volume to increase security.

Click next, tag your instances and launch them!

Repeat this three times so that we have the nodes we need to create our Replica Set.

Creating an XFS disk for data

MongoDB’s WiredTiger engine loves to run on the XFS filesystem. This is easy enough to achieve on an EC2 instance.

First up, find which device is the data drive. It is most likely /dev/xvdb, as it is the second drive attached. To find it, run ‘sudo fdisk -l’ and look for a drive that matches the size you chose. In the AWS console a drive attached as /dev/sdb translates to /dev/xvdb on the instance, so it shouldn’t be too much trouble to find.
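As a quick illustration of that naming translation (the `aws_to_linux` helper is hypothetical, not an AWS tool):

```shell
# AWS console device names (/dev/sdX) show up as /dev/xvdX inside
# Xen-based instances; this helper does the translation.
aws_to_linux() { printf '%s\n' "$1" | sed 's|^/dev/sd|/dev/xvd|'; }

aws_to_linux /dev/sdb   # prints /dev/xvdb
```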
Once we know which device we are dealing with, we need to format it as XFS, easily accomplished with

'sudo mkfs.xfs <drive>'

This command should complete quickly. We now need to tell Ubuntu to mount the drive on launch. Edit /etc/fstab with ‘sudo nano /etc/fstab’ and add the following line at the bottom

<drive> /mnt/db xfs rw,nobarrier,auto 0 0

This mounts the drive at /mnt/db on boot. Reboot the instance to check everything is working, then repeat this step on each node!

Disabling Transparent Huge Pages

“Transparent Huge Pages (THP) is a Linux memory management system that reduces the overhead of Translation Lookaside Buffer (TLB) lookups on machines with large amounts of memory by using larger memory pages.
However, database workloads often perform poorly with THP, because they tend to have sparse rather than contiguous memory access patterns.
You should disable THP on Linux machines to ensure best performance with MongoDB.” – MongoDB documentation

Disable THP by saving the following script to /etc/init.d/disable-transparent-hugepages

#!/bin/sh
### BEGIN INIT INFO
# Provides:          disable-transparent-hugepages
# Required-Start:    $local_fs
# Required-Stop:
# X-Start-Before:    mongod mongodb-mms-automation-agent
# Default-Start:     2 3 4 5
# Default-Stop:      0 1 6
# Short-Description: Disable Linux transparent huge pages
# Description:       Disable Linux transparent huge pages, to improve
#                    database performance.
### END INIT INFO

case $1 in
  start)
    if [ -d /sys/kernel/mm/transparent_hugepage ]; then
      thp_path=/sys/kernel/mm/transparent_hugepage
    elif [ -d /sys/kernel/mm/redhat_transparent_hugepage ]; then
      thp_path=/sys/kernel/mm/redhat_transparent_hugepage
    else
      return 0
    fi

    echo 'never' > ${thp_path}/enabled
    echo 'never' > ${thp_path}/defrag

    re='^[0-1]+$'
    if [[ $(cat ${thp_path}/khugepaged/defrag) =~ $re ]]
    then
      # RHEL 7
      echo 0 > ${thp_path}/khugepaged/defrag
    else
      # RHEL 6
      echo 'no' > ${thp_path}/khugepaged/defrag
    fi

    unset re
    unset thp_path
    ;;
esac

Then enable the change with

sudo chmod 755 /etc/init.d/disable-transparent-hugepages
sudo update-rc.d disable-transparent-hugepages defaults

Repeat on each node and reboot to enable the changes!

Install MongoDB Community

For more detail follow Sarah’s blog here, but the broad strokes are below:

echo Updating system
apt-get update
apt-get upgrade -y

echo Adding mongodb to apt
apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv 0C49F3730359A14518585931BC711F9BA15703C6
echo "deb [ arch=amd64,arm64 ] http://repo.mongodb.org/apt/ubuntu xenial/mongodb-org/3.4 multiverse" | tee /etc/apt/sources.list.d/mongodb-org-3.4.list
apt-get update
echo Installing mongodb
apt-get install -y mongodb-org

We now have MongoDB installed, rinse and repeat for each box.

Enable MongoDB Authentication

Edit the MongoDB configuration file to enable authentication (again, Sarah’s blog has more detail)

sudo nano /etc/mongod.conf

and change the security section to match

security:
  authorization: enabled

Now create an admin user for the database:

Connect to MongoDB with

mongo
Run this command to create a user.

db.createUser({user: "admin", pwd: "<PASSWORD>", roles:[{role: "root", db: "admin"}]})

Remember to note down the password.
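If you still need to pick a password, a quick way to generate a strong 32-character one is with openssl, which is already installed (a sketch; any generator will do):

```shell
# Generate a random 32-character hex password for the admin user.
openssl rand -hex 16
```

Drop the result into the <PASSWORD> placeholder above.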

Exit the mongo shell with

exit
You can test that the admin user was created successfully by entering this command from your terminal (with the mongo shell closed).

mongo -u admin -p --authenticationDatabase admin

Creating the cluster

The first step is to tell MongoDB the name of the replica set. Stop MongoDB with

sudo systemctl stop mongod

and then update the replication section of the config file to match

replication:
  replSetName: rs0

Start MongoDB again

sudo systemctl start mongod

Repeat this process for each node.

The next step is to create a shared keyfile the servers can use to authenticate the connections between them.

On ONE of the hosts, run the following commands to generate a keyfile

openssl rand -base64 756 | sudo tee /etc/ssl/mongodb-internal.key > /dev/null
sudo chown mongodb:mongodb /etc/ssl/mongodb-internal.key
sudo chmod 400 /etc/ssl/mongodb-internal.key
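If you want to sanity-check the keyfile: a file generated this way always comes out to 1024 bytes (1008 base64 characters plus 16 newlines). Here is a throwaway demonstration against a copy in /tmp:

```shell
# Generate a keyfile the same way and confirm its mode and size,
# mirroring what /etc/ssl/mongodb-internal.key should look like.
openssl rand -base64 756 > /tmp/mongodb-internal.key
chmod 400 /tmp/mongodb-internal.key
stat -c '%a' /tmp/mongodb-internal.key   # prints 400
wc -c < /tmp/mongodb-internal.key        # prints 1024
```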

Then copy this file to the other nodes, into /etc/ssl/mongodb-internal.key, and set the same ownership and permissions there.

Update the MongoDB config on each node to use this keyfile; change the security section as follows

security:
  authorization: enabled
  keyFile: /etc/ssl/mongodb-internal.key

Next we need to initiate the replica set.

Open the mongo shell on one node by running

mongo -u admin -p --authenticationDatabase admin

and run the following command

rs.initiate({
    _id : "rs0",
    members: [
      { _id : 0, host : "<nodeOneIp>:27017" },
      { _id : 1, host : "<nodeTwoIp>:27017" },
      { _id : 2, host : "<nodeThreeIp>:27017" }
    ]
})
If all goes well, you now have a running MongoDB Replica Set!
Run rs.status() in the mongo shell to check, and keep an eye on the log file to see what is going on.
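Once the set is up, applications should connect with a replica-set connection string that lists all three members rather than a single host, so the driver can follow failovers. A sketch, with the credentials and IPs from above as placeholders:

```
mongodb://admin:<PASSWORD>@<nodeOneIp>:27017,<nodeTwoIp>:27017,<nodeThreeIp>:27017/admin?replicaSet=rs0
```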

Until next time…
Tim Gray
Coffee to Code


