
This post is part of our ongoing series on running MongoDB on Kubernetes.  We’ve published a number of articles about running MongoDB on Kubernetes for specific platforms and for specific use cases.  If you are looking for a specific Kubernetes platform, check out these related articles.

Running HA MongoDB on Azure Kubernetes Service (AKS)

Running HA MongoDB on Amazon Elastic Container Service for Kubernetes (EKS)

Running HA MongoDB on Red Hat OpenShift

Running HA MongoDB on Google Kubernetes Engine (GKE)

Running HA MongoDB on IBM Cloud Kubernetes Service (IKS)

Running HA MongoDB on IBM Cloud Private

Failover MongoDB 300% faster and run only 1/3 the pods

And now, onto the post…

Rancher Kubernetes Engine (RKE) is a lightweight Kubernetes installer that supports installation on bare-metal and virtualized servers. RKE solves a common issue in the Kubernetes community: installation complexity. With RKE, Kubernetes installation is simplified, regardless of what operating systems and platforms you’re running.

Portworx is a cloud native storage platform to run persistent workloads deployed on a variety of orchestration engines including Kubernetes. With Portworx, customers can manage the database of their choice on any infrastructure using any container scheduler. It provides a single data management layer for all stateful services, no matter where they run.

This tutorial is a walk-through of the steps involved in deploying and managing a highly available MongoDB NoSQL database on a Kubernetes cluster deployed in AWS through RKE.

In summary, to run HA MongoDB on Amazon you need to:

  1. Install a Kubernetes cluster through Rancher Kubernetes Engine
  2. Install a cloud native storage solution like Portworx as a DaemonSet on Kubernetes
  3. Create a storage class defining your storage requirements like replication factor, snapshot policy, and performance profile
  4. Deploy MongoDB using Kubernetes
  5. Test failover by killing or cordoning nodes in your cluster
  6. Dynamically resize MongoDB volume
  7. Take a snapshot and backup MongoDB to object storage

How to set up a Kubernetes Cluster with RKE

RKE is a tool to install and configure Kubernetes in a choice of environments including bare metal, virtual machines, and IaaS. For this tutorial, we will be launching a Kubernetes cluster in Amazon EC2 with one control plane node and three worker nodes.

For a detailed step-by-step guide, please refer to this tutorial from The New Stack.

By the end of this step, you should have a cluster with one master and three worker nodes.
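
For reference, a minimal cluster.yml along these lines is enough for rke up to build that topology. The node addresses, SSH user, and key path below are placeholders; substitute the values for your own EC2 instances.

nodes:
  - address: 18.216.10.1          # control plane + etcd (placeholder IP)
    user: ubuntu
    role: [controlplane, etcd]
    ssh_key_path: ~/.ssh/id_rsa
  - address: 18.216.10.2          # worker (placeholder IP)
    user: ubuntu
    role: [worker]
    ssh_key_path: ~/.ssh/id_rsa
  - address: 18.216.10.3          # worker (placeholder IP)
    user: ubuntu
    role: [worker]
    ssh_key_path: ~/.ssh/id_rsa
  - address: 18.216.10.4          # worker (placeholder IP)
    user: ubuntu
    role: [worker]
    ssh_key_path: ~/.ssh/id_rsa

$ rke up --config cluster.yml
$ export KUBECONFIG=$PWD/kube_config_cluster.yml
$ kubectl get nodes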


Installing Portworx in Kubernetes

Installing Portworx on RKE-based Kubernetes is no different from installing it on a Kubernetes cluster set up through Kops. The Portworx documentation covers the steps involved in running a Portworx cluster in a Kubernetes environment deployed in AWS.

The New Stack tutorial mentioned in the previous section also covers all the steps to deploy Portworx DaemonSet in Kubernetes.
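
As a rough sketch of what that looks like, the Portworx spec generator at install.portworx.com emits a DaemonSet manifest that you apply with kubectl. The query parameters below (cluster name, built-in KVDB, Stork) are illustrative only; generate the exact URL for your own cluster and storage devices.

# Illustrative only: generate the real URL for your environment
$ VER=1.13.4   # Kubernetes version reported by kubectl version
$ kubectl apply -f "https://install.portworx.com/2.1?kbver=${VER}&c=px-mongo-demo&b=true&stork=true"

# Wait until the Portworx pods are running on every worker node
$ kubectl get pods -n kube-system -l name=portworx -o wide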


Once the Kubernetes cluster is up and running, and Portworx is installed and configured, we will deploy a highly available MongoDB NoSQL database.

Creating a storage class for MongoDB


Through storage class objects, an admin can define different classes of Portworx volumes that are offered in a cluster. These classes will be used during the dynamic provisioning of volumes. The storage class defines the replication factor, I/O profile (e.g., for a database or a CMS), and priority (e.g., SSD or HDD). These parameters impact the availability and throughput of workloads and can be specified for each volume. This is important because a production database will have different requirements than a development Jenkins cluster.

In this example, the storage class that we deploy has a replication factor of 3 with the I/O profile set to “db_remote,” and priority set to “high.” This means that the storage will be optimized for low latency database workloads like MongoDB and automatically placed on the highest performance storage available in the cluster. Notice that we also specify the filesystem, xfs, in the storage class.

$ cat > px-mongo-sc.yaml << EOF
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
    name: px-ha-sc
provisioner: kubernetes.io/portworx-volume
parameters:
  repl: "3"
  io_profile: "db_remote"
  priority_io: "high"
  fs: "xfs"
EOF

Create the storage class and verify it’s available in the default namespace.

$ kubectl create -f px-mongo-sc.yaml
storageclass.storage.k8s.io "px-ha-sc" created

$ kubectl get sc
NAME                PROVISIONER                     AGE
px-ha-sc            kubernetes.io/portworx-volume   10s
stork-snapshot-sc   stork-snapshot                  3d

Creating a MongoDB PVC on Kubernetes

We can now create a Persistent Volume Claim (PVC) based on the storage class. Thanks to dynamic provisioning, the claim will be created without explicitly provisioning a persistent volume (PV).

$ cat > px-mongo-pvc.yaml << EOF
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
   name: px-mongo-pvc
   annotations:
     volume.beta.kubernetes.io/storage-class: px-ha-sc
spec:
   accessModes:
     - ReadWriteOnce
   resources:
     requests:
       storage: 1Gi
EOF

$ kubectl create -f px-mongo-pvc.yaml
persistentvolumeclaim "px-mongo-pvc" created

$ kubectl get pvc
NAME           STATUS    VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
px-mongo-pvc   Bound     pvc-e0acf1df-9231-11e8-864b-0abd3d2e35a4   1Gi       RWO            px-ha-sc       19s

Deploying MongoDB on Kubernetes

Finally, let’s create a MongoDB instance as a Kubernetes deployment object. For simplicity’s sake, we will just be deploying a single Mongo pod. Because Portworx provides synchronous replication for High Availability, a single MongoDB instance might be the best deployment option for your MongoDB database. Portworx can also provide backing volumes for multi-node MongoDB replica sets. The choice is yours.

$ cat > px-mongo-app.yaml << EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongo
spec:
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  replicas: 1
  selector:
    matchLabels:
      app: mongo  
  template:
    metadata:
      labels:
        app: mongo
    spec:
      schedulerName: stork
      containers:
      - name: mongo
        image: mongo
        imagePullPolicy: "Always"
        ports:
        - containerPort: 27017
        volumeMounts:
        - mountPath: /data/db
          name: mongodb
      volumes:
      - name: mongodb
        persistentVolumeClaim:
          claimName: px-mongo-pvc
EOF
$ kubectl create -f px-mongo-app.yaml
deployment.extensions "mongo" created

The MongoDB deployment defined above is explicitly associated with the PVC, px-mongo-pvc created in the previous step.

This deployment creates a single pod running MongoDB backed by Portworx.

$ kubectl get pods
NAME                     READY     STATUS    RESTARTS   AGE
mongo-68cc69bc95-mxqsb   1/1       Running   0          54s

We can inspect the Portworx volume by accessing the pxctl tool running in the Portworx pod.

$ VOL=`kubectl get pvc | grep px-mongo-pvc | awk '{print $3}'`
$ PX_POD=$(kubectl get pods -l name=portworx -n kube-system -o jsonpath='{.items[0].metadata.name}')
$ kubectl exec -it $PX_POD -n kube-system -- /opt/pwx/bin/pxctl volume inspect ${VOL}
Volume	:  901740686222600192
	Name            	 :  pvc-1a402e4a-9962-11e8-bf73-02cbd76cefba
	Size            	 :  1.0 GiB
	Format          	 :  xfs
	HA              	 :  3
	IO Priority     	 :  LOW
	Creation time   	 :  Aug 6 10:18:48 UTC 2018
	Shared          	 :  no
	Status          	 :  up
	State           	 :  Attached: ip-192-168-201-131.us-west-2.compute.internal
	Device Path     	 :  /dev/pxd/pxd901740686222600192
	Labels          	 :  namespace=default,pvc=px-mongo-pvc
	Reads           	 :  52
	Reads MS        	 :  548
	Bytes Read      	 :  225280
	Writes          	 :  112
	Writes MS       	 :  188
	Bytes Written   	 :  2510848
	IOs in progress 	 :  0
	Bytes used      	 :  10 MiB
	Replica sets on nodes:
		Set  0
		  Node 		 :  192.168.181.241 (Pool 0)
		  Node 		 :  192.168.113.226 (Pool 0)
		  Node 		 :  192.168.201.131 (Pool 0)
	Replication Status	 :  Up


The output from the above command confirms the creation of volumes that are backing the MongoDB database instance.

Failing over MongoDB pod on Kubernetes

Populating sample data

Let’s populate the database with some sample data.
We will first find the pod that’s running MongoDB to access the shell.

$ POD=`kubectl get pods -l app=mongo | grep Running | grep 1/1 | awk '{print $1}'`

$ kubectl exec -it $POD mongo
MongoDB shell version v4.0.0
connecting to: mongodb://127.0.0.1:27017
MongoDB server version: 4.0.0
Welcome to the MongoDB shell.
…..

Now that we are inside the shell, we can populate a collection.

db.ships.insert({name:'USS Enterprise-D',operator:'Starfleet',type:'Explorer',class:'Galaxy',crew:750,codes:[10,11,12]})
db.ships.insert({name:'USS Prometheus',operator:'Starfleet',class:'Prometheus',crew:4,codes:[1,14,17]})
db.ships.insert({name:'USS Defiant',operator:'Starfleet',class:'Defiant',crew:50,codes:[10,17,19]})
db.ships.insert({name:'IKS Buruk',operator:' Klingon Empire',class:'Warship',crew:40,codes:[100,110,120]})
db.ships.insert({name:'IKS Somraw',operator:' Klingon Empire',class:'Raptor',crew:50,codes:[101,111,120]})
db.ships.insert({name:'Scimitar',operator:'Romulan Star Empire',type:'Warbird',class:'Warbird',crew:25,codes:[201,211,220]})
db.ships.insert({name:'Narada',operator:'Romulan Star Empire',type:'Warbird',class:'Warbird',crew:65,codes:[251,251,220]})

Let’s run a few queries on the Mongo collection.

Find one arbitrary document:

db.ships.findOne()
{
	"_id" : ObjectId("5b5c16221108c314d4c000cd"),
	"name" : "USS Enterprise-D",
	"operator" : "Starfleet",
	"type" : "Explorer",
	"class" : "Galaxy",
	"crew" : 750,
	"codes" : [
		10,
		11,
		12
	]
}

Find all documents using nice formatting:

db.ships.find().pretty()
…..
{
	"_id" : ObjectId("5b5c16221108c314d4c000d1"),
	"name" : "IKS Somraw",
	"operator" : " Klingon Empire",
	"class" : "Raptor",
	"crew" : 50,
	"codes" : [
		101,
		111,
		120
	]
}
{
	"_id" : ObjectId("5b5c16221108c314d4c000d2"),
	"name" : "Scimitar",
	"operator" : "Romulan Star Empire",
	"type" : "Warbird",
	"class" : "Warbird",
	"crew" : 25,
	"codes" : [
		201,
		211,
		220
	]
}
…..

Show only the names of the ships:

db.ships.find({}, {name:true, _id:false})
{ "name" : "USS Enterprise-D" }
{ "name" : "USS Prometheus" }
{ "name" : "USS Defiant" }
{ "name" : "IKS Buruk" }
{ "name" : "IKS Somraw" }
{ "name" : "Scimitar" }
{ "name" : "Narada" }

Find one document by attribute:

db.ships.findOne({'name':'USS Defiant'})
{
	"_id" : ObjectId("5b5c16221108c314d4c000cf"),
	"name" : "USS Defiant",
	"operator" : "Starfleet",
	"class" : "Defiant",
	"crew" : 50,
	"codes" : [
		10,
		17,
		19
	]
}

Exit from the client shell to return to the host.

Simulating node failure

Now, let’s simulate a node failure by cordoning off the node on which MongoDB is running.

$ NODE=`kubectl get pods -l app=mongo -o wide | grep -v NAME | awk '{print $7}'`

$ kubectl cordon ${NODE}
node/ip-172-31-29-132.ap-south-1.compute.internal cordoned

The above command disabled scheduling on one of the nodes.

$ kubectl get nodes
NAME                                           STATUS                     ROLES               AGE   VERSION
ip-172-31-24-121.ap-south-1.compute.internal   Ready                      worker              47h   v1.13.4
ip-172-31-26-49.ap-south-1.compute.internal    Ready                      controlplane,etcd   47h   v1.13.4
ip-172-31-28-65.ap-south-1.compute.internal    Ready                      worker              47h   v1.13.4
ip-172-31-29-132.ap-south-1.compute.internal   Ready,SchedulingDisabled   worker              47h   v1.13.4

Now, let’s go ahead and delete the MongoDB pod.

$ POD=`kubectl get pods -l app=mongo -o wide | grep -v NAME | awk '{print $1}'`
$ kubectl delete pod ${POD}
pod "mongo-68cc69bc95-7q96h" deleted

As soon as the pod is deleted, it is relocated to a node with the replicated data, even when that node is in a different Availability Zone. STorage ORchestrator for Kubernetes (STORK), a Portworx-contributed open source storage scheduler, ensures that the pod is rescheduled on a node where a replica of its data is stored.

Let’s verify this by running the command below. Notice that a new pod has been created and scheduled on a different node.

$ kubectl get pods -l app=mongo -o wide
NAME                     READY     STATUS    RESTARTS   AGE       IP               NODE
mongo-68cc69bc95-thqbm   1/1       Running   0          19s       192.168.82.119   ip-172-31-24-121.ap-south-1.compute.internal
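
As an extra sanity check, you can compare the node the new pod landed on against the nodes holding the Portworx replicas; the pod’s node should correspond to one of the replica nodes reported by pxctl (reusing the PX_POD and VOL variables set earlier).

$ kubectl get pods -l app=mongo -o wide --no-headers | awk '{print $7}'
$ kubectl get nodes -o wide      # maps node names to their internal IPs
$ kubectl exec -it $PX_POD -n kube-system -- /opt/pwx/bin/pxctl volume inspect ${VOL} | grep -A 4 "Replica sets on nodes"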

Let’s uncordon the node to bring it back to action.

$ kubectl uncordon ${NODE}
node/ip-172-31-29-132.ap-south-1.compute.internal uncordoned

Finally, let’s verify that the data is still available.

Verifying that the data is intact

Let’s find the pod name and run the ‘exec’ command, and then access the Mongo shell.

POD=`kubectl get pods -l app=mongo | grep Running | grep 1/1 | awk '{print $1}'`
kubectl exec -it $POD mongo
MongoDB shell version v4.0.0
connecting to: mongodb://127.0.0.1:27017
MongoDB server version: 4.0.0
Welcome to the MongoDB shell.
…..

We will query the collection to verify that the data is intact.

Find one arbitrary document:

db.ships.findOne()
{
	"_id" : ObjectId("5b5c16221108c314d4c000cd"),
	"name" : "USS Enterprise-D",
	"operator" : "Starfleet",
	"type" : "Explorer",
	"class" : "Galaxy",
	"crew" : 750,
	"codes" : [
		10,
		11,
		12
	]
}

Find all documents using nice formatting:

db.ships.find().pretty()
…..
{
	"_id" : ObjectId("5b5c16221108c314d4c000d1"),
	"name" : "IKS Somraw",
	"operator" : " Klingon Empire",
	"class" : "Raptor",
	"crew" : 50,
	"codes" : [
		101,
		111,
		120
	]
}
{
	"_id" : ObjectId("5b5c16221108c314d4c000d2"),
	"name" : "Scimitar",
	"operator" : "Romulan Star Empire",
	"type" : "Warbird",
	"class" : "Warbird",
	"crew" : 25,
	"codes" : [
		201,
		211,
		220
	]
}
…..

Show only the names of the ships:

db.ships.find({}, {name:true, _id:false})
{ "name" : "USS Enterprise-D" }
{ "name" : "USS Prometheus" }
{ "name" : "USS Defiant" }
{ "name" : "IKS Buruk" }
{ "name" : "IKS Somraw" }
{ "name" : "Scimitar" }
{ "name" : "Narada" }

Find one document by attribute:

db.ships.findOne({'name':'Narada'})
{
	"_id" : ObjectId("5b5c16221108c314d4c000d3"),
	"name" : "Narada",
	"operator" : "Romulan Star Empire",
	"type" : "Warbird",
	"class" : "Warbird",
	"crew" : 65,
	"codes" : [
		251,
		251,
		220
	]
}

Observe that the collection is still there and all the content is intact! Exit from the client shell to return to the host.

Performing Storage Operations on MongoDB

After testing end-to-end failover of the database, let’s perform StorageOps for MongoDB on our Kubernetes cluster.

Expanding the Kubernetes Volume with no downtime

Currently, the Portworx volume that we created at the beginning is 1 GiB in size. We will now expand it to double the storage capacity.

First, let’s get the volume name and inspect it through the pxctl tool.

If you have access, SSH into one of the nodes and run the following command.

$ VOL=`/opt/pwx/bin/pxctl volume list --label pvc=px-mongo-pvc | grep -v ID | awk '{print $1}'`
$ /opt/pwx/bin/pxctl v i $VOL
Volume	:  901740686222600192
	Name            	 :  pvc-1a402e4a-9962-11e8-bf73-02cbd76cefba
	Size            	 :  1.0 GiB
	Format          	 :  xfs
	HA              	 :  3
	IO Priority     	 :  LOW
	Creation time   	 :  Aug 6 10:18:48 UTC 2018
	Shared          	 :  no
	Status          	 :  up
	State           	 :  Attached: ip-192-168-113-226.us-west-2.compute.internal
	Device Path     	 :  /dev/pxd/pxd901740686222600192
	Labels          	 :  namespace=default,pvc=px-mongo-pvc
	Reads           	 :  90
	Reads MS        	 :  88
	Bytes Read      	 :  1273856
	Writes          	 :  424
	Writes MS       	 :  7660
	Bytes Written   	 :  317349888
	IOs in progress 	 :  0
	Bytes used      	 :  11 MiB
	Replica sets on nodes:
		Set  0
		  Node 		 :  192.168.181.241 (Pool 0)
		  Node 		 :  192.168.113.226 (Pool 0)
		  Node 		 :  192.168.201.131 (Pool 0)
	Replication Status	 :  Up

Notice the current Portworx volume. It is 1GiB. Let’s expand it to 2GiB.

$ /opt/pwx/bin/pxctl volume update $VOL --size=2
Update Volume: Volume update successful for volume 901740686222600192

Check the new volume size.

$ /opt/pwx/bin/pxctl v i $VOL
Volume	:  901740686222600192
	Name            	 :  pvc-1a402e4a-9962-11e8-bf73-02cbd76cefba
	Size            	 :  2.0 GiB
	Format          	 :  xfs
	HA              	 :  3
	IO Priority     	 :  LOW
	Creation time   	 :  Aug 6 10:18:48 UTC 2018
	Shared          	 :  no
	Status          	 :  up
	State           	 :  Attached: ip-192-168-113-226.us-west-2.compute.internal
	Device Path     	 :  /dev/pxd/pxd901740686222600192
	Labels          	 :  namespace=default,pvc=px-mongo-pvc
	Reads           	 :  102
	Reads MS        	 :  96
	Bytes Read      	 :  1323008
	Writes          	 :  466
	Writes MS       	 :  7680
	Bytes Written   	 :  317526016
	IOs in progress 	 :  0
	Bytes used      	 :  11 MiB
	Replica sets on nodes:
		Set  0
		  Node 		 :  192.168.181.241 (Pool 0)
		  Node 		 :  192.168.113.226 (Pool 0)
		  Node 		 :  192.168.201.131 (Pool 0)
	Replication Status	 :  Up
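
Note that this resizes the volume from the Portworx side. Depending on your Kubernetes and Portworx versions, you can alternatively let Kubernetes drive the expansion by creating the storage class with allowVolumeExpansion: true and patching the PVC; the commands below are a sketch of that alternative, not the method used above.

# Sketch: Kubernetes-driven expansion (requires allowVolumeExpansion: true
# in the StorageClass and a Portworx/Kubernetes version that supports it)
$ kubectl patch pvc px-mongo-pvc -p '{"spec":{"resources":{"requests":{"storage":"2Gi"}}}}'
$ kubectl get pvc px-mongo-pvc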


Taking Snapshots of a Kubernetes volume and restoring the database

Portworx supports creating snapshots for Kubernetes PVCs.
Let’s create a snapshot of the PVC we created for MongoDB.

$ cat > px-mongo-snap.yaml << EOF
apiVersion: volumesnapshot.external-storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: px-mongo-snapshot
  namespace: default
spec:
  persistentVolumeClaimName: px-mongo-pvc
EOF
$ kubectl create -f px-mongo-snap.yaml
volumesnapshot.volumesnapshot.external-storage.k8s.io "px-mongo-snapshot" created

Verify the creation of volume snapshot.

$ kubectl get volumesnapshot
NAME                AGE
px-mongo-snapshot   1m
$ kubectl get volumesnapshotdatas
NAME                                                       AGE
k8s-volume-snapshot-9e539249-9255-11e8-b018-e2f4b6cbb690   2m

With the snapshot in place, let’s go ahead and delete the database.

$ POD=`kubectl get pods -l app=mongo | grep Running | grep 1/1 | awk '{print $1}'`
$ kubectl exec -it $POD mongo
db.ships.drop()

Since snapshots are just like volumes, we can use one to start a new instance of MongoDB. Let’s create a new instance of MongoDB by restoring the snapshot data.

$ cat > px-mongo-snap-pvc.yaml << EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: px-mongo-snap-clone
  annotations:
    snapshot.alpha.kubernetes.io/snapshot: px-mongo-snapshot
spec:
  accessModes:
     - ReadWriteOnce
  storageClassName: stork-snapshot-sc
  resources:
    requests:
      storage: 2Gi
EOF

$ kubectl create -f px-mongo-snap-pvc.yaml
persistentvolumeclaim "px-mongo-snap-clone" created

From the new PVC, we will create a MongoDB pod.

$ cat > px-mongo-snap-restore.yaml << EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongo-snap
spec:
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  replicas: 1
  selector:
    matchLabels:
      app: mongo-snap
  template:
    metadata:
      labels:
        app: mongo-snap
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: px/running
                operator: NotIn
                values:
                - "false"
              - key: px/enabled
                operator: NotIn
                values:
                - "false"
      containers:
      - name: mongo
        image: mongo
        imagePullPolicy: "Always"
        ports:
        - containerPort: 27017
        volumeMounts:
        - mountPath: /data/db
          name: mongodb
      volumes:
      - name: mongodb
        persistentVolumeClaim:
          claimName: px-mongo-snap-clone
EOF

$ kubectl create -f px-mongo-snap-restore.yaml
deployment.extensions "mongo-snap" created

Verify that the new pod is in Running state.

$ kubectl get pods -l app=mongo-snap
NAME                         READY     STATUS    RESTARTS   AGE
mongo-snap-6cd7d5b7f-gcrw2   1/1       Running   0          5m

Finally, let’s access the sample data created earlier in the walk-through.

$ POD=`kubectl get pods -l app=mongo-snap | grep Running | grep 1/1 | awk '{print $1}'`
$ kubectl exec -it $POD mongo
MongoDB shell version v4.0.0
connecting to: mongodb://127.0.0.1:27017
MongoDB server version: 4.0.0
Welcome to the MongoDB shell.
…..
db.ships.find({}, {name:true, _id:false})
{ "name" : "USS Enterprise-D" }
{ "name" : "USS Prometheus" }
{ "name" : "USS Defiant" }
{ "name" : "IKS Buruk" }
{ "name" : "IKS Somraw" }
{ "name" : "Scimitar" }
{ "name" : "Narada" }

Notice that the collection is still there with the data intact. We can also push the snapshot to Amazon S3 if we want to create a Disaster Recovery backup in another Amazon region. Portworx snapshots also work with any S3 compatible object storage, so the backup can go to a different cloud or even an on-premises data center.
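
A hedged sketch of what that looks like with pxctl is below; the exact credential and backup options vary across Portworx versions, so treat the flags as placeholders and check pxctl credentials create --help and pxctl cloudsnap backup --help before running them.

# Register S3-compatible object store credentials (values are placeholders)
$ /opt/pwx/bin/pxctl credentials create --provider s3 \
    --s3-access-key <ACCESS_KEY> --s3-secret-key <SECRET_KEY> \
    --s3-region us-east-1 --s3-endpoint s3.amazonaws.com px-s3-creds

# Trigger a cloud snapshot of the MongoDB volume and watch its progress
$ /opt/pwx/bin/pxctl cloudsnap backup ${VOL}
$ /opt/pwx/bin/pxctl cloudsnap status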

Summary

Portworx can be easily deployed with RKE to run stateful workloads in production on Kubernetes. Through the integration of STORK, DevOps and StorageOps teams can seamlessly run highly available database clusters in Kubernetes. They can perform traditional operations such as volume expansion, snapshots, backup, and recovery for cloud native applications.


Janakiram MSV

Contributor | Certified Kubernetes Administrator (CKA) and Developer (CKAD)