

This tutorial is a walk-through of how to expand a PostgreSQL persistent volume with no downtime on Google Kubernetes Engine (GKE) by Certified Kubernetes Administrator (CKA) and Certified Kubernetes Application Developer (CKAD) Janakiram MSV.

TRANSCRIPT:

Janakiram MSV: Hi. In this demo, I want to show you how to perform storage operations on a Portworx cluster running a PostgreSQL database workload. We are going to expand the volume backing our PostgreSQL database, and we're going to do that on the fly, without any downtime.

Before we get started, let's inspect the current environment. We are going to look at the volumes created by Portworx, so I'm going to grab the PVC name and the volume associated with it. This volume was created dynamically by Portworx based on the PVC that we created in the previous demo. Then I'm going to execute the pxctl command available within the Portworx pods that are deployed as part of the DaemonSet. When we run "pxctl volume inspect" against the volume created through our PVC, we'll get to know its current size and configuration.
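
For reference, the inspection steps look roughly like this. The PVC name px-postgres-pvc is an assumption carried over from the previous demo; adjust it to match your setup.

  # Assumed PVC name from the previous demo; change to match your environment
  PVC=px-postgres-pvc
  # Portworx volume that was dynamically provisioned for that PVC
  VOL=$(kubectl get pvc $PVC -o jsonpath='{.spec.volumeName}')
  # Pick one Portworx pod from the DaemonSet running in kube-system
  PX_POD=$(kubectl get pods -l name=portworx -n kube-system -o jsonpath='{.items[0].metadata.name}')
  # Inspect the volume to see its size, replication factor, and current usage
  kubectl exec -it $PX_POD -n kube-system -- /opt/pwx/bin/pxctl volume inspect $VOL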

So here, we have a 1 GB volume with a replication factor of three, and about 863 MB of it is already consumed. What I'm going to do now is run a set of commands that ingest more data into our PostgreSQL database, which will result in a shortage of space and in turn crash the database. Then we are going to see how we can use pxctl to expand the volume on the fly, doubling its size. So let's get started.

Before we do anything further, we're going to grab the name of the pod running the Postgres database. In fact, we only have one pod in our default namespace, so that is the pod responsible for running the Postgres database. Then we are going to get into its shell and invoke the pgbench command to ingest a large amount of data. This is going to push data into a database that is already at the brim of its storage space. We noticed that we created a 1 GB volume and are already consuming about 800 MB of it, and now we are trying to ingest a pretty large data set, so this will surely result in a failure because of the lack of disk space. And there we go, this is a very common problem faced by DBAs, and if we carefully inspect the errors, they show that Postgres couldn't write to a file: no space left on device. I'm not surprised, because this is how Postgres fails when it runs out of available storage space.
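
A rough sketch of the data-ingestion step is shown below. The app=postgres label, the pgbench scale factor, and the pxdemo database name are assumptions; substitute whatever your deployment uses.

  # Grab the single Postgres pod running in the default namespace (label is an assumption)
  POD=$(kubectl get pods -l app=postgres -o jsonpath='{.items[0].metadata.name}')
  # Open a shell inside the pod
  kubectl exec -it $POD -- bash
  # Inside the pod: load a data set large enough to outgrow the 1 GB volume
  # (pxdemo is the database created in the earlier demo; name is an assumption)
  pgbench -i -s 50 pxdemo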

Now, let's go ahead and inspect the volume. Again, we are going to run pxctl, and this is going to show us the used volume space. It is shown in red, indicating that the volume is consuming almost 944 MB of its 1 GB of space. This is not a good sign, particularly when you are running a production system. So let's see how we can expand this file system with no downtime and no disruption to the production workload. To do that, we are going to SSH into the node. I'm going to grab the node on which the Postgres database is running, which points us to one node out of the cluster. Running "kubectl get nodes", it looks like the third node is where our pod is running, so we will SSH into that node using the standard SSH command or a cloud provider-specific command. As soon as we SSH into the node, we'll grab the volume name associated with our pod.
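
Locating the node and connecting to it can be done along these lines on GKE; the node and zone names below are placeholders.

  # See which node the Postgres pod is scheduled on
  kubectl get pods -o wide
  kubectl get nodes
  # SSH into that node with the cloud provider's tooling (GKE example)
  gcloud compute ssh <node-name> --zone <zone>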

Now when we actually run this, it shows the volume that Postgres is currently running on, created by Portworx. Let's inspect this volume with the pxctl command. As we have seen, it's almost at the brim of its storage space; we cannot ingest any more data because we are running out of room. Now I'm going to invoke one command: update, followed by the volume name, followed by size equal to two, which means we are expanding this volume from 1 GB to 2 GB. So let's go ahead and run this. We notice that Portworx responds that the volume update was successful for our volume, and our volume is now 2 GB. Everything is back in green because we have plenty of space to ingest additional data.
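
The expansion itself is a handful of pxctl calls on the node; a minimal sketch, assuming the Portworx binary sits at its default host path and using <volume-name> as a placeholder:

  # List Portworx volumes to find the one backing the Postgres PVC
  /opt/pwx/bin/pxctl volume list
  # Confirm the volume is nearly full
  /opt/pwx/bin/pxctl volume inspect <volume-name>
  # Grow the volume from 1 GB to 2 GB while the pod keeps running
  /opt/pwx/bin/pxctl volume update <volume-name> --size=2
  # Verify the new capacity and that usage is back in the green
  /opt/pwx/bin/pxctl volume inspect <volume-name>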

We can increase the size of this volume by any factor, provided the SSD or cloud-specific storage backend attached to your node has enough capacity, and you can expand the storage dynamically, on the fly. So, that was a storage operation performed on a Portworx cluster running a PostgreSQL database.

Thanks for watching.
