
In this webinar, Michael Ferranti of Portworx is joined by Janakiram MSV, a certified Kubernetes application developer, author, and analyst, to talk about how to run databases on Amazon EKS in production.

Watch this video for a hands-on view of how to solve “Day 2” operations and how to deal with all of the bad things that can happen in production and highly dynamic cloud environments, demonstrated through a live demo.

You will also hear what we see from the Portworx perspective on why enterprises are adopting containers, why they’re deciding to run them in production, and how they’re doing that.

TRANSCRIPT:

Michael Ferranti: Okay, great! Hi, everyone, I hope you are doing well. Good morning and good afternoon to folks all over the world. Thanks for joining today’s webinar, Running Production Databases on Amazon EKS. Before we get started, just a little housekeeping. First, please mute your phone or computer’s microphone during the presentation. This will help keep the sound quality high for all of our attendees. If you have any questions, please, please, please enter them in the Q&A box in the webinar interface. We really wanna hear your questions, and we’re gonna review all of them at the end of the presentation, so if you have any questions now, if you have any questions during the presentation, please put them there. Finally, we’re gonna be sending out a follow-up email after the webinar with a link to the on-demand webinar recording, if there’s anybody on your team that you wanna share it with or you wanna watch it a second time. We’ll also send some links to useful resources like white papers and technical reports.

Okay, so let’s go ahead and get started. My name is Michael Ferranti, I’m VP of Product Marketing at Portworx, and I’m joined today by Janakiram MSV, a certified Kubernetes application developer, author, and analyst. We are really lucky to have Janakiram here today, because he is an expert on all things Kubernetes, all things containers, all things cloud. You’ve probably read some of his prolific writings on the subject; he writes for publications like The New Stack and Gigaom, so we’re really lucky to have him. Jani is going to give us a hands-on view of how to solve what you might call day two operations. Right, the name of this webinar is not just How to Run Databases on Amazon EKS, it’s how to run them in production, which means we need to know how to deal with all of the bad things that can happen in production environments and very, very dynamic cloud environments. So, Jani’s gonna walk us through a hands-on live demo of that.

To set the context, I’m gonna spend a few minutes on what we see from the Portworx perspective of why enterprises are adopting containers and why they’re deciding to run them in production. And I’m gonna talk a little bit about how they’re doing that, mainly at a conceptual level, as a way to introduce some of the concepts that Jani will dive into in a more comprehensive and hands-on way. Before doing that, though, I want to start with a poll, just to get a sense of how folks are currently using containers in their organization. So I’m just gonna stop sharing my screen for a second, and I’m gonna start a poll, which is basically a simple question: “Are you running stateful apps in containers in production today?” So just take a minute and answer that question. We’ve got a bunch of participants here, so I’m curious to see what the breakdown is.

Hey, thanks, everyone. If you’ve just joined, we are doing a quick poll to set the stage. Are you currently running stateful apps in containers in production? We’ve got about half of the folks who have answered that poll question, so we’ll just give it another few seconds.

Okay, so it looks like about half of the folks have responded; maybe the other half hasn’t been able to find the polling panel. I’ll leave it open for a few minutes in case you find it in the interface. It looks like about 35% of our participants are not, and about 24% are. So we’ve got a mix. I think most people are not yet, and that’s great; and I think even for those who already are, what you’re gonna learn is a lot of best practices for managing some of these failure modes that you might not have run across yet. I think that really is what distinguishes production operations of an application from running it in test/dev: how many different things can go wrong, between node failures, network partitions, updating affinity settings and things like that, and having to figure out how to reschedule your pods. So we’re gonna dive into all of that today.

So let me re-share my screen and go into a little bit of introductory comments. Okay, so the first thing that I like to do when I meet new customers or potential customers is really share with them that they are not alone; there are many, many peer organizations who are already doing the exact thing that they are trying to do. And I really stress this because I’ve been in the container industry for a while, and I’m sure you have been too, either directly or watching from the sidelines before you decided to take the plunge. And I think DevOps architects, DevOps engineers, software developers, application developers, platform architects, etcetera, have a really hard job today, because there’s so much innovation and there are so many new open source projects, so many new capabilities that come along with platforms, that sometimes it’s hard to understand what is great today versus what is gonna be great a year or two from now.

So I wanna start our conversation today from the perspective of someone who’s been watching the container industry evolve over a few years, and I can categorically say, and this is borne out by customers that we talk to, that the state of stateful applications on Kubernetes is much, much different today than it was a year ago, than it was two years ago.

And so the first point I wanna make is that companies like the ones that you see on this screen are running mission-critical stateful applications on Kubernetes in production today. We’re no longer in the science-experiment stage; people are really doing this for real. And so what we’re gonna talk about today, and what Jani’s gonna show, is really some of the nuts-and-bolts failure modes that you need to be able to deal with in production environments. We’re moving past the stage of getting excited by simply being able to deploy a pod and provision a volume; that’s a necessary requirement for running in production, but it’s not sufficient. And so I want to show you what all of these other customers have learned about doing just that.

That said, just because you can do something doesn’t mean you should do it. So before I get into how Portworx does what it does for Kubernetes, I wanna talk a little bit about why some of these customers are deciding to invest in running the stateful parts of their applications in production, as well as the stateless parts. One of the first things that they’ve reported to us is that containers are a much better way to run efficient infrastructure. It’s really important that we’re efficient with the servers, the storage, and the networks that we’re using, and containers, because they’re a lighter-weight form of virtualization, help us run more apps on the same hardware. Portworx accelerates those infrastructure cost savings by further reducing the compute and storage costs associated with running stateful applications.

A great example of this comes from Beco. Beco is one of our customers who provides an IoT platform for commercial real estate. So imagine you’re a real estate developer and you have a high-rise with three million square feet that you can rent out. You wanna understand the utilization of every single one of those square feet so you can make the right investment decisions, set the right rent per square foot, etcetera. Beco helps with all of that.

So they ingest a massive amount of data from all of the different properties that they manage, and that takes a lot of compute infrastructure. And what Jeff at Beco told us is that he’s actually able to run 40% fewer Kafka pods for the same level of reliability than he would be able to without containers and without Portworx.

We’ve heard customers are able to save 66% on their compute costs for a Postgres or Mongo database by using Portworx. They can get better write performance and the same level of reliability while consuming a lot fewer compute resources. I’m not gonna get into the details now, but they’re also seeing storage savings of 30% or more per database.

In today’s hyper-competitive environment, though, I think we all understand that you’re not gonna cost-cut your way to innovation and competitiveness. Another thing that we need to do is be able to accelerate how quickly we can build, test, and run new applications. We need to get around that iteration cycle faster, and one of the things that Portworx enables our customers to do is really accelerate the time-to-market for those container projects.

They can go from a year of R&D to weeks for production deployments.

I think a great customer that underscores this is Chris Fairweather. He’s an architect with WCG Solutions; they’re doing some contracting for the US Navy, and Chris literally spent a year doing R&D to find a persistent storage solution for containers that was stable, mature, secure, and performant. And he couldn’t find it, looking at all of the different options available on the market, until he found Portworx. And he was able to go from a year in R&D to standing up his service with Portworx, with stateful services like Postgres, in just a matter of weeks.

And the last point that I’ll make before we start to dive into some of the more technical detail is that Portworx is really a way to avoid vendor lock-in for containerized applications. Now, I’m saying that even though today’s webinar is about how to run databases on Amazon EKS. Portworx loves Amazon; we think Amazon is a great place to run applications. And our customers love Amazon; they also think Amazon is a great place to run applications. The difference is they don’t only wanna run in Amazon, which is understandable, right? You have a retirement account, and you diversify those investments because you don’t wanna put all your eggs in one basket.

Enterprises are doing the same thing with their cloud providers. As much as they love any one cloud provider, they wanna be able to diversify their IT portfolio so they don’t have all their eggs in the Azure basket, or the Amazon basket, or even their own data center basket. So Portworx provides a way to run the same application in a consistent way across any environment, including great environments like Amazon EKS, but also Azure Kubernetes Service (AKS), GKE, OpenShift, Mesosphere DC/OS, or vanilla upstream Kubernetes.

Okay, so let’s dive in a little bit more into what Portworx actually does and a little bit of how it does it before I turn it over to Jani. And don’t worry I’m gonna hand over the microphone in about five minutes.

So, what is Portworx? Portworx is a distributed block storage solution that deeply integrates with any scheduler; today we’re gonna focus on Kubernetes. It lets you run and manage any stateful service, in any cloud or on-premises data center (again, today we’re gonna focus on Amazon), across up to 1,000 nodes. That, in a nutshell, is what Portworx does. And we’re designed from the ground up; every ounce of Portworx’s being was designed from the ground up to run containerized workloads. That means everything that we do is container-granular, and everything is 100% automatable and controllable via CLI and API. And so let’s take the next step and ask, “Okay, how can Portworx help me run a highly performant and resilient PostgreSQL database on Kubernetes?”

We could start to look at the different phases of that deployment and management, and I think you’ll start to see what I mean when I say that Portworx was designed from the ground up to manage container workloads. The first thing to point out is installation, and we’re gonna go over it really quickly, because all of this is very well documented and isn’t the meat of the webinar; we don’t wanna spend 20 minutes just showing you how to install Portworx. And, in fact, it’s incredibly easy, so there’s really, in some ways, not much to show. This is how you install.

This command is how you install Portworx: kubectl apply and a URL with a bunch of different configuration options. We have a spec generator to compose this URL for you; if you just go to install.portworx.com, you’ll see the different configuration options. You run this once against your cluster, Portworx deploys onto each of your nodes, and it automatically inspects and fingerprints all of the storage available to each host and turns it into a single cluster-wide storage fabric where you can deploy and run stateful applications. Once we’ve installed Portworx with that command, we can start to think about running our actual stateful applications.
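
For reference, the install command looks something like the sketch below. The exact query parameters come from the spec generator, so treat the values here as illustrative placeholders.

    # URL generated at install.portworx.com; parameter values are placeholders
    kubectl apply -f "https://install.portworx.com/?kbver=1.10&c=px-demo-cluster&s=/dev/xvdf&eks=true"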

One of the unique things about Portworx is that virtually everything that you can do with Portworx, you can do via kubectl. This is a really important user experience point for us as a company, as developers who like working with computers and like an elegant experience. We didn’t wanna make people go to a second interface in order to do everything storage related and then use the Kubernetes interface to do anything compute related.

We wanted to provide a consistent experience for the entire application, and so we invested in making all of the Portworx features available via kubectl. One example of that is the Kubernetes storage class primitive. A storage class basically defines the type of storage resources that you want a particular application to consume. That would include things like how big a volume is, but also things like: how many replicas do I wanna have of each of those volumes? What’s the IO priority of this volume? Do I want a snapshot schedule? Different things like this. So here I have a very simplified view of a storage class that specifies a replication factor of two. What this is saying, declaratively, is: anytime an application gets storage of this type, make sure Portworx always keeps two copies of it. And we’re gonna see why this is really important, both in the next couple of slides and then in the live demo that Janakiram is gonna share with us.
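
A minimal sketch of a storage class like the one on this slide (the class name is illustrative):

    kind: StorageClass
    apiVersion: storage.k8s.io/v1
    metadata:
      name: px-repl2-sc
    provisioner: kubernetes.io/portworx-volume
    parameters:
      repl: "2"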

Once I’ve created a storage class, it’s very easy to create what is called a persistent volume claim. And again, we’re gonna go into more detail and you’re gonna see it in action in a minute. And now, all of a sudden, Kubernetes has deployed a PostgreSQL pod, and Portworx has dynamically provisioned, without putting in a ticket, without waiting for a block device to mount, a volume that’s been mounted into that PostgreSQL container. Now, the replication factor that I showed earlier becomes really, really important for production operations; it’s a key component of how Portworx enables production ops of databases, not simply deployments of databases. So, one of the things that Portworx will do is create what we call a topology-aware replica: an up-to-date, synchronously replicated copy of my Postgres volume, placed on some other node in my cluster, that respects the availability zone boundaries of your cloud provider. And I realize that this slide says DC/OS; DC/OS also supports Kubernetes.

So, I apologize for the mistake in that slide, but this is all looking at how it works on Kubernetes. To understand why this is important, I think it’s important to understand a key feature of Amazon EKS, which is multi-availability-zone awareness. By default, when you deploy an EKS cluster in Amazon, your Kubernetes cluster is gonna be spread across multiple availability zones within Amazon. Availability zones within a cloud provider are an extremely important concept. We all know that there have been major outages within cloud providers, and one of the ways in which cloud providers give you resiliency is by saying, “Well, you don’t have to run your application only in a single availability zone. You should actually design it to run in multiple availability zones.” Kubernetes makes deploying multi-AZ applications much easier, but there’s a catch when it comes to stateful applications like databases.

Amazon EBS only works within a single availability zone. So if you have Kubernetes running across availability zones, but then have, for instance, a Postgres database, and you want to fail over Postgres to a second availability zone, you’re not gonna be able to use your existing data volume; you’re basically gonna have to recreate that data from scratch, which is both error-prone and time-consuming. And all of that leads to application downtime. What Portworx does, on the other hand, is create an up-to-date replica on some other node in the cluster, in some other availability zone. If we ever lose, for instance, node three, if we lose the EC2 instance that our Postgres pod is running on, or say there’s a network partition and that node is no longer available, we can use Kubernetes to automatically reschedule that Postgres pod to a host in the cluster that already has a copy of the data.

Portworx, in effect, can ensure that you can fail over to another node in your cluster and that the pod continues to be hyperconverged with the data, meaning that the pod runs on a host that actually has a local copy of the data, which is important for fast IO.

Two more features that I want to highlight before turning it over to Janakiram. Often in reality, in production, you wanna have both a local HA story and a disaster recovery plan. Right? You don’t wanna have your disaster recovery site in the same data center as your local environment; if you lose your local environment, you lose your DR site as well. So often that DR site is gonna be at a location with a network latency that’s too great for synchronous replication. You know, while technically possible, your application-level SLAs are not gonna be met if we’re locking databases while writes are synced across the country.

So in that case, we have a feature that we call CloudSnap, which enables you to take a snapshot of one or more container volumes, push those snapshots to S3 or any S3-compatible object store, and then pull them down into another environment. This is essentially the way that you can do DR for containerized applications. And really, really importantly, these snapshots are all container-granular. We’re not snapshotting an entire EBS volume, for instance, where you might have many containers running and then have to extract the individual container’s data from that larger volume. All of these volumes are container-granular, which means that there’s no rehydration time once you spin up a new pod and mount that volume in it.
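
As a rough sketch of that workflow (exact pxctl flags vary by Portworx version, and the credential values are placeholders):

    # One-time: register an S3-compatible object store as the CloudSnap target
    pxctl credentials create --provider s3 --s3-access-key <ACCESS_KEY> \
      --s3-secret-key <SECRET_KEY> --s3-region us-east-1 --s3-endpoint s3.amazonaws.com
    # Back up a single container-granular volume to the object store
    pxctl cloudsnap backup <volume-name>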

Lastly, we support “bring your own key” encryption. This is important for security-conscious applications, or if you are creating a multi-tenant platform. All of these keys are at the container volume level, so you can actually have different keys for different container volumes, all on, for instance, the same EBS volume. Portworx can carve up an EBS volume into 10 different container volumes, and each can have its own key. And those keys are controlled only by you, the customer, and travel with the volume as it moves around the cluster and between environments.

And with that, I’d like to turn it over to Janakiram who is going to show us a live demo of deploying HA PostgreSQL on Amazon EKS. I’m gonna make you the presenter now, Jani.

Okay. You should be the presenter. Take it away.

Janakiram MSV: Cool. Alright, thanks Michael, that was a pretty good introduction to the Portworx platform and also to how to run stateful workloads on Kubernetes. So as Michael mentioned, my name is Janakiram. I am very passionate about Kubernetes, and I am very excited to walk you through the steps involved in setting up an HA Postgres deployment running on EKS, backed by Portworx. Before I get to the demo, let me set the stage and walk you through the environment. Obviously this entire session is focused on Amazon EKS, so I have gone ahead and provisioned a three-node cluster on Amazon EKS. You’ll see that I have two clusters running, but I am going to use one for the demo. This basically translates to a set of worker nodes; because we are running two clusters of three nodes each, you actually see that I have about six EC2 instances running.

I’m not going to spend a lot of time on how to configure, install, or provision EKS clusters. That is well documented; you can follow Amazon’s getting-started guide for EKS, which is very comprehensive. So once we have the cluster up and running, I have also configured Portworx. This, as Michael mentioned, is fairly well documented and easy to get started with. There are some prerequisites: you need to have an etcd cluster running, and you need to open up a few ports between the nodes, which is fairly easy. You also need to attach an IAM policy to the existing EKS node IAM role; that is also pretty straightforward if you are familiar with IAM and AWS. And once you have the prerequisites met, you can use what is called the spec generator. This is a wizard-style configuration tool which will help you generate the artifacts required to deploy Portworx in any Kubernetes cluster.

So as you notice, there are things like the endpoint for the etcd cluster. You can also use Consul and maybe others, but at this point etcd is the preferred one. And then there are a few more parameters; for example, if you are using a managed Kubernetes offering like AKS or EKS, you’ve got to click this checkbox and then choose the secrets type. As Michael mentioned, there are a few choices that you can make here, and then this is going to generate a simple URL which is going to be used for deploying Portworx. Again, the documentation is very comprehensive; you can follow the step-by-step guide, and in a few minutes you’ll have a full-fledged Kubernetes cluster along with Portworx up and running.

So, that’s the background and the context. Now let me jump to the demo. I obviously have a three-node cluster running as I mentioned, and this is running in, I think, Europe West, so it’s going to be a little slow; bear with me, because EKS is initially available only in certain regions. So we have a three-node cluster, and I also have Portworx up and running. So how do I make sure Portworx is available? I am going to do “get pods” on the kube-system namespace, and that’s going to show us a variety of daemon sets, controllers, and pods.

So apart from the plain vanilla daemon sets and the processes that Kubernetes runs, you’ll notice that there are AWS-specific daemon sets; this is very specific to running EKS. And then there is an etcd operator, which is a prerequisite for Portworx, and after that we have the STORK scheduler, the STORK controller, and Portworx itself running as a daemon set, and then there’s a PVC controller. Now, all of them are completely automated in terms of the installation experience. You don’t really need to worry about them, but I’m showing you this to make sure that we have Portworx installed. And this is exclusive to the kube-system namespace.

In the normal default namespace, you don’t see anything. So it’s a one-time setup to make sure that Portworx is installed on top of an existing Kubernetes cluster. So with that, let me walk you through these steps. I am going to take the help of a script file, which is going to be made available to you; in fact, it’s already there, and I’ll talk more about it towards the end of the demo. So we verified the nodes, and we verified the Portworx pods are up and running.

Now, Portworx comes with its own CLI, a command-line interface called pxctl, to manage the entire storage cluster. So we first grab one of the pods belonging to the Portworx daemon set and do an echo of PX_POD. This shows us one of the Portworx daemon set pods running in the kube-system namespace, and with that in place, I can go ahead and exec pxctl status, which is going to show us very useful information. What we are basically doing is accessing the pxctl binary running on the node via the PX pod. This has a lot of information, and seeing it will give you an assurance that Portworx is properly installed and it is up and running.
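
Those two steps look roughly like this sketch (the label and binary path follow a standard Portworx install):

    # Grab one of the Portworx daemon set pods
    PX_POD=$(kubectl get pods -l name=portworx -n kube-system -o jsonpath='{.items[0].metadata.name}')
    echo $PX_POD
    # Run pxctl on that node through the pod
    kubectl exec $PX_POD -n kube-system -- /opt/pwx/bin/pxctl status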

So it tells us how much storage we have, the IO priority levels, the storage pools that have been created, and how the EBS volumes are actually mapped to individual Portworx nodes. So obviously the block storage provided by a cloud provider like Amazon is leveraged by Portworx to deliver distributed block storage.

So when we set up Portworx, thanks to dynamic provisioning, what it basically does is mount three EBS volumes, one to each of the nodes, and then aggregate those block storage volumes into a pool to give us an aggregate storage capacity. Since I requested 100 GB of block storage per node and we have three nodes, we notice that the total capacity available to us is 300 GB, and out of that, 4.6 GB is already used by Portworx, maybe because of the other workloads that I have run; even the initial setup will take some space. But seeing this is a good indication that Portworx is properly installed and we’re all set to deploy our stateful application. So with that in place, the very first step is creating a storage class. Now, a storage class is basically like a driver for Kubernetes; it is the equivalent of what’s called a profile in traditional storage systems like NetApp.

For example, an administrator might define that this department or this set of users will have access to a profile based on magnetic disk, which gives low IO, and that it is okay for these users to utilize X GB of capacity. So that’s a profile in traditional storage environments. The equivalent of the profile in Kubernetes is called a storage class, and the storage class is an intermediary between the underlying infrastructure and the application; it basically acts like a driver facilitating the interaction between the underlying storage and the actual apps. So we go ahead and create a storage class, and while this is being created, I wanna show you what that is.

So this is a pretty straightforward definition of a storage class. We are saying create a storage class based on the Portworx volume driver, and by the way, this ships out of the box: upstream vanilla Kubernetes includes portworx-volume as a supported provisioner. So you can go ahead and use it even if you are using upstream Kubernetes from GitHub. I can also define things like the I/O profile. Ideally, for production workloads we need to define the kind of I/O profile we’re using, and typically “high” priority would make sense depending on the throughput and IOPS that you need. But at the minimum, we need to mention the replication factor. In this case, I have set the replication factor to three, which means whatever data is written to one of the nodes will automatically get replicated across the other two.

So at any given point of time, Portworx will ensure that the data is replicated in at least three locations. This is very good for multi-AZ deployments, like Michael mentioned, and it also overcomes the classic limitation of EBS being confined to one specific AZ. So this is the basic foundation for creating an HA deployment. We create that storage class and now it’s available; we can verify it by using “kubectl get sc”, and this confirms that we have the repl-3 storage class. Perfect.
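
For reference, a sketch of the demo’s storage class (the class name px-repl3-sc follows the companion blog post; priority_io is optional):

    kind: StorageClass
    apiVersion: storage.k8s.io/v1
    metadata:
      name: px-repl3-sc
    provisioner: kubernetes.io/portworx-volume
    parameters:
      repl: "3"
      priority_io: "high"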

So with that in place, we can go ahead and create a PVC. A PVC is a persistent volume claim. If you are not very familiar with the storage terminology and nomenclature of Kubernetes: just like an administrator creates a virtual machine and hands it over to a user, in a traditional environment someone has to provision storage, carve a chunk out of it, and hand it over to a set of users. How do you define that in a container-native and cloud-native environment? Well, to facilitate that, and to replicate the workflow involved in creating a compute resource and handing it over, Kubernetes includes the concept of a persistent volume claim and a persistent volume.

So the way it works is, let’s say there is a department where there is 100 GB of storage already available; that’s a hypothetical number, it could be one TB or anything. The administrator would create a persistent volume with the maximum available storage, and that is going to be the entire capacity available to individual users, departments, or groups. Once that persistent volume is created, individual users come and claim from it. So for example, if there is 100 GB, one user will claim 20 GB, another will claim 20, till the actual volume is exhausted or there is no more space available. After that, creating a claim will throw an error and fail. But as long as there’s enough capacity, and the policy is in place and allows users to claim storage, they can go ahead and create a PVC. So the relationship between a PV and a PVC is like a node and a pod: a node gives compute resources and a pod consumes those compute resources. Similarly, a PV exposes storage resources and a PVC consumes that storage.

Because we use dynamic provisioning with Portworx, we need not even create the persistent volume; it is going to be created automatically. And dynamic provisioning is driven by the storage class definition. So here, we are defining the PVC, and as you notice, the PVC is associated with the storage class we created. Because that is already in place, we need not create a persistent volume beforehand. When we run this, Portworx will automatically provision the one GB volume for the claim, and it can be used straight away by one of the pods. So let’s go ahead and run this.
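
The claim itself is small; a sketch consistent with the demo (the storage class name assumes the one created above):

    kind: PersistentVolumeClaim
    apiVersion: v1
    metadata:
      name: px-postgres-pvc
    spec:
      storageClassName: px-repl3-sc
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 1Gi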

So once we create the PVC, we are essentially asking Portworx to allocate one GB from the aggregate pool of storage resources we have. Is one GB sufficient? We are not sure but the good news is, later on, when we really need to expand we can absolutely do that. So we can start with any arbitrary number and eventually expand the storage based on our needs.

Oops… okay, this happens occasionally, particularly when I have network issues. So there we go; now when I actually do get PVC, we’ll notice that it is currently in the pending state, and in just a few seconds, or maybe a minute, it is going to be bound. By bound, what I mean is that it’s going to have a consumable storage resource available. There we go. So now we have a volume that can be directly used by a pod. What we have done so far is create a storage class and associate that storage class with a PVC to create a volume claim, and we’re all set to launch the actual stateful workload.

In this case, it’s a Postgres database, and that’s going to rely on the PVC that we just created. So here is how I’m going to create the PostgreSQL database; I’ll scroll down to show you how I’m going to associate this pod with the PVC that we created in the previous step. For that, all I’m expected to do is add this persistent volume claim. The claim name is px-postgres-pvc, and where is this coming from? Essentially, this is what we defined here. So if you carefully notice, there is a very nice relationship between the storage class, the PVC, and the pod, and that’s how it all cascades; eventually the storage becomes visible to the pod.

So this is a very straightforward Postgres pod, and if you notice, we are launching just one replica as part of the deployment. We are not using a stateful set or doing anything complex to run multiple Postgres nodes; we don’t really need to, because when the storage cluster running beneath the workload takes care of replication and HA, the workload can be treated almost like a stateless one. That’s the beauty of the separation of concerns between the workload and the storage: the storage HA is ensured through Portworx, while the workload is managed by Kubernetes like any other deployment.
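
A sketch of that deployment, trimmed to the parts that matter here (the image tag, user name, and secret name are illustrative; the secret itself is created in the next step):

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: postgres
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: postgres
      template:
        metadata:
          labels:
            app: postgres
        spec:
          containers:
          - name: postgres
            image: postgres:10
            env:
            - name: POSTGRES_USER
              value: pgbench
            - name: PGDATA
              # initdb needs an empty subdirectory of the mounted volume
              value: /var/lib/postgresql/data/pgdata
            - name: POSTGRES_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: postgres-pass
                  key: password.txt
            volumeMounts:
            - name: postgres-data
              mountPath: /var/lib/postgresql/data
          volumes:
          - name: postgres-data
            persistentVolumeClaim:
              claimName: px-postgres-pvc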

So there is a very well-defined association between these two, which is managed by Portworx. That’s the reason why I have defined this like any other traditional Kubernetes deployment, without turning it into a stateful set or doing anything complex to ensure my database is HA; the database itself is not really HA, but the data is. That’s the beauty of this topology. Excellent. So now it’s time for me to actually create the database, but before that, I need to make sure that we have the secret in place. If you’re familiar with Kubernetes, a secret is the way we inject sensitive data into a Kubernetes cluster so it can then be read by the appropriate applications.

So, I created a text file called password.txt, and I’m going to create a secret from the password file. This is going to create a secret in the default namespace, and we are going to use it for our deployment. Perfect. With that in place, I can now go ahead and launch the Postgres workload. This creates the Postgres pod, which is part of the deployment, and let’s put this in watch mode. Normally it would take a while for the container image to be pulled, but because I have pre-cached the container image, it’s already in running mode, and that’s it. Now we have launched the Postgres pod; when we do a get pods, we’ll see there is a Postgres pod running, which is fantastic, and in no time we have an HA database up and running.
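
Condensed into commands, that step looks like this sketch (file and secret names follow the transcript; the deployment file name is illustrative):

    echo "secret-password" > password.txt     # placeholder password
    kubectl create secret generic postgres-pass --from-file=password.txt
    kubectl apply -f postgres-app.yaml
    kubectl get pods -l app=postgres -w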

Now, let’s go ahead and do some real-world operations on it, but before that, I want to show you some intricacies of managing the Portworx volume. For that, what we’ll actually do is get the volume that is associated with the pod. So I’m using kubectl and a couple of shell commands to grab the volume that is attached to the PVC, and then we use that to access pxctl, which is running on one of the nodes. We also grab the pod that is running the Portworx daemon set, and with the volume name and the pod name, we are all set to execute volume inspect. Why did we run these two commands? Because we want the volume associated with the PVC and a pod that is actually running the Portworx daemon set.
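
In shell form, those steps look like this sketch:

    # The Portworx volume backing the PVC
    VOL=$(kubectl get pvc px-postgres-pvc -o jsonpath='{.spec.volumeName}')
    # A pod from the Portworx daemon set to run pxctl through
    PX_POD=$(kubectl get pods -l name=portworx -n kube-system -o jsonpath='{.items[0].metadata.name}')
    kubectl exec $PX_POD -n kube-system -- /opt/pwx/bin/pxctl volume inspect $VOL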

This step will show us some interesting details about the volume that we just created. So here you notice that we created a 1 GB volume, and it is given this dynamic name because of dynamic provisioning. It is currently attached to one of the nodes (this is the EKS node name), it is replicated across three different nodes of the cluster (these are all internal IP addresses), and the replication status is up. Again, this is another indication that everything is in place. So with replication running and the Postgres app running, we now need to go ahead and populate some sample data. It’s time for me to again run the commands to grab the pod name of the Postgres pod and then exec into it, so that we can access the shell and gain access to the Postgres database that is running inside that pod.

So now we are inside the pod and we can execute all the Postgres commands. What I’m going to do now is run the psql client. And we are right inside the Postgres shell; if you notice, there are a few databases that are already available. Now let me get out of that environment and use the pgbench command to ingest about 5 million rows. So I’m going to run this command, which is going to… oops, I missed a step; I need to create the database first. So let me create it, and now I can execute the command to ingest a lot of rows. It’s going to take a bit of time.

So essentially, we are populating the sample database with a lot of rows so that we have some significant amount of data to deal with.
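
Condensed, the data-loading steps look like this sketch (the pxdemo database and pgbench user names assume the deployment above):

    POD=$(kubectl get pods -l app=postgres -o jsonpath='{.items[0].metadata.name}')
    kubectl exec -it $POD -- bash
    # Inside the pod; createdb is the step that was initially missed
    createdb -U pgbench pxdemo
    pgbench -i -s 50 -U pgbench pxdemo    # scale factor 50 initializes ~5 million rows
    psql -U pgbench pxdemo -c 'select count(*) from pgbench_accounts;'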

Alright! We should be done in just a minute or so, and then we can check the count of the records. After that, I’m going to do something very interesting: I’m going to simulate a pod failure. We’ll kill the pod and then bring it back with the data intact. So let’s wait for this to finish.

Alright! So now when I actually get into the database and look at the tables, we have pgbench_accounts, which is what we populated with the bulk data set. This shows us we have a lot of records; that is very good. Now, let’s get out of the psql environment and also out of the pod, and we are back at our workstation. What I’m going to do now is grab the node on which our Postgres pod is running. This gets us the actual node on which the pod is provisioned; this is the node name hosting the Postgres pod that we have been dealing with. Now, what we’ll actually do is use an operation called cordon. In Kubernetes, cordoning a node makes sure that no new pods are scheduled on it. It literally becomes unavailable, so that the master controller cannot even consider the node for provisioning or scheduling; it basically becomes unavailable for any new pods to get scheduled on it. That is what is called cordoning.

So we are cordoning it off to make sure that the new pod that we are going to create will not land on the same node; that’s the reason why we are taking it out of the scheduling pool. Now when I actually do “kubectl get nodes”, we’ll see that scheduling is disabled on this node, and any new pod will land on one of the other nodes but never on this one, because it says scheduling disabled. This is honored by the master, which will never try to schedule a pod on this node. Once we do that, I’m going to get the pod name, just to make sure we get the dynamic pod name; this is the pod responsible for running Postgres. And now, I’m gonna do something very crazy: I’m going to go ahead and delete this pod. So, technically, we are not going to have any pod running this workload, but because it is actually a deployment with a replica count of one, the Kubernetes controller will immediately create a new one. And if you see, it was created just 12 seconds ago.

That means as soon as the original pod got killed, because it’s part of a deployment whose desired count has always been one, the Kubernetes controller and scheduler have gone ahead and created a new pod. But let me also make sure we uncordon the node, because the new pod anyway wasn’t placed on the same node. So now we are ready to test the failover. Technically, we killed the pod, and now there’s a new pod running, and it has landed on one of the other nodes; it’s a very different pod from the previous one that we dealt with. We’ll exec into it, and if you actually look at the tables, there we go, everything is intact. We can also do the count of the rows, and it is exactly the same as before. This is after we have simulated a node failure through cordoning and after we have physically deleted the pod, and we still have the data intact.
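
Summarized as commands, the failover test looks like this sketch (label selectors assume the deployment above):

    # Node currently hosting Postgres
    NODE=$(kubectl get pods -l app=postgres -o jsonpath='{.items[0].spec.nodeName}')
    kubectl cordon $NODE          # no new pods can land on this node
    POD=$(kubectl get pods -l app=postgres -o jsonpath='{.items[0].metadata.name}')
    kubectl delete pod $POD       # the deployment immediately creates a replacement
    kubectl get pods -l app=postgres
    kubectl uncordon $NODE        # put the node back into the scheduling pool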

And this is without dealing with the HA of the database itself. Remember, we haven’t configured Postgres for HA. Instead, the Portworx storage fabric running beneath the workload is doing the magic here: it has replicated the data so that it is completely independent and autonomous of the workload, and all Kubernetes did was provision and schedule a new pod; because of the Portworx volume, it automatically sees the same data without any additional configuration. So this is basically doing a failover and recovering the data even after physically deleting the pod. Fantastic! So that’s the first part of the demo, and I have a couple more scenarios to talk about. The first one, of course, the nightmare for any DBA or DevOps engineer, is dealing with a deleted production database, and I’ve just shown you how we can get insurance against those scenarios by switching to a storage fabric like Portworx.

If that is one of the most critical use cases, there is another use case where you typically run out of space. How many times have you realized that no additional rows can be added because no space is left in the volume and it has reached its maximum capacity? In typical cloud environments, it is possible for you to create a snapshot, launch a new EBS volume with expanded capacity, and quickly attach and mount it. But how do you do that for container-native and cloud-native workloads? Well, you are covered, because you can do exactly the same thing with Portworx.

So what I’m going to do now is ingest more rows and simulate a scenario where we’ll run out of space. Let me move to the second scenario. We are again grabbing the pod that is running the Postgres workload; we’ll get into that pod and then basically double the number of records. Now, this is going to take slightly longer, but that’s fine, and it should actually throw some errors, because this will simulate ingesting so much data that we run out of space. So at some point we’ll see some errors and some warnings, and we’ll get past them by expanding the volume, to make sure we never face this issue with a production database.

Again, give me just a minute here, while we are populating this with some dummy data.

Okay. So, this is exactly what I wanted to see. You’ll notice that there are a lot of errors, and there is the most important one: “could not write to file: No space left on device”. This is a very common scenario with EBS; there is absolutely no space left to ingest additional data. Now what I’m going to do is SSH into one of the nodes, because I want to access the pxctl CLI and check where we are in terms of storage utilization. So now I’m inside the EKS node, and I’m going to get the pod and the volume details and run this command, which invokes pxctl with the volume information; passing the volume name will show us the volume details, and there we go.

Now 944 MB out of 1 GB is already consumed, which means we are not left with any more space, and this is a big problem. How do we expand this? Thankfully, there is a single command that we can execute: on the same volume, I’m going to make the size 2X. So when we say size equals 2, we are setting the size to 2 GB. The new volume size will be 2 GB, which is 2X of what we had, and when I go back and look at the status, we’ll notice that the size of the volume is 2 GB and this is green. Earlier we noticed that this was red, indicating we were very close to running out of capacity, and now it is all green, indicating that space is now available.
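
The expand itself is a one-liner; a sketch, run from a node or through a Portworx pod (size is in GB here, and $VOL is the volume name from the earlier inspect):

    /opt/pwx/bin/pxctl volume update $VOL --size=2    # grow the volume to 2 GB online
    /opt/pwx/bin/pxctl volume inspect $VOL            # confirm the new size and utilization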

So this is, again, another operation which is very common to storage administrators and DBAs. Excellent. Now, finally, we are running out of time, but I’ll quickly walk you through the steps involved in creating a snapshot and recovering from it. What I’m going to do now is create a snapshot; if you’re familiar with cloud block storage, you know that major cloud providers support the concept of snapshotting. So what I’m going to do here is create a new snapshot-based volume. I’m gonna show you the YAML file so you’ll understand: we are actually creating a new type called VolumeSnapshot, and we are associating it with the Postgres PVC that we originally created, which means we are asking the Portworx runtime to create a snapshot from the PVC that we created earlier. Once that is in place, I can go ahead and delete the database.
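
A sketch of that snapshot object, following the STORK/external-storage snapshot convention of the time (the snapshot name is illustrative):

    apiVersion: volumesnapshot.external-storage.k8s.io/v1
    kind: VolumeSnapshot
    metadata:
      name: px-postgres-snapshot
      namespace: default
    spec:
      persistentVolumeClaimName: px-postgres-pvc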

So again, we are getting into the pod to drop the database. So when we look at the databases, we have pxdemo. Now, I’m actually going to do something crazy again: I’m going to go ahead and drop that database. So now the database is dropped, and I can exit this. What I’m going to do next is create a new PVC from the snapshot. When you are creating a PVC, a persistent volume claim, there is a mechanism for you to point the PVC at an existing snapshot: an annotation, a long string starting with snapshot.alpha.kubernetes.io, which we point at the snapshot that we created earlier. And now, instead of creating a blank volume, what Portworx will do is create a volume for the PVC based on this snapshot.
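
A sketch of that clone PVC (the annotation is the “long string” mentioned above; stork-snapshot-sc is the storage class STORK provides for snapshot restores):

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: px-postgres-snap-clone
      annotations:
        snapshot.alpha.kubernetes.io/snapshot: px-postgres-snapshot
    spec:
      accessModes:
        - ReadWriteOnce
      storageClassName: stork-snapshot-sc
      resources:
        requests:
          storage: 2Gi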

So this is now going to create a volume from the previous snapshot that we created. After that, I’m going to launch my database all over again, and this time I pass a different YAML file; we can also take a look at it. This is, again, the standard Postgres deployment with one replica, and here I’m saying the claim name for my persistent volume is px-postgres-snap-clone. Basically this is pointing to what we created here: px-postgres-snap-clone. We are asking Kubernetes to create a new volume and mount it as the data directory for Postgres, but instead of creating a blank one, we are pointing it to the restored snapshot.

That’s it. So now the new application is created, a new deployment. We’ll again get into the database and check the availability of the data that we ingested in the previous step. Bear with me as I get into the database. Okay, so now we have the pgbench tables. We can again look at the count, and it’s exactly what we ingested before; basically, we are back to the same state as before deleting this workload. There we go. Now everything is in place. That’s the beauty of taking a snapshot and restoring it, without going through any complex operations. The same workflow that you are familiar with from EBS, Azure disks, or GCE persistent disks, you can pretty much apply to cloud-native storage based on Portworx.

That brings us to the end of this demo. We recently published an end-to-end blog post, and I want to quickly point you to it. All the steps that I showed you are available there, and it is very easy to access: portworx.com/postgres-amazon-eks. Michael, do you want to bring up that slide which has the URL?

Michael Ferranti: Yes. Hi. Thanks Jani. Okay, so I’m going to share my screen and you’ll get the URL. You can also just Google EKS Postgres and you should find that blog at the top of the search results. While I do that, please, if you have any questions for Janakiram or myself, please put them in the Q&A box. And I can see we have one question from Ron, so we’ll get to that in a second. Thank you very much Jani that was very enlightening as always. And so while I share my screen for that URL, Ron just asked a question, “What kinds of databases do you see people running on Amazon? Is it just Postgres?” Do you have any insights into that, Jani?

Janakiram MSV: Yeah, so there is no definitive answer to that. I never got a chance to look at a survey or an official report from Amazon, but the usual suspects are MySQL, Postgres, and even some of the big data workloads based on HDFS and HBase, plus Jupyter Notebooks; some of these are becoming very popular, particularly at the intersection of Kubernetes and machine learning workflows. And I’m seeing customers very keen to run some of those, excuse me, HDFS- and HBase-like workloads. But typically the top four databases that we encounter in the cloud are MySQL, Postgres, MongoDB, and Cassandra. And now SQL Server on Linux is also becoming popular; I think that is the fifth candidate.

Michael Ferranti: Yeah, I agree with all that. From the Portworx perspective, the big five that we see, frankly in Amazon as well as in other clouds and even on-premises, so maybe we could generalize the question to what databases we see people running in containers, are MySQL, Postgres, Cassandra, Kafka, and Elasticsearch. Those are really the big ones, but we see lots of other things as well. As AI comes along, we see a lot of TensorFlow being containerized, and we see HDFS and Spark workloads running in containers. Really, it runs the gamut. And I think that’s what is so powerful about containers: anything that needs to run on a server can run in a container, and you can take advantage of the automation benefits that come with Kubernetes. Okay, let’s see, do we have any other questions? I don’t see any other questions from the audience, but Jani, I have a question for you, given that you do so much writing and research and work with various companies: what advice would you give to people who want to put their toe in the water running stateful containers?

Janakiram MSV: Right, so get familiar with the way Kubernetes handles storage. I think there is a leap of faith when we move from VMs to containers, and what I’ve seen from my experience is that customers find it a bit hard to map what they know to the concepts of Kubernetes. So it’s important for customers to learn the basics of storage. For example, they should understand what a storage class is, what a persistent volume is, what a persistent volume claim is, how pods get associated with them, and what a typical life cycle looks like. Getting familiar with the terminology, the nomenclature, and the workflow is very, very critical; that’s number one.

Number two, the next step is to port some of your non-critical workloads and get familiar with the types of storage that are available, from simple host-based options like emptyDir and hostPath to more enterprise-grade offerings like Portworx. The Portworx offering is available as a 30-day trial, so you can go ahead and try it without really signing up or even paying; that’s a great first step. You can actually launch Portworx and move away from more primitive storage backends like hostPath or NFS or Gluster, and then use Portworx for high-throughput production databases.

And the third step is moving from just the stateless world of Kubernetes to running one or two stateful workloads on Kubernetes and carefully monitoring them. So that is the process I would suggest. Just to recap: first of all, get familiar with the concepts and the workflow of handling storage on Kubernetes. Second, understand how to run stateful workloads with some of the primitive storage backends. Third, pick an enterprise offering like Portworx, move one or two mission-critical workloads, constantly monitor and get familiar with it, and then go live with your full-fledged workloads.

Michael Ferranti: Okay, great! I think that was a great explanation, and it looks like we’ve got another question that just came in from Jack, which is: does running databases in containers for large-scale applications make sense? I’m paraphrasing there. So Jani, do you have a perspective on that? I certainly do and I can share it, but I think the attendees would love to hear from you. So, is running databases in containers worth it for large-scale applications?

Janakiram MSV: Absolutely. So Kubernetes is currently at version 1.10, and I think until about version 1.6 there was a lot of resistance; even the cloud providers and some of the creators of Kubernetes didn’t really encourage running stateful workloads on Kubernetes. But over the last four versions or so, the introduction of stateful sets and the native integration of storage providers like Portworx have made Kubernetes a preferred environment to run stateful workloads along with stateless ones. So this is a new area; you don’t have a lot of case studies, documented evidence, and runbooks to explain what it takes to run mission-critical, large-scale databases on Kubernetes, but that is definitely improving.

I don’t know what the scale is when you say large scale, but it is definitely possible for you to run traditional database workloads in HA mode with all the primitives and all the standard storage ops in place. What I walked you through covers exactly those: the failover part, the snapshot-and-restore part, and expanding and shrinking volumes. All of them are possible, and this is going to get more and more mature as Kubernetes progresses and as platform providers like Portworx keep innovating and increasing their investment in this area.

Michael Ferranti: Yeah, I agree 100%. I think one thing that I would add is to flip the question on its head a little bit and say: if it’s a large-scale, important application, you should definitely run it in containers. Of course, I would say that, but what’s behind that comment? What’s behind it are customers we have like NIO, a self-driving car company, an autonomous vehicle company, so you can see that this company is at the cutting edge of an entire new technology category. The key to their survival is two-fold. One, it’s just a blazing-fast pace of innovation, right? All of the major car companies have an autonomous vehicle program, and NIO needs to out-compete much bigger and better-funded companies. But these are also cars that carry people, and so the reliability of those systems is much closer to an airline than to a traditional web application like Google.

If Google search isn’t working, you don’t run into a tree, whereas if your autonomous vehicle fails, there are real-life consequences. And NIO chose containers because it is the easiest way for them to quickly iterate on their application, to try new things, to deploy new feature branches, and to respond, for instance, to security issues where they have to redeploy all of their applications to patch OSs. Containers are the easiest way to do that. And at the same time, it allows them to manage big data applications and machine learning algorithms, and to do batch workloads and data processing in a much easier and more scalable manner across different environments.

So, containers are new and thus scary, but as people learn the value of containers, how they help you be more efficient with your infrastructure, move faster, get to production quicker, and respond in a more automated way to outages and failures in large, complex distributed systems, I think this question, which is a very good one, and I’m not being dismissive of it, will fade; in a couple of years, we will start to say, “Why weren’t we doing it this way before?”

It would be a little bit like going back 10 years and asking, “Should I run databases on VMware or should I keep them on my bare metal servers?” That question today wouldn’t make a whole lot of sense. And if people didn’t run databases in VMs, then Amazon Web Services, which today’s webinar is built on, would not exist, because 100% of Amazon Web Services runs on virtualization. I think containers are going to be a similar deployment mechanism for all types of applications, no matter how mission-critical and no matter how large-scale.

So that is it for our questions. Thank you very much, Jack. Thank you, Ron, for asking questions. Everybody else, we will be sending out an email tomorrow with a copy of the recording, so if you want to share it with anybody else on your team or give it another listen, we are happy to share that with you. And as always, if you have any other questions, just ping us at info@portworx.com. You can go to our website if you wanna see a demo of Portworx or talk a little bit more about your use case; there’s a button at the top of the page that says “Request a Demo.” If you put in some information about your use case, we’re happy to reach out. And thank you again for attending; we hope to do it again sometime. Thank you very much.


Janakiram MSV, Contributor | Certified Kubernetes Administrator (CKA) and Certified Kubernetes Application Developer (CKAD)