In this AWS EKoSystem Day demo, Portworx CTO and Co-Founder Gou Rao shows how teams can quickly and easily run stateful applications in EKS across multiple Availability Zones.
TRANSCRIPT:
Gou Rao: Hi guys, yeah, my name is Gou Rao, I’m with Portworx, thanks for having us. I’m here to show how Portworx works with EKS, but first a little bit of background on Portworx. We focus on running stateful applications in Kubernetes. Kubernetes provides a cloud-agnostic, multi-region, multi-zone compute layer; what Portworx does is provide a storage overlay that sits beneath Kubernetes and virtualizes your underlying storage so that your PVCs are available across multiple regions and multiple zones. So I’m gonna show you a couple of demos today on how that works within EKS. I’ll be showing how you can run stateful applications across different EKS clusters even if they’re in different regions or zones. I’ll also show you another demo where you can run applications on-prem and in EKS and move stateful applications across these clusters. To show you how that works, I need to quickly set the stage for how Portworx itself works. Portworx is just another container that runs in your Kubernetes environment. It’s deployed as a DaemonSet, so there is a Portworx container running on every compute node.
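For reference, that DaemonSet deployment looks roughly like the sketch below. This is only an illustration; the real spec is generated from install.portworx.com and includes the full set of arguments, mounts, and RBAC, and the image tag, cluster name, and flags shown here are placeholders.

```yaml
# Illustrative sketch of the Portworx DaemonSet; the actual spec comes from the
# install.portworx.com spec generator. Values below are placeholders.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: portworx
  namespace: kube-system
spec:
  selector:
    matchLabels:
      name: portworx
  template:
    metadata:
      labels:
        name: portworx
    spec:
      hostNetwork: true                          # Portworx shares the node's network
      containers:
        - name: portworx
          image: portworx/oci-monitor:2.x.y      # placeholder version
          args: ["-c", "px-demo-cluster",        # cluster name (placeholder)
                 "-a"]                           # use all available, unmounted drives
```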
The same thing is true when running in your EKS cluster. When Portworx runs, it virtualizes the underlying storage. So when you’re running in AWS, these are EBS drives; Portworx sits on top of your EBS volumes and detects which region and which zone it’s in. The most important thing is that it provides a layer on top of that, so when you deploy applications like Postgres, Cassandra, or MySQL, those containers don’t actually see the underlying EBS drives. When they allocate storage, they get a Portworx virtual volume. To that extent, Portworx is a software-defined storage solution, purpose-built for applications that are deployed as containers and managed by a container orchestrator like Kubernetes. When your applications are running, the container-granular volumes are directly provided by Portworx. So we support a global namespace, block volumes, and different kinds of volume workflows for applications like TensorFlow and so on. Portworx is part of the CNCF stack. It plugs in as a CSI provider; Portworx was one of the first implementations of a CSI provider. Basically it sits behind the scenes: you are just using Kubernetes to allocate and create volumes, and Portworx is doing all of the heavy lifting behind the scenes for you.
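To make that concrete, here is a minimal sketch of how an application asks for a Portworx virtual volume rather than a raw EBS drive: a StorageClass that uses the Portworx provisioner and a PVC that references it. The class name, PVC name, size, and the io_profile tuning parameter are illustrative.

```yaml
# Sketch: request a Portworx virtual volume through a StorageClass and PVC.
# The provisioner name is the documented in-tree Portworx provisioner;
# everything else (names, size, io_profile) is illustrative.
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: px-db-sc
provisioner: kubernetes.io/portworx-volume
parameters:
  repl: "3"          # keep three replicas of the data across nodes
  io_profile: "db"   # assumption: tune the volume for database I/O
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: mysql-data
spec:
  storageClassName: px-db-sc
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
```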
I’m gonna get into a couple of demos, but just to show you some of the solution components and talk about what you’re going to see in the demo: Portworx takes care of the entire data lifecycle management. So when you deploy a stateful application in Kubernetes with Portworx, everything from volume provisioning through Kubernetes to data lifecycle management, for example taking snapshots, encryption, or backing the data up, is managed directly through Kubernetes. It plugs in with various solution components: Vault if you’re using key management, S3 if you need a target to back your data up, and STORK, which is a storage orchestrator for Kubernetes. I’ll be showing how STORK plays a role in doing some of the data lifecycle management you’re gonna see today. So the first demo I wanna show you is two different EKS clusters running within AWS in different availability zones. A very common thing for people to do is blue-green deployments, for example a test cluster and a production cluster; we have a number of customers running very large WordPress hosting sites this way.
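As a taste of what “managed directly through Kubernetes” means, a snapshot of a PVC is itself just a Kubernetes object that STORK picks up. The sketch below uses the external-storage VolumeSnapshot CRD that Portworx documents for STORK snapshots; the names and namespace are illustrative.

```yaml
# Sketch: a STORK-handled snapshot of a Portworx-backed PVC.
# Names and namespace are illustrative.
apiVersion: volumesnapshot.external-storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: mysql-snapshot
  namespace: wordpress
spec:
  persistentVolumeClaimName: mysql-data   # the PVC to snapshot
```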
In this demo, I’m gonna focus on WordPress, but what you’re going to see will work for any stateful application: Cassandra, Kafka, TensorFlow and so on. I really wanna emphasize this: we focus on storage and the data associated with it. So when I move applications from one cluster to another, it’s important to note that your entire namespace, your volumes, your data, all of it is moving from one cluster to another, so the heavy lifting of managing the data is what you want to focus on here. So I’m just gonna log into my AWS Console. You can see here, when I click on clusters, I have two clusters: a test EKS cluster, which is what I’m going to log into first, and a production EKS cluster, which is where I’ll be migrating my WordPress application to. I’m just gonna quickly cut over to the Kubernetes console over here.
You can see that I already have WordPress up and running. I’ll log into the backend systems and show you the PVCs associated with it. Hopefully, this is visible… Is that good? So you can see on the right-hand side, I’m logged into the test EKS cluster. If you look at the nodes, you’ll see that I have three nodes there; you can look at the Portworx namespace and there is Portworx already up and running. If I look at the PVCs, you can see the PVCs that are up and bound, and these are the same PVCs you see if you go into the Kubernetes console. Just log in, click on my volumes, and those are the volumes associated with this cluster. Now what I’m gonna do is show you the WordPress deployment, so I have MySQL running and three WordPress pods that are also running. You can see the deployment; WordPress is available. I will go over… Let me just copy the URL associated with this deployment. I just have a very simple, basic… Oops, sorry.
A really simple WordPress site that’s deployed. This WordPress site has WordPress containers with their content volume, which is a global namespace volume, and there’s a MySQL container associated with it which has a database volume; that’s what’s powering this test site. So now, if I use STORK to get the cluster pairs, you can see that the two EKS clusters are paired. On the test cluster, I can see that it’s paired with the production EKS cluster. So what I can do now is start a migration, so I’m going to say, “Migrate the entire WordPress application.” What it does is take a snapshot of the MySQL container, take a snapshot of the WordPress containers, and start migrating them to the production site. So I’ll just go over to the production cluster. You can see that I have three completely different nodes. Let me get the migration status. You can see that WordPress is currently being migrated. It’s being migrated from the test cluster; we wait for the migration to complete, and now the two clusters are in sync. So now I can hop back over to AWS. If I look at my production EKS cluster, you’ll see that this is also running, so I’ll just log in to the production EKS cluster over here, and I should see… Let me just log in.
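Behind that “migrate the entire WordPress application” step is a STORK Migration object, roughly like the sketch below. The cluster-pair name, namespace, and object name are illustrative; the spec fields follow the STORK Migration CRD.

```yaml
# Sketch of the STORK Migration behind the WordPress demo.
# clusterPair must reference an existing ClusterPair; names are illustrative.
apiVersion: stork.libopenstorage.org/v1alpha1
kind: Migration
metadata:
  name: wordpress-migration
  namespace: wordpress
spec:
  clusterPair: production-eks          # the paired destination cluster
  namespaces:
    - wordpress                        # migrate this entire namespace
  includeResources: true               # copy deployments, services, secrets, etc.
  includeVolumes: true                 # copy the Portworx volumes and their data
  startApplications: true              # scale the applications up on the destination
```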
And I’ll see that my volumes have been migrated over. So now I can go over to the production EKS cluster’s URL, and… Oops.
And my WordPress site has been migrated. So it’s pretty easy to do these things. Like I mentioned, we have a number of customers running WordPress in these clusters, and everything I showed here can be done programmatically. If you’re running a large Kubernetes cluster, you’re probably not using the CLI each time, so there’s a REST API and a Golang API to do everything I did programmatically. So I’m gonna quickly show you another demo, where I’m going to move data from an on-prem, virtual machine cluster to an EKS cluster. Same concept. If you look here, on my right-hand side I have my virtual machine cluster, so if I get the nodes, you can see that they’re running on-prem. I have a Postgres persistent volume claim. I’ll just look at the Postgres status and it’s running. If I get my deployments you’ll see that Postgres is up. What I’m gonna do is log in to the Postgres instance and show you the Portworx volume. You’ll see that there is a Portworx volume that’s up and running and it has replicated the data to three nodes. We will go in and dump the databases that exist over here. Now what I can do is get the cluster pairs. I’ll show you how you pair the clusters. There are no cluster pairs currently.
So what I do is get the cluster authentication information and come over to EKS. In EKS, I can simply apply that cluster pair, and this gives EKS access to my virtual machine cluster and its data. So if I look here now, I can see my EKS environment is paired with my virtual machine environment. Based on that, what I can do is go over to my virtual machine environment and say, “Start migrating my application Postgres,” and then I’ll log in to my EKS environment, and if I get the migration status, you can see that Postgres is currently syncing. I’ll get the migration status again, and Postgres is up and ready. If I get the PVCs, you’ll see that the Postgres PVC has moved over. Postgres as an entire application, along with its namespace, has also been instantiated. I’m going to log into Postgres and dump the databases.
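The cluster pair being applied here is a STORK ClusterPair object. The sketch below shows its general shape; in practice it is generated with storkctl on one cluster and applied on the other, and every value shown is a placeholder.

```yaml
# Rough sketch of a STORK ClusterPair; the real object is generated by storkctl
# and carries the remote cluster's kubeconfig plus the Portworx endpoint and token.
# All values below are placeholders.
apiVersion: stork.libopenstorage.org/v1alpha1
kind: ClusterPair
metadata:
  name: onprem-to-eks
  namespace: demo
spec:
  config: {}                        # kubeconfig-style access to the remote Kubernetes API
                                    # (filled in by storkctl; omitted here)
  options:
    ip: "<remote-portworx-node-ip>" # placeholder: a Portworx node in the remote cluster
    port: "9001"                    # default Portworx management port
    token: "<cluster-token>"        # placeholder: token shared by the remote Portworx cluster
```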
The other thing I can do now, since these clusters are paired, is say directly from EKS, “Show me all of the applications that are running in VMware in the namespace “Eric”.” And it shows me the applications that are running. I can say, “Grab that entire namespace from my VM cluster and run it in my EKS cluster.” What it does is start snapshotting those applications and migrating the data, and if I get the migration status, you can see that some are syncing and some have been migrated. So really what I’m trying to demonstrate here is that with EKS, Portworx, and Kubernetes, you can run these very complex stateful applications across multiple zones, multiple regions, and multiple clouds. Again, everything is done programmatically, plugs into Kubernetes, and is simple to deploy. The last thing I wanna show is a call to action. If you wanna find out more about Portworx, please visit us; the best place to get started is docs.portworx.com. If you’re a developer and wanna use the programmatic API, go to openstorage.org. For running stateful applications, our job is to make it as easy as running ephemeral or stateless applications. Thank you very much.
Brent Langston: Come join us. So that was an amazing demo.
Gou Rao: Thank you.
Brent Langston: And it’s obvious that running stateful applications is definitely becoming easier. It’s also a very in-demand thing. What do you find to be the most common stateful application that customers are running?
Gou Rao: That’s a great question. I’d put it in a couple of different buckets. When it comes to databases, applications like Postgres and Cassandra are pretty much at the top. We find a lot of people running message queues like Kafka; if they’re building a data pipeline, for example, Kafka is usually involved. Another way to answer that question is by vertical. If somebody is focused on WordPress, then WordPress has an entire stack, which will involve MySQL and WordPress, so two different types of application containers. Another vertical where we’re seeing a lot of traction is the IoT and data science space. TensorFlow is a very big, popular stateful application that we see people running.
Abby Fuller: Awesome.
Brent Langston: Cool. That’s awesome.
Abby Fuller: Thank you so much.
Brent Langston: Thank you very much.
Abby Fuller: Another round of applause please.
Gou Rao: Thank you.
Abby Fuller: And then just go see Chris over there with your microphone.