
This tutorial is a walk-through of how to deploy MySQL on Google Kubernetes Engine (GKE) by Certified Kubernetes Administrator (CKA) and Application Developer (CKAD) Janakiram MSV.

TRANSCRIPT:

Janakiram MSV: Hi. In this demo, I want to walk you through how to set up a high-availability MySQL cluster on Google Kubernetes Engine, backed by Portworx. We have a default GKE instance, configured with three worker nodes, running in asia-south1-a. So let’s take a look at the environment. We have a three-node cluster with Portworx already configured, and we can confirm this by querying the kube-system namespace for all the pods that match the Portworx label. This indicates that our environment is properly configured: a three-node cluster with a Portworx storage cluster running on top of it. Now it’s time for us to install the prerequisites for running MySQL in HA mode. The very first step in creating an HA cluster on top of Portworx is to create a storage class.
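For reference, that environment check can be done with a kubectl query along these lines; the name=portworx label selector is an assumption based on the default labels of the Portworx DaemonSet:

    # List the Portworx pods running in the kube-system namespace
    # (adjust the label selector if your Portworx install uses different labels)
    kubectl get pods -n kube-system -l name=portworx -o wide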

So a storage class is like a driver within the Kubernetes environment. While this definition looks very familiar to Kubernetes users, we’ll notice that Portworx comes with additional parameters, like the replication factor, IO profile, file system, priority, and so on. The most important aspect here is the replication factor, and a replication factor of three indicates that data is going to be replicated across three nodes. That means we get redundancy across two additional nodes: every time we write a piece of information to any one node, it automatically gets replicated to the other two nodes. We can also specify the file system. It is an optional parameter, but it lets us choose what kind of file system the Portworx volume will be formatted with. So this is the first step in creating a workload backed by Portworx. Let’s go ahead and create the storage class.
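A storage class along those lines might look like the following sketch. The name px-ha-sc and the replication factor of three come from the transcript; the IO profile, IO priority, and file system values are assumptions and should be adjusted for your environment:

    kind: StorageClass
    apiVersion: storage.k8s.io/v1
    metadata:
      name: px-ha-sc                  # storage class referenced by the PVC in the next step
    provisioner: kubernetes.io/portworx-volume
    parameters:
      repl: "3"                       # replication factor: data is kept on three nodes
      io_profile: "db"                # assumed IO profile suitable for databases
      priority_io: "medium"           # assumed IO priority (matches the value pxctl reports later)
      fs: "xfs"                       # optional file system the volume is formatted with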

We can now query the available storage classes, and px-ha-sc indicates that we have a storage class created specifically for our workload. With that in place, the second step is to create a PVC, or persistent volume claim. A persistent volume claim is an intermediary layer between the workload, which is MySQL in our case, and the storage class. Here, we notice that we are annotating this object with the storage class that we created in the previous step. Thanks to dynamic provisioning, when we create this object in Kubernetes, it is going to result in a 1 GB volume whose data is replicated across all three nodes, because of the replication factor of three.
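A PVC along these lines would tie the workload to that storage class. The PVC name px-mysql-pvc is a placeholder chosen for this sketch; the storage-class annotation mirrors what the transcript describes:

    kind: PersistentVolumeClaim
    apiVersion: v1
    metadata:
      name: px-mysql-pvc              # hypothetical PVC name; use the name from your own manifest
      annotations:
        volume.beta.kubernetes.io/storage-class: px-ha-sc   # storage class created in the previous step
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 1Gi                # 1 GB volume, replicated according to the storage class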

So this is the PVC. And again, unlike other environments, we don’t need to create a persistent volume first and the persistent volume claim later. That’s because we have dynamic provisioning, which ensures that the underlying volume is created on demand from a specific storage class, so this bypasses the step of creating a persistent volume first and then claiming that volume later. So let’s go ahead and create the PVC. We can query it with kubectl get pvc, and here we notice that there is a PVC created with a size of 1 GB, and it is already bound; because of dynamic provisioning, we never had to create a PV ourselves. So with the storage class and PVC in place, it’s time for us to create the workload.
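The create-and-verify steps might look roughly like this; the manifest file name and the PVC name are placeholders carried over from the sketch above:

    # Create the PVC from its manifest (file name is illustrative)
    kubectl apply -f px-mysql-pvc.yaml

    # Confirm the claim was dynamically provisioned and is already Bound
    kubectl get pvc px-mysql-pvc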

So let’s take a look at the workload. This is declared as a Deployment, a standard Kubernetes object, and we are creating it with exactly one replica; if you notice, replicas is set to one. The other important thing to note here is that the volume mount points to the PVC that we created in the previous step, and this ensures that the pod is backed by the exact PVC that is bound to the Portworx storage cluster. Everything else is a standard definition of a deployment, and because we are targeting a replication factor of three, this plain vanilla deployment with just one replica automatically becomes highly available. So let’s go ahead and create our MySQL database instance on top of Portworx; a sketch of such a manifest follows below.
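A minimal Deployment sketch in that spirit is shown here; the MySQL image tag, the root password handling, and the PVC name are assumptions rather than the exact manifest used in the video:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: mysql
    spec:
      replicas: 1                     # a single MySQL replica; Portworx provides the data redundancy
      selector:
        matchLabels:
          app: mysql
      template:
        metadata:
          labels:
            app: mysql
        spec:
          containers:
          - name: mysql
            image: mysql:5.6          # assumed image tag
            env:
            - name: MYSQL_ROOT_PASSWORD
              value: password         # placeholder only; use a Secret in a real deployment
            ports:
            - containerPort: 3306
            volumeMounts:
            - name: mysql-data
              mountPath: /var/lib/mysql   # MySQL data directory backed by the Portworx volume
          volumes:
          - name: mysql-data
            persistentVolumeClaim:
              claimName: px-mysql-pvc     # the PVC created in the previous step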

Alright, so let’s wait till the pod is available, then we can query it. Currently, the container is getting created; in just a few seconds, the pod is going to move to the Running status, with all of its containers available, and then we can access the MySQL shell. There we go. Now we have a single replica of MySQL running as part of the deployment, so it’s time for us to introspect a few things. Let’s take a look at the volume associated with the PVC. We are essentially querying the PVC so that we can get the volume name, and when we look at that volume name, we see it is a Portworx-specific volume bound to this PVC.
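These checks could be run along the following lines; the app=mysql label and the PVC name carry over from the sketches above and are assumptions:

    # Watch the MySQL pod until it reaches the Running status
    kubectl get pods -l app=mysql -w

    # Open a MySQL shell inside the running pod (prompts for the root password)
    POD=$(kubectl get pods -l app=mysql -o jsonpath='{.items[0].metadata.name}')
    kubectl exec -it $POD -- mysql -u root -p

    # Retrieve the Portworx volume name bound to the PVC
    VOL=$(kubectl get pvc px-mysql-pvc -o jsonpath='{.spec.volumeName}')
    echo $VOL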

Now, we are going to grab the name of one of the Portworx pods that belong to the Portworx DaemonSet. With that, we can run the exec command against a binary that Portworx ships called pxctl, and inspect the volume that was created through the PVC using dynamic provisioning, as sketched below. When we inspect it, we notice a lot of interesting things. First, it is 1 GB in size, based on our PVC. The IO priority is medium, and the file system is XFS. This specific volume is attached to one of the nodes of the GKE cluster, and because we set the replication factor to three, it is replicated across three different nodes. We also notice that it is currently being used by the MySQL replica set, which is part of the deployment. That’s it, now we are all set to run a production MySQL workload on GKE and Portworx. Thanks for watching.
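That inspection step could be scripted roughly as follows; the name=portworx label selector and the pxctl path inside the Portworx pod are assumptions based on a typical Portworx installation:

    # Grab the name of one Portworx pod from the DaemonSet in kube-system
    PX_POD=$(kubectl get pods -n kube-system -l name=portworx -o jsonpath='{.items[0].metadata.name}')

    # Inspect the dynamically provisioned volume with pxctl
    # (VOL holds the volume name retrieved from the PVC earlier)
    kubectl exec -n kube-system $PX_POD -- /opt/pwx/bin/pxctl volume inspect $VOL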
