
This post is part of our ongoing series on running Elasticsearch (ELK) for Kubernetes.  We’ve published a number of articles about running Elasticsearch on Kubernetes for specific platforms and for specific use cases.  If you are looking for a specific Kubernetes platform, check out these related articles.

Running HA ELK on Amazon Elastic Container Service for Kubernetes (EKS)

Running HA ELK on Google Kubernetes Engine (GKE)

Running HA ELK on Azure Kubernetes Service (AKS)

Running HA ELK on Red Hat OpenShift

Running HA ELK with Rancher Kubernetes Engine (RKE)

Running HA ELK on IBM Cloud Private

And now, onto the post…

IBM Cloud Kubernetes Service is a managed Kubernetes offering running in IBM Cloud. It is designed to deliver powerful tools, an intuitive user experience, and built-in security for rapid delivery of applications that can be bound to cloud services related to IBM Watson, IoT, DevOps, and data analytics. As a CNCF certified Kubernetes provider, IBM Cloud Kubernetes Service provides intelligent scheduling, self-healing, horizontal scaling, service discovery and load balancing, automated rollouts and rollbacks, and secret and configuration management. The service also offers advanced capabilities around simplified cluster management, container security and isolation policies, the ability to design a cluster with a custom configuration, and integrated operational tools for consistency in deployment.

Portworx is a cloud native storage platform to run persistent workloads deployed on a variety of orchestration engines including Kubernetes. With Portworx, customers can manage the database of their choice on any infrastructure using any container scheduler. It provides a single data management layer for all stateful services, no matter where they run.

This tutorial is a walk-through of the steps involved in deploying and managing a highly available ELK stack on IBM Cloud Kubernetes Service (IKS).

In summary, to run ELK stack on IKS you need to:

  • Launch an IKS cluster running on bare metal servers with software-defined storage (SDS)
  • Install a cloud native storage solution like Portworx as a DaemonSet on IKS
  • Create a storage class defining your storage requirements like replication factor, snapshot policy, and performance profile
  • Deploy Elasticsearch as a StatefulSet on Kubernetes
  • Deploy Kibana ReplicaSet on Kubernetes
  • Ingest data from Logstash into Elasticsearch, and visualize it through the Kibana dashboard
  • Test failover by killing or cordoning nodes in your cluster
  • Take an application consistent backup with 3DSnap and restore Elasticsearch cluster

Launching an IKS Cluster

For running stateful workloads in a production environment backed by Portworx, it is highly recommended to launch an IKS cluster based on bare metal servers and software-defined storage. The minimum requirements of a worker node to successfully run Portworx include:

  • 4 CPU cores
  • 4GB memory
  • 128GB of raw unformatted storage
  • 10Gbps network speed
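
To quickly check whether your worker nodes meet the CPU and memory requirements, you can list the capacity reported by each node (a convenience check, not an official Portworx prerequisite):

$ kubectl get nodes -o custom-columns=NAME:.metadata.name,CPU:.status.capacity.cpu,MEMORY:.status.capacity.memory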

For details on launching a Kubernetes cluster with bare metal worker nodes, please refer to the documentation of IBM Cloud Kubernetes Service.

We are using an IKS cluster with four nodes, three of which are bare metal servers with SDS based on the instance type ms2c.4x32.1.9tb.ssd.encrypted. Only the machines that meet the prerequisites will be used by Portworx.

When we filter the nodes based on the instance-type label, we see the following nodes:

$ kubectl get nodes -l beta.kubernetes.io/instance-type=ms2c.4x32.1.9tb.ssd.encrypted
NAME           STATUS   ROLES    AGE    VERSION
10.177.26.18   Ready    <none>   4d7h   v1.13.2+IKS
10.185.22.28   Ready    <none>   4d7h   v1.13.2+IKS
10.73.90.131   Ready    <none>   4d3h   v1.13.2+IKS

To exclude nodes that don’t meet the Portworx prerequisites, you can apply a label that skips the installation of Portworx. For example, the command below applies the label to the node named 10.185.22.14, which doesn’t run on a bare metal server.

$ kubectl label nodes 10.185.22.14  px/enabled=false --overwrite
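
To confirm which nodes the Portworx installer will skip, list the nodes carrying that label:

$ kubectl get nodes -l px/enabled=false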

Installing Portworx in IKS

Installing Portworx on IKS is not very different from installing it on any other Kubernetes cluster. It is recommended that you create an etcd instance through Compose for etcd. You can use the Helm chart to install the Portworx cluster in IKS. The Portworx documentation for IKS has the prerequisites and instructions to install and configure Portworx, STORK, and other components.

At the end of the installation, we will have the Portworx DaemonSet running on all nodes except those that were filtered out in the previous step.
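
Once the DaemonSet pods are up, you can check the health of the Portworx cluster with the pxctl utility from inside any of the Portworx pods (the same pattern is used later in this walkthrough to inspect volumes):

$ PX_POD=$(kubectl get pods -l name=portworx -n kube-system -o jsonpath='{.items[0].metadata.name}')
$ kubectl exec -it $PX_POD -n kube-system -- /opt/pwx/bin/pxctl status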

Creating a storage class for ELK stack

Once the IKS cluster is up and running, and Portworx is installed and configured, we will deploy a highly available ELK stack in Kubernetes.

Through storage class objects, an admin can define different classes of Portworx volumes that are offered in a cluster. These classes will be used during the dynamic provisioning of volumes. The storage class defines the replication factor, I/O profile (e.g., for a database or a CMS), and priority (e.g., SSD or HDD). These parameters impact the availability and throughput of workloads and can be specified for each volume. This is important because a production database will have different requirements than a development Jenkins cluster.
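
For reference, a Portworx storage class can also pin the I/O profile and priority through the io_profile and priority_io parameters, as in the sketch below (the px-db-sc name is just for illustration); the class we create next for the ELK stack only sets the replication factor.

kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  name: px-db-sc
provisioner: kubernetes.io/portworx-volume
parameters:
  repl: "3"
  io_profile: "db"
  priority_io: "high"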

$ cat > px-elk-sc.yaml << EOF
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
    name: px-ha-sc
provisioner: kubernetes.io/portworx-volume
parameters:
   repl: "3"
EOF

Create the storage class and verify it’s available in the default namespace.

$ kubectl create -f px-elk-sc.yaml
storageclass.storage.k8s.io/px-ha-sc created

$ kubectl get sc
NAME                         PROVISIONER                     AGE
default                      ibm.io/ibmc-file                8d
ibmc-file-bronze (default)   ibm.io/ibmc-file                8d
ibmc-file-custom             ibm.io/ibmc-file                8d
ibmc-file-gold               ibm.io/ibmc-file                8d
ibmc-file-retain-bronze      ibm.io/ibmc-file                8d
ibmc-file-retain-custom      ibm.io/ibmc-file                8d
ibmc-file-retain-gold        ibm.io/ibmc-file                8d
ibmc-file-retain-silver      ibm.io/ibmc-file                8d
ibmc-file-silver             ibm.io/ibmc-file                8d
portworx-db-sc               kubernetes.io/portworx-volume   19h
portworx-db2-sc              kubernetes.io/portworx-volume   19h
portworx-null-sc             kubernetes.io/portworx-volume   19h
portworx-shared-sc           kubernetes.io/portworx-volume   19h
px-ha-sc                     kubernetes.io/portworx-volume   7s
px-repl3-sc                  kubernetes.io/portworx-volume   41h
stork-snapshot-sc            stork-snapshot                  19h

Deploying Elasticsearch StatefulSet on IKS

Now, let’s create an Elasticsearch cluster as a Kubernetes StatefulSet object. Like a Kubernetes deployment, a StatefulSet manages pods that are based on an identical container spec. Unlike a deployment, a StatefulSet maintains a sticky identity for each of its Pods. For more details on StatefulSets, refer to the Kubernetes documentation.

A StatefulSet in Kubernetes requires a headless service to provide network identity to the pods it creates. The following command and the spec will help you create a headless service for your Elasticsearch installation.

$ cat > px-elastic-svc.yaml << EOF
kind: Service
apiVersion: v1
metadata:
  name: elasticsearch
  labels:
    app: elasticsearch
spec:
  selector:
    app: elasticsearch
  clusterIP: None
  ports:
    - port: 9200
      name: rest
    - port: 9300
      name: inter-node
EOF
$ kubectl create -f px-elastic-svc.yaml
service/elasticsearch created
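
You can confirm that the headless service was created and has no cluster IP assigned:

$ kubectl get svc elasticsearch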

Now, let’s go ahead and create a StatefulSet running Elasticsearch cluster based on the below spec.

cat > px-elastic-app.yaml << EOF
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: es-cluster
spec:
  serviceName: elasticsearch
  replicas: 3
  selector:
    matchLabels:
      app: elasticsearch
  template:
    metadata:
      labels:
        app: elasticsearch
    spec:
      containers:
      - name: elasticsearch
        image: docker.elastic.co/elasticsearch/elasticsearch-oss:6.4.3
        resources:
            limits:
              cpu: 1000m
            requests:
              cpu: 100m
        ports:
        - containerPort: 9200
          name: rest
          protocol: TCP
        - containerPort: 9300
          name: inter-node
          protocol: TCP
        volumeMounts:
        - name: data
          mountPath: /usr/share/elasticsearch/data
        env:
          - name: cluster.name
            value: px-elk-demo
          - name: node.name
            valueFrom:
              fieldRef:
                fieldPath: metadata.name
          - name: discovery.zen.ping.unicast.hosts
            value: "es-cluster-0.elasticsearch,es-cluster-1.elasticsearch,es-cluster-2.elasticsearch"
          - name: discovery.zen.minimum_master_nodes
            value: "2"
          - name: ES_JAVA_OPTS
            value: "-Xms512m -Xmx512m"
      initContainers:
      - name: fix-permissions
        image: busybox
        command: ["sh", "-c", "chown -R 1000:1000 /usr/share/elasticsearch/data"]
        securityContext:
          privileged: true
        volumeMounts:
        - name: data
          mountPath: /usr/share/elasticsearch/data
      - name: increase-vm-max-map
        image: busybox
        command: ["sysctl", "-w", "vm.max_map_count=262144"]
        securityContext:
          privileged: true
      - name: increase-fd-ulimit
        image: busybox
        command: ["sh", "-c", "ulimit -n 65536"]
        securityContext:
          privileged: true
  volumeClaimTemplates:
  - metadata:
      name: data
      annotations:
        volume.beta.kubernetes.io/storage-class: px-ha-sc      
      labels:
        app: elasticsearch
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: px-ha-sc
      resources:
        requests:
          storage: 10Gi
EOF
$ kubectl apply -f px-elastic-app.yaml
statefulset.apps/es-cluster created

Verify that all the pods are in the Running state before proceeding further.

$ kubectl get statefulset
NAME        DESIRED   CURRENT   AGE
es-cluster   3         3         36s
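
You can also list the pods directly to make sure each replica of the StatefulSet reports the Running status:

$ kubectl get pods -l app=elasticsearch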

Let’s also check if persistent volume claims are bound to the volumes.

$ kubectl get pvc
NAME                STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
data-es-cluster-0   Bound    pvc-f344425d-31e7-11e9-930d-4e511e6b17c9   10Gi       RWO            px-ha-sc       78s
data-es-cluster-1   Bound    pvc-010cc073-31e8-11e9-930d-4e511e6b17c9   10Gi       RWO            px-ha-sc       55s
data-es-cluster-2   Bound    pvc-0b75991a-31e8-11e9-930d-4e511e6b17c9   10Gi       RWO            px-ha-sc       38s

Notice the naming convention Kubernetes follows for the pods and volume claims. The ordinal index appended to each object indicates the association between a pod and its volume claim.
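
If you want to see that mapping explicitly, you can print each pod next to the claim it mounts; this is just a convenience query using kubectl's JSONPath support:

$ kubectl get pods -l app=elasticsearch -o jsonpath='{range .items[*]}{.metadata.name}{" -> "}{.spec.volumes[?(@.name=="data")].persistentVolumeClaim.claimName}{"\n"}{end}'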

We can now inspect the Portworx volume associated with one of the Elasticsearch pods by accessing the pxctl tool.

$ PX_POD=$(kubectl get pods -l name=portworx -n kube-system -o jsonpath='{.items[0].metadata.name}')
$ VOL=`kubectl get pvc | grep es-cluster-0 | awk '{print $3}'`
$ kubectl exec -it $PX_POD -n kube-system -- /opt/pwx/bin/pxctl volume inspect ${VOL}
Volume	:  29349030710724527
	Name            	 :  pvc-f344425d-31e7-11e9-930d-4e511e6b17c9
	Size            	 :  10 GiB
	Format          	 :  ext4
	HA              	 :  3
	IO Priority     	 :  HIGH
	Creation time   	 :  Feb 16 12:39:50 UTC 2019
	Shared          	 :  no
	Status          	 :  up
	State           	 :  Attached: 6ab3face-615f-4cc7-bcfa-a1872d006e34 (10.185.22.29)
	Device Path     	 :  /dev/pxd/pxd29349030710724527
	Labels          	 :  namespace=default,pvc=data-es-cluster-0
	Reads           	 :  13
	Reads MS        	 :  32
	Bytes Read      	 :  53248
	Writes          	 :  199
	Writes MS       	 :  60
	Bytes Written   	 :  168095744
	IOs in progress 	 :  0
	Bytes used      	 :  2.9 MiB
	Replica sets on nodes:
		Set 0
		  Node 		 : 10.73.90.131 (Pool 0)
		  Node 		 : 10.177.26.18 (Pool 0)
		  Node 		 : 10.185.22.29 (Pool 0)
	Replication Status	 :  Up
	Volume consumers	 :
		- Name           : es-cluster-0 (f34649fa-31e7-11e9-930d-4e511e6b17c9) (Pod)
		  Namespace      : default
		  Running on     : 10.185.22.29
		  Controlled by  : es-cluster (StatefulSet)

Next, let’s port-forward the REST endpoint of the first Elasticsearch pod and query it to verify that the cluster is up.

$ kubectl port-forward es-cluster-0 9200:9200 &
[1] 18200
$ curl localhost:9200
{
  "name" : "es-cluster-0",
  "cluster_name" : "px-elk-demo",
  "cluster_uuid" : "6H-8Dn_FSo-PuWz2_5tNYQ",
  "version" : {
    "number" : "6.4.3",
    "build_flavor" : "oss",
    "build_type" : "tar",
    "build_hash" : "fe40335",
    "build_date" : "2018-10-30T23:17:19.084789Z",
    "build_snapshot" : false,
    "lucene_version" : "7.4.0",
    "minimum_wire_compatibility_version" : "5.6.0",
    "minimum_index_compatibility_version" : "5.0.0"
  },
  "tagline" : "You Know, for Search"
}

Let’s get the count of the nodes.

$ curl -s localhost:9200/_nodes | jq ._nodes
{
  "total": 3,
  "successful": 3,
  "failed": 0
}
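
Another quick sanity check is the cluster health endpoint, which reports the number of nodes and the shard allocation state:

$ curl -s 'localhost:9200/_cluster/health?pretty'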

Deploying Kibana on IKS

Kibana exposes a port for accessing the UI. Let’s start by creating the service.

cat > px-kibana-svc.yaml << EOF
apiVersion: v1
kind: Service
metadata:
  name: kibana
  labels:
    app: kibana
spec:
  ports:
  - port: 5601
  selector:
    app: kibana
EOF
$ kubectl create -f px-kibana-svc.yaml
service/kibana created

Create the Kibana deployment with the following YAML file.

cat > px-kibana-app.yaml << EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kibana
  labels:
    app: kibana
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kibana
  template:
    metadata:
      labels:
        app: kibana
    spec:
      containers:
      - name: kibana
        image: docker.elastic.co/kibana/kibana-oss:6.4.3
        resources:
          limits:
            cpu: 1000m
          requests:
            cpu: 100m
        env:
          - name: ELASTICSEARCH_URL
            value: http://elasticsearch:9200
        ports:
        - containerPort: 5601
EOF
$ kubectl create -f px-kibana-app.yaml
deployment.apps/kibana created
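
Before moving on, you can wait for the Deployment rollout to complete:

$ kubectl rollout status deployment/kibana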

We can verify the Kibana installation by accessing the UI from a browser. Before that, let’s forward the Kibana port to our development machine. Once that is done, you can access the UI at http://localhost:5601.

$ KIBANA_POD=$(kubectl get pods -l app=kibana -o jsonpath='{.items[0].metadata.name}')
$ kubectl port-forward $KIBANA_POD 5601:5601 &
[1] 40162

Ingesting data into Elasticsearch through Logstash

Now we are ready to ingest data into Elasticsearch through Logstash. For this, we will use the Logstash Docker image running on your development machine.

Let’s get some sample data from one of Elastic’s GitHub repositories.

Create a directory and fetch the dataset into that. Uncompress the dataset with the gzip utility.

$ mkdir logstash && cd logstash
$ wget https://github.com/elastic/elk-index-size-tests/raw/master/logs.gz
--2018-12-10 10:34:06--  https://github.com/elastic/elk-index-size-tests/raw/master/logs.gz
Resolving github.com (github.com)... 192.30.253.113, 192.30.253.112
Connecting to github.com (github.com)|192.30.253.113|:443... connected.
HTTP request sent, awaiting response... 302 Found
Location: https://raw.githubusercontent.com/elastic/elk-index-size-tests/master/logs.gz [following]
--2018-12-10 10:34:08--  https://raw.githubusercontent.com/elastic/elk-index-size-tests/master/logs.gz
Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 151.101.0.133, 151.101.64.133, 151.101.128.133, ...
Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|151.101.0.133|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 6632680 (6.3M) [application/octet-stream]
Saving to: 'logs.gz'

logs.gz                               100%[=======================================================================>]   6.33M  9.13MB/s    in 0.7s

2018-12-10 10:34:09 (9.13 MB/s) - 'logs.gz' saved [6632680/6632680]
$ gzip -d logs.gz

Logstash needs a configuration file that points the agent to the source log file and the target Elasticsearch cluster.

Create the below configuration file in the same directory.

$ cat > logstash.conf << EOF
input
{
	file {
		path => "/data/logs"
		type => "logs"
		start_position => "beginning"
	}
}

filter
{
	grok{
		match => {
			"message" => "%{COMBINEDAPACHELOG}"
		}
	}
	mutate{
		convert => { "bytes" => "integer" }
	}
	date {
		match => [ "timestamp", "dd/MMM/YYYY:HH:mm:ss Z" ]
		locale => en
		remove_field => "timestamp"
	}
	geoip {
		source => "clientip"
	}
	useragent {
		source => "agent"
		target => "useragent"
	}
}


output
{
	stdout {
		codec => dots
	}

 	elasticsearch {
		hosts => [ "docker.for.mac.localhost:9200" ]
 	}

}
EOF

Notice how Logstash talks to Elasticsearch from within the Docker container. The alias docker.for.mac.localhost maps to the host on which the Docker VM is running. If you are running it on a Windows machine, use docker.for.win.localhost instead.

With the sample log and configuration files in place, let’s launch the Docker container. We are passing an environment variable, running the container in host networking mode, and mounting the ./logstash directory as /data within the container.

Navigate back to the parent directory and launch the Logstash Docker container.

$ cd ..
$ docker run --rm -it --network host \
>	-e XPACK_MONITORING_ENABLED=FALSE \
>	-v $PWD/logstash:/data docker.elastic.co/logstash/logstash:6.5.1 \
>	/usr/share/logstash/bin/logstash -f /data/logstash.conf

After a few seconds, the agent starts streaming the log file to the Elasticsearch cluster.

Switch to the browser to access the Kibana dashboard. Create the index pattern for Logstash by clicking on the Management tab and choosing @timestamp as the time filter field.

Click on the Discover tab, open the time picker, and select Last 5 Years as the range. You should see the Apache logs in the dashboard.

In a few minutes, the Logstash agent running in the Docker container will ingest all the data.
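
To confirm from the command line that the documents have landed in Elasticsearch, you can query the count API; this assumes the index follows the default logstash-* naming pattern:

$ curl -s 'http://localhost:9200/logstash-*/_count?pretty'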

Failing over an Elasticsearch pod on Kubernetes

When one of the nodes running an Elasticsearch pod goes down, the pod will automatically get scheduled on another node with the same PVC backing it.

We will simulate the failover by cordoning off one of the nodes and deleting the Elasticsearch pod running on it. When the new pod is created, it has the same number of documents as the original pod.

First, let’s get the count of the documents indexed and stored on node es-cluster-0. We can access this by calling the HTTP endpoint of the node.

$ EL_NODE_NAME=`curl -s 'http://localhost:9200/_nodes/es-cluster-0/stats/indices?pretty=true' | jq -r '.nodes | keys[0]'`

$ curl -s 'http://localhost:9200/_nodes/es-cluster-0/stats/indices?pretty=true' | jq .nodes.${EL_NODE_NAME}.indices.docs.count
140770

Let’s get the node name where the first Elasticsearch pod is running.

$ NODE=`kubectl get pods es-cluster-0 -o json | jq -r .spec.nodeName`

Now, let’s simulate the node failure by cordoning off the Kubernetes node.

$ kubectl cordon ${NODE}
node/10.185.22.29 cordoned

The above command disabled scheduling on one of the nodes.

$ kubectl get nodes
NAME           STATUS                     ROLES    AGE   VERSION
10.177.26.18   Ready,SchedulingDisabled   <none>   8d    v1.13.2+IKS
10.185.22.14   Ready                      <none>   8d    v1.13.2+IKS
10.185.22.29   Ready,SchedulingDisabled   <none>   15h   v1.13.2+IKS
10.73.90.131   Ready                      <none>   8d    v1.13.2+IKS

Let’s go ahead and delete the pod es-cluster-0 running on the node that is cordoned off.

$ kubectl delete pod es-cluster-0
pod "es-cluster-0" deleted

The Kubernetes controller now tries to create the pod on a different node.

$ kubectl get pods
NAME                     READY   STATUS     RESTARTS   AGE
es-cluster-0             0/1     Init:0/3   0          4s
es-cluster-1             1/1     Running    0          9m37s
es-cluster-2             1/1     Running    0          9m20s
kibana-87b7b8cdd-sw45v   1/1     Running    0          4m25s

Wait for the pod to finish initializing and reach the Running state on the new node.

$ kubectl get pods 
NAME                     READY   STATUS     RESTARTS   AGE
es-cluster-0             0/1     Init:0/3   0          42s
es-cluster-1             1/1     Running    0          10m
es-cluster-2             1/1     Running    0          9m58s
kibana-87b7b8cdd-sw45v   1/1     Running    0          5m3s

Finally, let’s verify that the data is still available.

$ EL_NODE_NAME=`curl -s 'http://localhost:9200/_nodes/es-cluster-0/stats/indices?pretty=true' | jq -r '.nodes | keys[0]'`

$ curl -s 'http://localhost:9200/_nodes/es-cluster-0/stats/indices?pretty=true' | jq .nodes.${EL_NODE_NAME}.indices.docs.count
140770

The matching document count confirms that the new pod is backed by the same persistent volume.
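
Once the failover has been verified, you may want to make the cordoned node schedulable again so it can host pods for the rest of the walkthrough (this assumes the ${NODE} variable still holds the node name captured earlier):

$ kubectl uncordon ${NODE}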

Capturing Application Consistent Snapshots to Restore Data

Portworx enables storage admins to perform backup and restore operations through snapshots. 3DSnap is a feature for capturing application consistent snapshots from multiple nodes of a database cluster. This is highly recommended when running a multi-node Elasticsearch cluster as a Kubernetes StatefulSet. 3DSnap creates a snapshot on each node of the cluster, which ensures that the state is accurately captured from the distributed cluster.

3DSnap allows administrators to execute commands just before taking the snapshot and right after the snapshot is taken. These triggers ensure that the data is fully committed to disk before the snapshot is created. Similarly, it is possible to run a workload-specific command to refresh or force a sync immediately after restoring the snapshot.

This section will walk you through the steps involved in creating and restoring a 3DSnap for the Elasticsearch StatefulSet.

Creating a 3DSnap

It’s a good idea to flush the data to the disk before initiating the snapshot creation. This is defined through a rule, which is a Custom Resource Definition created by Stork, a Kubernetes scheduler extender and Operator created by Portworx.

$ cat > px-elastic-rule.yaml << EOF
apiVersion: stork.libopenstorage.org/v1alpha1
kind: Rule
metadata:
  name: px-elastic-rule
spec:
  - podSelector:
      app: elasticsearch
    actions:
    - type: command
      value: curl -s 'http://localhost:9200/_all/_flush'
EOF

Create the rule from the above YAML file.

$ kubectl create -f px-elastic-rule.yaml
rule.stork.libopenstorage.org "px-elastic-rule" created

We will now initiate a 3DSnap task to backup all the PVCs associated with the Elasticsearch pods belonging to the StatefulSet.

$ cat > px-elastic-snap.yaml << EOF
apiVersion: volumesnapshot.external-storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: elastic-3d-snapshot
  annotations:
    portworx.selector/app: elasticsearch
    stork.rule/pre-snapshot: px-elastic-rule
spec:
  persistentVolumeClaimName: data-es-cluster-0
EOF
$ kubectl create -f px-elastic-snap.yaml
volumesnapshot.volumesnapshot.external-storage.k8s.io "elastic-3d-snapshot" created

Let’s now verify that the snapshot creation is successful.

$ kubectl get volumesnapshot
NAME                                                                         AGE
elastic-3d-snapshot                                                          21s
elastic-3d-snapshot-data-es-cluster-0-1e664b65-2189-11e9-865e-de77e24fecce   11s
elastic-3d-snapshot-data-es-cluster-1-1e664b65-2189-11e9-865e-de77e24fecce   12s
elastic-3d-snapshot-data-es-cluster-2-1e664b65-2189-11e9-865e-de77e24fecce   11s
$ kubectl get volumesnapshotdatas
NAME                                                                         AGE
elastic-3d-snapshot-data-es-cluster-0-1e664b65-2189-11e9-865e-de77e24fecce   35s
elastic-3d-snapshot-data-es-cluster-1-1e664b65-2189-11e9-865e-de77e24fecce   36s
elastic-3d-snapshot-data-es-cluster-2-1e664b65-2189-11e9-865e-de77e24fecce   35s
k8s-volume-snapshot-24636a1c-2189-11e9-b33a-4a3867a50193                     35s

Restoring from a 3DSnap

Let’s now restore from the 3DSnap. Before that, we will simulate the crash by deleting the StatefulSet and associated PVCs.

$ kubectl delete sts es-cluster
statefulset.apps "es-cluster" deleted
$ kubectl delete pvc -l app=elasticsearch
persistentvolumeclaim "data-es-cluster-0" deleted
persistentvolumeclaim "data-es-cluster-1" deleted
persistentvolumeclaim "data-es-cluster-2" deleted

Now our Kubernetes cluster has no Elasticsearch instance running. Let’s go ahead and restore the data from the snapshot before relaunching the StatefulSet.

We will now create three Persistent Volume Claims (PVCs) from the existing 3DSnap with exactly the same names that the StatefulSet expects. When the pods are created as part of the StatefulSet, they point to the existing PVCs, which are already populated with the data restored from the snapshots.

Let’s create three PVCs from the 3DSnap snapshots. Notice how the annotation points to the snapshot in each PVC manifest.

$ cat > px-elastic-pvc-0.yaml << EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-es-cluster-0
  labels:
     app: elasticsearch  
  annotations:
    snapshot.alpha.kubernetes.io/snapshot: "elastic-3d-snapshot-data-es-cluster-0-1e664b65-2189-11e9-865e-de77e24fecce"
spec:
  accessModes:
     - ReadWriteOnce
  storageClassName: stork-snapshot-sc
  resources:
    requests:
      storage: 5Gi
EOF
$ cat > px-elastic-pvc-1.yaml << EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-es-cluster-1
  labels:
     app: elasticsearch  
  annotations:
    snapshot.alpha.kubernetes.io/snapshot: "elastic-3d-snapshot-data-es-cluster-1-1e664b65-2189-11e9-865e-de77e24fecce"
spec:
  accessModes:
     - ReadWriteOnce
  storageClassName: stork-snapshot-sc
  resources:
    requests:
      storage: 5Gi
EOF
$ cat > px-elastic-pvc-2.yaml << EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-es-cluster-2
  labels:
     app: elasticsearch  
  annotations:
    snapshot.alpha.kubernetes.io/snapshot: "elastic-3d-snapshot-data-es-cluster-2-1e664b65-2189-11e9-865e-de77e24fecce"
spec:
  accessModes:
     - ReadWriteOnce
  storageClassName: stork-snapshot-sc
  resources:
    requests:
      storage: 5Gi
EOF

Create the PVCs from the above definitions.

$ kubectl create -f px-elastic-pvc-0.yaml
persistentvolumeclaim "data-es-cluster-0" created

$ kubectl create -f px-elastic-pvc-1.yaml
persistentvolumeclaim "data-es-cluster-1" created

$ kubectl create -f px-elastic-pvc-2.yaml
persistentvolumeclaim "data-es-cluster-2" created

Verify that the new PVCs are ready and bound.

$ kubectl get pvc
NAME                STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS        AGE
data-es-cluster-0   Bound    pvc-31389fa1-218a-11e9-865e-de77e24fecce   5Gi        RWO            stork-snapshot-sc   12s
data-es-cluster-1   Bound    pvc-3319c230-218a-11e9-865e-de77e24fecce   5Gi        RWO            stork-snapshot-sc   9s
data-es-cluster-2   Bound    pvc-351bb0b6-218a-11e9-865e-de77e24fecce   5Gi        RWO            stork-snapshot-sc   5s

With the PVCs in place, we are ready to launch the StatefulSet with no changes to the YAML file. Everything remains exactly the same while the data is already restored from the snapshots.

$ kubectl create -f px-elastic-app.yaml
statefulset.apps "es-cluster" created

Check the data through a curl request sent to one of the Elasticsearch pods.

$ kubectl port-forward es-cluster-0 9200:9200 &
$ EL_NODE_NAME=`curl -s 'http://localhost:9200/_nodes/es-cluster-0/stats/indices?pretty=true'| jq -r '.nodes | keys[0]'`
$ curl -s 'http://localhost:9200/_nodes/es-cluster-0/stats/indices?pretty=true' | jq .nodes.${EL_NODE_NAME}.indices.docs.count
140770

Congratulations! You have successfully restored an application consistent snapshot for Elasticsearch.

Summary

Portworx can be easily deployed on IBM Cloud Kubernetes Service to run stateful workloads in production. Through the integration of STORK, DevOps and StorageOps teams can seamlessly run highly available database clusters in IKS. They can perform traditional operations such as volume expansion, backup, and recovery for the cloud native applications in an automated and efficient manner.
