
With SUSECON 2024 in the rearview mirror, Portworx by Pure Storage showcased how to unlock the value of Kubernetes data at enterprise scale on Rancher by SUSE. Organizations are not only increasingly adopting Kubernetes for container orchestration, they are also capitalizing on the scalability and flexibility of cloud native architecture by building mission-critical applications in their cloud native environments.

Mission-critical applications have enterprise-grade needs. As their Rancher environments scale, platform teams need a trusted technology partner to manage their increasingly complex data management needs. Portworx empowers Rancher users to automate, protect, and unify their Rancher clusters at enterprise scale.

In this blog, we will dive into practical demonstrations of how Portworx simplifies and optimizes key day-2 operations for Rancher users: optimizing storage performance for Rancher clusters with automated capacity management, controlling application IO, and ensuring enterprise-grade business continuity through high availability and asynchronous disaster recovery.

Portworx Disaster Recovery

In this demo, I will be showing Portworx Disaster Recovery.

Portworx Disaster Recovery synchronizes data by way of an object store and migrates the associated Kubernetes resources (services, deployments, etc.) using STORK (Storage Operator Runtime for Kubernetes). This demo covers the basic setup as well as the failover of our Portworx BBQ application.

I will have some detailed information after the video.

I showed the clusterPair object in the above video, which contains all of the connection information for our destination cluster. It is possible to create a manifest and apply it using kubectl, but for the above demo I used storkctl instead.

Our first step is to get the storkctl binary from our installation:

STORK_POD=$(kubectl get pods -n portworx -l name=stork -o jsonpath='{.items[0].metadata.name}')
kubectl cp -n portworx $STORK_POD:storkctl/linux/storkctl ${BINARY_DIR}/storkctl  --retries=10
chmod +x ${BINARY_DIR}/storkctl

Note that you will need to change ${BINARY_DIR} to the location on your filesystem that you want to install the utility.

Next, we can pair our clusters with the following storkctl command:

./storkctl create clusterpair demo \
    --dest-kube-file $DEST_KUBECONFIG \
    --src-kube-file $SRC_KUBECONFIG \
    --dest-ep $DST_PORTWORX_API \
    --src-ep $SRC_PORTWORX_API \
    --namespace portworx \
    --provider s3 \
    --s3-endpoint $S3_ENDPOINT \
    --s3-access-key $S3_ACCESS_KEY \
    --s3-secret-key $S3_SECRET_KEY \
    --s3-region $S3_REGION \
    --disable-ssl

The above will create a clusterPair called ‘demo’ in both the source and destination clusters (so we are ready for a failback!).
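Before creating a migration schedule, it is worth confirming that the pair is healthy. A quick check, using the storkctl binary copied out earlier; both the storage and scheduler status should report Ready:

./storkctl get clusterpair -n portworx
kubectl get clusterpair demo -n portworx -o yaml    # full object, useful if the status is not Ready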

One quick note: as a security feature, a migrationSchedule object can only migrate the namespace it lives in unless it is created in a designated admin namespace. Because I am the administrator of this Rancher cluster, I want to be able to migrate as many namespaces as I want with a single migrationSchedule manifest. For this to work, we need to specify the ‘portworx’ namespace as the admin namespace. To do this, run the following one-liner:

kubectl -n portworx patch StorageCluster $MYPXCLUSTERNAME --type='merge' -p '{"spec":{"stork":{"args":{"admin-namespace":"portworx"}}}}'

You will need to substitute $MYPXCLUSTERNAME with the name of your Portworx cluster.
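If you are not sure of the cluster name, one way to look it up is to query the StorageCluster object directly. This assumes a single StorageCluster in the portworx namespace, as in this demo:

MYPXCLUSTERNAME=$(kubectl -n portworx get storagecluster -o jsonpath='{.items[0].metadata.name}')
echo $MYPXCLUSTERNAME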

Now we are ready to start our DR replication:

cat << EOF | kubectl apply -f -
apiVersion: stork.libopenstorage.org/v1alpha1
kind: MigrationSchedule
metadata:
  name: asyncdr-schedule
  namespace: portworx
spec:
  template:
    spec:
      clusterPair: demo
      includeResources: true
      startApplications: false
      includeVolumes: true
      namespaces:
      - pxbbq
  schedulePolicyName: default-interval-policy
  suspend: false
  autoSuspend: true
EOF

Finally, in the event of a disaster, we can start our PX BBQ application by running the following:

storkctl activate migration -n pxbbq

That’s it! We just started our app in the DR cluster.
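A couple of commands I find handy around a failover; the first runs against the source cluster, where the schedule creates its Migration objects, and the second confirms the application is up on the DR cluster after activation:

# check that scheduled migrations are completing successfully (source cluster)
./storkctl get migrations -n portworx

# confirm the Portworx BBQ pods are running in the DR cluster after activation
kubectl get pods -n pxbbq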

Portworx Autopilot

Autopilot is a feature that allows Kubernetes administrators to set rules that grow PVCs when a usage threshold is met. Autopilot also allows an administrator to grow the back-end storage pool using cloud drives.
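This post sticks to volume resizing, but for reference, a pool-expansion rule uses the same AutopilotRule object with a storage pool action. The sketch below is based on the examples in the Autopilot documentation; treat the metric and parameter names as assumptions and confirm them against the docs for your Portworx version:

apiVersion: autopilot.libopenstorage.org/v1alpha1
kind: AutopilotRule
metadata:
  name: pool-expand
spec:
  ##### conditions: trigger when a pool has less than 30% free space
  conditions:
    expressions:
    - key: "100 * (px_pool_stats_available_bytes / px_pool_stats_total_bytes)"
      operator: Lt
      values:
        - "30"
  ##### action: grow the backing pool by 50%, adding cloud drives as needed
  actions:
  - name: "openstorage.io.action.storagepool/expand"
    params:
      scalepercentage: "50"
      scaletype: "add-disk"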

Building demos can always be a challenging experience, but this is the first time I needed to build demos that were destined to be played on monitors at a conference without sound. I owe a lot to my co-worker Erik Shanks for taking the time to review these and add extra feedback!

Here is the demo playing at the SUSECON booth:

 

Here is the Autopilot rule I used in the video:

apiVersion: autopilot.libopenstorage.org/v1alpha1
kind: AutopilotRule
metadata:
  name: volume-resize
spec:
  ##### selector filters the objects affected by this rule given labels
  selector:
    matchLabels:
      app: disk-filler
  ##### namespaceSelector selects the namespaces of the objects affected by this rule
  namespaceSelector:
    matchLabels:
      type: db
  ##### conditions are the symptoms to evaluate. All conditions are AND'ed
  conditions:
    # trigger when volume usage is greater than 30%
    expressions:
    - key: "100 * (px_volume_usage_bytes / px_volume_capacity_bytes)"
      operator: Gt
      values:
        - "30"
  ##### action to perform when the condition is true
  actions:
  - name: openstorage.io.action.volume/resize
    params:
      # resize volume by scalepercentage of current size
      scalepercentage: "100"
      # volume capacity should not exceed 20GiB
      maxsize: "20Gi"

Autopilot rules have a number of configuration options:

  • selector: target PVCs with the Kubernetes label app: disk-filler
  • namespaceSelector: target PVCs in namespaces carrying the label type: db
  • conditions: watch for volume capacity usage growing above 30%
  • actions: automatically grow the volume and underlying filesystem by 100% of the current volume size whenever the condition is met, capped at a maxsize of 20Gi

 

For those who want to fill their own disks to test the above rule, here are the namespace, PVC, and Pod I used:

apiVersion: v1
kind: Namespace
metadata:
  name: autopilot
  labels:
    type: db
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: data
  # created in the labeled namespace so the rule's namespaceSelector matches
  namespace: autopilot
  labels:
    app: disk-filler
spec:
  storageClassName: px-csi-db
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: disk-filler
  namespace: autopilot
  labels:
    app: disk-filler
spec:
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: data
  terminationGracePeriodSeconds: 5
  containers:
  - image: busybox
    imagePullPolicy: Always
    name: busybox
    volumeMounts:
    - name: data
      mountPath: "/mnt"
    command:
      - sh
    args:
      - -c
      # write 8 x 1GiB of random data to push volume usage past the 30% threshold
      - |
        i=0
        until [ $i -eq 8 ] ; do
          dd if=/dev/urandom of=/mnt/sample-$i.txt bs=1G count=1 iflag=fullblock
          i=$((i + 1))
          sleep 1
        done
        exit
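Once the filler pod starts writing, you can watch Autopilot react. The commands below are one way to follow along; the PVC name and namespace come from the manifests above, and the event query is a general kubectl pattern whose exact event source may vary by Portworx version:

# watch the PVC capacity grow as the rule fires
kubectl get pvc data -n autopilot -w

# follow the rule's state transitions through the events Autopilot emits
kubectl get events --all-namespaces --sort-by .lastTimestamp \
  --field-selector involvedObject.kind=AutopilotRule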

Portworx Application IO Control Demo

Application IO Control is a Portworx feature that allows an administrator to throttle IO from persistent volumes by specifying IO and bandwidth limits.

In this demo, I talk through a couple of options for implementing Application IO Control.

As I mentioned in the demo, it is possible to set IO limits in the StorageClass. Here is the StorageClass I showed in the video:

allowVolumeExpansion: true
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: px-csi-db
parameters:
  io_profile: db_remote
  repl: "3"
  io_throttle_rd_iops: "750"
  io_throttle_wr_iops: "750"
  #  io_throttle_rd_bw: "10"
  #  io_throttle_wr_bw: "10"
provisioner: pxd.portworx.com
reclaimPolicy: Delete
volumeBindingMode: Immediate

It is important to note that we cannot throttle IOPS and bandwidth for the same operation (e.g. reads). The bandwidth throttle is measured in MBps.

See this link for more details: https://docs.portworx.com/portworx-enterprise/platform/openshift/ocp-gcp/operations/storage-operations/io-throttling.html

We can also throttle using pxctl.

The syntax of the command is: pxctl volume update --max_iops 750,750 ${VolumeName}

This will set a 750 read and write IOPS limit on the volume ${VolumeName}. This can be done after the volume is provisioned, which I find useful for throttling applications that you notice are running away with all of your storage performance.
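To find the volume name for an existing claim, note that a CSI-provisioned Portworx volume is typically named after the bound PV (the pvc-<uid> string). A hedged example, reusing the PVC from the Autopilot section as a stand-in for whatever claim you want to throttle:

# look up the Portworx volume name behind a PVC
VolumeName=$(kubectl get pvc data -n autopilot -o jsonpath='{.spec.volumeName}')

# apply a 750 read / 750 write IOPS limit to that volume
pxctl volume update --max_iops 750,750 ${VolumeName}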

For those who want to test out the Grafana dashboards I was using to monitor storage performance, check out this page of our documentation. It goes into a lot of detail on how to configure monitoring in a Rancher environment.

Portworx High Availability

Portworx High Availability is a fundamental Portworx capability: it keeps replicated copies of volume data across nodes so that Kubernetes pods with PVCs attached can be restarted on a surviving node after a failure.

See HA in action in our recent demo for SUSECON and read on after the video for some technical tidbits.

High Availability in Portworx is governed by two components.

First is the storageClass replication factor parameter, which tells Portworx how many copies of your data to keep. The placement of these copies can be governed by availability zone awareness, a helpful feature for public cloud providers or datacenters with a pod architecture. This resiliency is handled within the Kubernetes layer and is independent of the underlying disk type, which means Portworx will protect your application whether it is running on local disk or even through a public cloud availability zone failure.
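If you want to see where those copies live, pxctl can show the replication level and the nodes holding each replica set. A quick example; the volume name is a placeholder and output details vary by version:

# list Portworx volumes along with their HA (replication) level
pxctl volume list

# inspect a single volume; the output includes the replica sets on nodes
pxctl volume inspect ${VolumeName}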

Second is STORK (Storage Operator Runtime for Kubernetes). STORK is a scheduler extension that adds storage-aware placement rules to pods as they are scheduled. For example, STORK can ensure that pods are placed on nodes that already hold a copy of their data.
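Depending on how Portworx was installed, STORK may already be handling scheduling for pods that use Portworx volumes; if not, a pod (or the pod template in a Deployment or StatefulSet) can opt in explicitly. A minimal sketch, assuming STORK runs under its default scheduler name of stork; the pod name, image, and claim name are placeholders:

apiVersion: v1
kind: Pod
metadata:
  name: pxbbq-web          # placeholder pod name
spec:
  schedulerName: stork     # ask Kubernetes to schedule this pod with STORK
  containers:
  - name: web
    image: nginx           # placeholder image
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: data      # placeholder PVC backed by a Portworx volume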

Check out our other SUSECON Demos:

CD Pipelines with Portworx and Harness
