2023 was the year of scale for customers using Portworx Enterprise for container data management. Platform engineering teams notably broadened their deployments, taking substantial strides toward multi-tenant application development platforms on hybrid infrastructure. Recognized as a leader in container data management by IDC, Portworx Enterprise attracted customers running mission-critical applications on Kubernetes, guided by three fundamental principles in their architectural decisions: application considerations (top-down vs. bottom-up), scalability, and flexibility.
In the last 12 months, there has been a remarkable leap in scale, evident not only in the size of deployments (measured by the number of clusters and nodes), but also in heightened application I/O and storage consumption driven by larger and more diverse workloads.
Performance Enhancements for Databases
Databases stand out as a crucial workload for many customers constructing Database-as-a-Service platforms on their chosen infrastructure. Portworx Data Services (PDS) has garnered significant interest, with customers drawn to the straightforward deployment of single-click database setups supporting popular data services. In our collaboration with clients, we’ve meticulously examined I/O patterns, identifying optimization opportunities through I/O profiles for Portworx volumes.
We are excited to announce a new “journal” I/O profile in Portworx Enterprise 3.1.0. Database workloads send a large number of flush requests to the backend storage device. The typical workflow for a database like PostgreSQL is to issue a few asynchronous writes followed by an fsync, and the application sends no new writes until that flush has been acknowledged. A flush request is expensive because it waits for all previous writes to complete; when it happens frequently, it degrades the overall I/O bandwidth the backend pool can provide. The journal profile addresses this by logging each modify operation, along with its data, to the raw NVMe device and acknowledging the write immediately; a subsequent flush can then be acknowledged without flushing data down to the backend pool. This amortizes the cost of the flush across all the writes, bringing performance closer to that of a native raw disk. We anticipate significant performance gains for specific configurations; you can see more details and testing results in this blog. Be sure to review the prerequisites to leverage this new capability.
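To make this concrete, here is a minimal sketch (not an official example) of how a database StorageClass could request the new profile, using the Python kubernetes client. The provisioner name pxd.portworx.com and the io_profile value "journal" are assumptions based on existing Portworx StorageClass conventions; confirm the exact parameter names and prerequisites in the 3.1.0 documentation.

```python
# Minimal sketch: create a StorageClass that requests the new "journal"
# I/O profile for Portworx volumes. The provisioner name and the
# "io_profile" parameter value are assumptions -- verify them against
# the Portworx 3.1.0 documentation before use.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() when running in-cluster

journal_sc = client.V1StorageClass(
    api_version="storage.k8s.io/v1",
    kind="StorageClass",
    metadata=client.V1ObjectMeta(name="px-db-journal"),
    provisioner="pxd.portworx.com",        # assumed Portworx CSI provisioner name
    parameters={
        "repl": "2",              # two replicas for fault tolerance
        "io_profile": "journal",  # assumed value selecting the new journal profile
    },
    allow_volume_expansion=True,
)

client.StorageV1Api().create_storage_class(body=journal_sc)
print("Created StorageClass px-db-journal")
```

PVCs provisioned from a class like this would then run with the journal profile, which is aimed at fsync-heavy databases such as PostgreSQL.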
Stability and Pure Storage Integration Improvements
While scale and performance have been at the forefront, stability and integrations remain key dimensions as well. Portworx continues to expand on that front with support for new Kubernetes distributions such as Mirantis and Charmed Kubernetes. We have worked with these partners, and certified versions will be reflected in our documentation soon.
On the storage infrastructure front, this release adds another key feature to Pure Storage FlashArray support. We are enabling “raw block access” for RWO volumes on FlashArray as an early-access feature for workloads that require direct access to a block device without a filesystem layer. Building on the optimizations released in the 3.0.x updates, this release also improves performance on the FlashArray and FlashBlade REST path, reducing the number of API calls by up to 75% under heavy load. This significantly reduces the average time taken to schedule FlashArray Direct Access volume pods on heavily loaded clusters at scale, thereby reducing the strain on the backend storage device.
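As a rough illustration of consuming such a volume, the sketch below requests a PVC with volumeMode set to Block so the pod receives a raw device instead of a mounted filesystem. The StorageClass name px-fa-direct-access is a hypothetical placeholder for your FlashArray Direct Access class, and the early-access feature itself may require enablement steps described in the Portworx documentation.

```python
# Minimal sketch: request a raw block RWO volume via a PVC with
# volumeMode=Block. "px-fa-direct-access" is a hypothetical StorageClass
# name standing in for your FlashArray Direct Access class.
from kubernetes import client, config

config.load_kube_config()

raw_block_pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="fa-raw-block-pvc"),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteOnce"],
        volume_mode="Block",                       # raw device, no filesystem layer
        storage_class_name="px-fa-direct-access",  # placeholder class name
        resources=client.V1ResourceRequirements(requests={"storage": "100Gi"}),
    ),
)

client.CoreV1Api().create_namespaced_persistent_volume_claim(
    namespace="default", body=raw_block_pvc
)
print("Created PVC fa-raw-block-pvc")
```

A pod then consumes the claim through volumeDevices (a devicePath) rather than volumeMounts.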
Control Plane Enhancements
Volume placement strategy (VPS) is one of the most used features in Portworx Enterprise. Using a set of rules in the CRD specification, users can specify where volumes and their replicas are placed, subject to affinity and anti-affinity parameters, for increased fault tolerance. In 3.1.0, this feature has been expanded to allow users to create dynamic VPS specs using PVC labels, giving greater flexibility in volume placement (see the sketch after this list). Other key enhancements on the control plane include:
- Pool expansion now allowed for all CSI providers
- New StorageClass definition that can be used for ReadWriteOnce volumes only
- Pool deletion supported for vSphere
- Improved CSI volume creation time
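For reference, below is a minimal sketch of a VolumePlacementStrategy spec, built in Python and emitted as YAML so it can be piped to kubectl apply -f -. The replica anti-affinity rule spreads replicas across zones; the field names follow the portworx.io/v1beta2 CRD as we understand it, and the new PVC-label-driven (dynamic) syntax added in 3.1.0 is not shown here, so consult the documentation for its exact form.

```python
# Minimal sketch of a VolumePlacementStrategy, printed as YAML for
# `kubectl apply -f -`. Field names are based on the portworx.io/v1beta2
# CRD and should be verified against the Portworx documentation.
import yaml

vps = {
    "apiVersion": "portworx.io/v1beta2",
    "kind": "VolumePlacementStrategy",
    "metadata": {"name": "spread-replicas-across-zones"},
    "spec": {
        "replicaAntiAffinity": [
            {
                # keep replicas of a volume in different zones
                "enforcement": "required",
                "topologyKey": "topology.kubernetes.io/zone",
            }
        ]
    },
}

print(yaml.safe_dump(vps, sort_keys=False))
```

A StorageClass then references the strategy by name so that volumes provisioned from it inherit these placement rules.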
Licensing and Support Changes
Portworx 3.1.0 has one key change to licensing. Starting with this version, the PX-CSI license will only allow Direct Access volumes and not Portworx virtual volumes. This impacts only FlashArray and FlashBlade customers using PX-CSI, and existing applications should not be affected upon upgrade. Please reach out to support if you have any questions about this change.
Finally, the 3.1.0 release:
- Expands the matrix of Kubernetes versions and OS/Kernel support
- Includes several bug fixes and enhancements driven from customer feedback
- Will be our next extended maintenance release, as detailed in our release lifecycle policy
In the fast-evolving Kubernetes ecosystem, we encourage customers to move to this latest version as part of their next upgrade to stay current on functionality, stability, and security.
Prashant Rathi
Director of Products