In building infrastructure for large data centers over the past 15 years, I’ve observed that:

Infrastructure provisioning today is done distinctly and separately from application provisioning. For example, storage is provisioned first, LUNs are carved out and attached to a host or VM, and only then is the application deployed. This complicates deployment scripts, necessitates infrastructure orchestration tools such as Chef and Puppet, and creates a static environment where “adds, moves and changes” to the infrastructure are complicated.

If an application needs storage capacity, it should be able to self-describe the capacity it needs and change those requirements dynamically and programmatically. Provisioning should not be ticket-based or require people in the loop. With Docker, an application’s environment, its libraries and dependencies, is self-described and self-provisioned. So why stop there? Why not go all the way and self-describe the runtime resource and infrastructure requirements as well?
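To make this concrete, here is a rough sketch of what self-described storage can look like using Docker's volume-driver plugin mechanism. The driver name (`pxd`) and the option keys (`size`, `repl`) are illustrative assumptions, not any specific product's API:

```shell
# The application (or its deploy script) declares the storage it needs,
# and a volume-driver plugin provisions it on demand -- no ticket filed,
# no LUN carved out by hand.
docker volume create --driver pxd \
    --opt size=20G \
    --opt repl=2 \
    pgdata

# The container then consumes exactly the volume it asked for.
docker run -d -v pgdata:/var/lib/postgresql/data postgres
```

If the application's needs change, the same programmatic path applies: the deploy script requests a different size or replication level instead of a human re-carving storage on an array.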

I’ve also observed that the physical separation between where resources (such as storage) are offered and where they are consumed (the application) creates additional headaches around managing the connectivity between the two. If I have to manage a protocol like NFS simply to link my storage to my application, that’s one more construct I have to administer in the data center. I’d prefer a model where the application talks to the storage directly and manages it in its own preferred way, not through a protocol that itself needs to be administered.
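The contrast between the two models can be sketched as follows; the hostname, export path, and driver name are made up for illustration:

```shell
# Protocol-mediated: the NFS export is one more thing to administer,
# on the filer and on every host that might run the application.
# /etc/fstab on each host:
#   filer01:/exports/appdata  /mnt/appdata  nfs  defaults  0 0
mount /mnt/appdata
docker run -v /mnt/appdata:/data myapp

# Direct: the application's volume request goes straight to the storage
# layer through a volume driver; there is no mount protocol in between
# for an administrator to maintain.
docker volume create --driver pxd --opt size=10G appdata
docker run -v appdata:/data myapp
```

In the first model the storage-to-host link is configured out of band and must be kept in sync across hosts; in the second, the link exists only because the application asked for it.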

Containerizing an application and running it in an environment that understands a container’s infrastructural requirements goes a long way toward solving this problem.

Docker does a great job of encapsulating an application’s environment. What if I could add to that packaging, at run time, a description of the resources the application needs? What if I could have an infrastructure fabric that works with Docker natively and provisions the infrastructure as well as the runtime for the application?

At Portworx we’re doing exactly that. We’re creating a software fabric for building an Application Defined Data Center using Docker. Doing this requires intelligent software infrastructure that molds to the needs of dockerized applications, but that’s a topic for another post.

Gou Rao

Portworx | Co-Founder and CTO