What pain points led T-Mobile to Kubernetes? How did they scale to prepare for the seasonal spikes in demand around new smartphone launches and the holiday season? And what are they using to hold the data?

Find answers to these questions in this interview with James Webb, Senior Technical Leader at T-Mobile, and Venkat Ramakrishnan, VP of Engineering at Portworx, by Alex Williams and Joe Jackson of The New Stack.

TRANSCRIPT:

Alex Williams: Hey, it’s Alex Williams, The New Stack here at KubeCon + CloudNativeCon in Seattle, Washington, and it’s day two and we have with us some guests who are gonna discuss a little bit about the holiday season I think and how you prepare for it. Joining us is James Webb of T-Mobile. Hey James, how are you?

James Webb: Good. How’re you doing?

AW: And we have Venkat Ramakrishnan, VP of Engineering at Portworx. Thank you very much.

Venkat Ramakrishnan: Thank you.

AW: And Joe Jackson, our managing editor and co-host today.

Joe Jackson: Hello.

AW: So James, why don’t you tell us a little bit about what you talked about today in your discussion for people who weren’t there and I know it was about how you prepared for the retail season. Maybe you can give us a summary of why you organized what you’re talking about?

JJ: This was a talk at KubeCon 2018.

JW: So the talk was basically this: we’ve been running large container platforms at T-Mobile for about three years now, and what we found is that the biggest demand time is really two things. One, it’s when new phones come out from some of our partner suppliers in the October timeframe, and then on weekends leading into the holiday season. The demand on our systems goes up about threefold, so we size our systems and scale up to meet that demand over the course of the winter. With our Kubernetes offering, we knew it was gonna be a new offering, and we weren’t sure what demand was gonna be. We didn’t know how much load would be on the system. So during the talk we talked through what actually happened, and the conclusion was that when I wrote the abstract for the talk, we thought we’d have a lot more demand on the system than we did.

So that was kind of a lesson learned: planning-wise, we weren’t quite ready yet. We had the platform ready, but our customers weren’t quite ready to adopt yet. A lot of our customers were running on our other cloud native platforms, though, and on those we scaled appropriately for the holiday season. What that looks like, and it’s a great thing for the customers on the platform, is that instead of having to go out and request new infrastructure and install it from the ground up, they just spin up more containers. So the talk covered our background in Cloud Foundry and Kubernetes, and how we were trying to make our Kubernetes environment look like our Cloud Foundry environment, and that’s the model for handling load, both normal everyday and holiday.

JJ: Terrific. And so, you’re running this Kubernetes deployment in-house?

JW: We are.

JJ: Alright.

JW: We’re crazy I know, that’s what everyone tells us.

JJ: Nice, nice. So what was the choice with, if we could step back a few years, what was the choice? What’s the pain point that led you to do Kubernetes in the first place?

JW: So, Kubernetes is a 2018 initiative. What drove us to containers in the first place was simply that getting code from the development environment into production took up to six months and 72 steps. We knew that was a problem. The IT department was not able to move as agilely as our business was asking us to. The business would often really press for solutions, and in some cases even look outside for solutions to be provided. So we had to go back, start from scratch, and figure out how other companies were delivering very quickly. One of the things we found is that it’s not just DevOps, it’s having platforms that enable DevOps teams to actually deliver same-day changes, planned deployments 24 by 7, small changes every day instead of large changes once every four months. So what were the platforms that enable that? Cloud Foundry and Kubernetes were the two that seemed to really stand out. And we feel like Kubernetes has won the scheduling war for Docker containers, so that’s why we chose it, and now it’s just a matter of how we bring it to the enterprise.

JJ: Nice, nice. And so the applications, do they have to… Both Cloud Foundry and Kubernetes are good for scalability as you point out, Kubernetes in particular, but did you have to change the way you were developing applications too?

JW: We did. And it’s been a big change for our development community, so they’re still trying to figure out how to use these new tools. We have a lot of help; we bring in consultants from industry to help us refactor our tools and our apps so that they fit on these platforms a lot better. It’s interesting, because there’s resistance at first, but once they’re on the platform, they love it. We remove a lot of operational overhead from them: instead of having to maintain infrastructure, they get to write code. That’s the real key of the platform, developers get to develop instead of doing a bunch of other things they used to have to do.

JJ: Nice. So, the developers are using Cloud Foundry to build and maintain the apps and then you’re running them on Kubernetes.

JW: No, Cloud Foundry is a separate platform, but essentially not everything runs well on Cloud Foundry. We built Kubernetes to cover the gap: containers that don’t run well on Cloud Foundry. And we know there are a lot of them out there. It’s databases, stateful applications, and vendor-supplied containers, things that are non-native to Cloud Foundry. So our current Cloud Foundry implementation covers only a small set of our total application portfolio, and we believe Kubernetes is gonna surpass our Cloud Foundry implementation over the next year, year and a half.

JJ: Nice, nice. So one of the things we’re very curious about is when you go into these production-level deployments of Kubernetes, the networking component of it… how do you get the networking part right? And this is, I guess, where Portworx had a hand in it, or…

JW: Portworx actually helps us with our storage.

JJ: Okay, storage?

JW: So, one of the key things our Kubernetes offering provides is the ability to persist data in our containers, and it’s tricky with containers because you’re having to present storage through several abstraction layers.

JJ: Right.

JW: You’re having to manage it, and we really are looking to put critical data into our Kubernetes environments. So we looked for a partner that would provide us a stable, performant environment, and Portworx checked all those boxes. We’re very happy to have it deployed in our systems.

AW: So maybe you can tell us about the overall architecture and how Portworx fits into that. I’m curious how you think about that substrate?

JW: So, we followed somewhat of a cloud model when we did our infrastructure deployments on-prem, where we create three AZs for what we’re calling a region. We used to call it a pod, but then pod created too many naming conflicts with Kubernetes. The idea is that an individual rack is a shared-nothing rack; it shares no infrastructure with the other AZs. That’s great for Cloud Foundry, where things are stateless and everything’s replicated between AZs at the application layer. For our Kubernetes customers, we wanted to provide a data layer where they didn’t have to worry about replication. We could say, “You come to the platform, make a storage claim, and we will give you a single-AZ volume or a replicated volume depending on your data needs.” We didn’t want them to have to worry about whether or not their data was secure.

AW: We have a little, just get it…

[background conversation]

AW: Yeah, we’re gonna have this going through the stream, but go on.

JW: That’s fine. So at our initial pass at providing Kubernetes environments for our customers, we’re trying to give them more curated environments, where again the efficiency is in them developing, not worrying about whether or not their data’s safe. That’s the idea with giving them an environment where they can just onboard, request space, and know that it’s replicated and that we’ll support them.
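
For readers who want to see what that model looks like in practice, here is a minimal sketch of a storage claim against a replicated, Portworx-backed class. It uses the in-tree `kubernetes.io/portworx-volume` provisioner and its `repl` parameter, but the class name, sizes, and replication factor are illustrative assumptions, not T-Mobile’s actual configuration:

```yaml
# Illustrative sketch only; names and numbers are assumptions.
# A class that asks Portworx to keep two copies of every volume.
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: px-replicated               # hypothetical class name
provisioner: kubernetes.io/portworx-volume
parameters:
  repl: "2"                         # copies maintained across AZs
---
# A team "makes a storage claim" by requesting space from that class.
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: app-data
spec:
  storageClassName: px-replicated
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
```

A single-AZ offering would simply be a second class with `repl: "1"`, so teams choose durability by choosing a class rather than engineering replication themselves.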

JJ: Alright, so when you say customers you don’t mean T-Mobile’s end customers you mean…

JW: Yeah…

JJ: Oh, you do?

JW: So we, T-Mobile and I know other companies do this too… We have big C customers and little C customers. Big C customers are customers with phones. Little C customers are internal IT customers, right?

JJ: Oh, okay.

JW: We are… When we say customer, we mean internal IT customers…

JJ: Gotcha.

JW: In most of those cases.

JJ: Gotcha, terrific. So the storage problem is, of course, that Kubernetes moves containers around as best suited, but this is problematic because the storage for the application has to be in the same place, in terms of, I guess, an IP number or something. And so Portworx handles the translation, I take it?

JW: You can speak to that better than me.

VR: Yeah. So just to step back a little bit, Portworx is built for use cases like what James described. How do you build cloud native data services on any infrastructure? It could be your own data center where you’re building multi-AZ services, or you’re running and onboarding your apps on a public cloud, right? What Portworx does at a very fundamental level is enable deploying highly available services on your container orchestrator, Kubernetes being the most dominant now, though we have supported multiple different orchestrators in our industry. It enables you to deliver these highly available services on top of Kubernetes and provides the data storage layer for Kubernetes. So for example, a pod gets scheduled on one of the nodes, and for some reason the node goes down, or a network partition happens. Then the scheduler reschedules that pod, for whatever reason the scheduler decides that pod has to go to a different node. And Portworx automatically manages the availability of the data for that pod on any other node in the Kubernetes cluster. It’s completely transparent to the users and the operators of the infrastructure. At the same time, we provide a level of abstraction for the customers of someone like James, so they can actually specify the kind of characteristics they want from the storage volume that’s getting provisioned.
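
To make the failover behavior concrete, here is a hedged sketch of how an application might consume such a volume, reusing the hypothetical `px-replicated` class from the earlier example. The workload and image are stand-ins, not anything T-Mobile runs:

```yaml
# Illustrative only. If the node running this pod fails, Kubernetes
# reschedules the pod elsewhere, and the claim re-attaches to a
# surviving replica of the data that Portworx maintains.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: demo-db
spec:
  serviceName: demo-db
  replicas: 1
  selector:
    matchLabels:
      app: demo-db
  template:
    metadata:
      labels:
        app: demo-db
    spec:
      containers:
        - name: db
          image: postgres:10                  # stand-in workload
          volumeMounts:
            - name: data
              mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        storageClassName: px-replicated       # hypothetical class
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 10Gi
```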

AW: So is the storage kind of carried… Do you carry that persistence with the container then, is that how it works?

VR: So, the storage for that container is made available through Portworx’s data storage layer, but the data itself can be available on any of the nodes that are part of that volume quorum, or it can be remotely available as well.

AW: So it’s called, the data can be called then.

VR: Yeah. The data can be called. Yes.

AW: Right. And so, maybe you could tell us a little bit about the architecture underlying that.

VR: Absolutely. Yeah. So at the very fundamental level, Portworx is a distributed block storage layer, right? It’s distributed blocks, and it’s a distributed control plane as well, where you don’t have any centralized name-node or metadata management; the metadata and the data are highly distributed across the cluster. We support things like container-granular volumes, so each container can be given a volume, and you can offer other volume and data services like snapshots, encryption, replication, even backing up to a cloud or migrating data across clouds, per volume, per container. So that’s the underlying architecture: built from the ground up as a distributed block store, a distributed control plane, and an autonomous policy engine, and it’s also cloud and application aware.
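
As one concrete example of those per-volume services, the alpha snapshot API of this era (which Portworx supported through its scheduler integration) let a single claim be snapshotted declaratively. The names here are placeholders:

```yaml
# Hedged sketch: snapshot one claim's volume using the alpha
# external-storage snapshot API of this period.
apiVersion: volumesnapshot.external-storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: app-data-snap          # placeholder name
spec:
  persistentVolumeClaimName: app-data
```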

AW: So how does that use case then apply… How does that apply for T-Mobile? How are you using the…

JW: So the first use case is just making the data available to a local cluster. That’s sort of table stakes: we want to have a reliable volume service available for customers to use. Coming from a UNIX background, the tools to manage your storage are incredibly important, and Portworx has an impressive suite of services that really help you control what you offer your customers and how it’s managed, and it gives you good visibility behind the scenes into what it’s doing. So that’s table stakes: I want storage for this single cluster. Next is, I want to deploy my application to three clusters and keep data in sync across three clusters. How do I do that? One way is to put the burden on the application team to replicate data at the application layer. Some things do that natively, no problem; they can do that, and we’re not gonna block it. Other customers will come and say, “I don’t wanna worry about that, I don’t wanna worry about the mechanism for it, I just want it to happen.” So they can request that, and we can provide that service, initially somewhat managed for them, and ultimately the idea is we do it in a self-service fashion: we let them manage where their data is and where they’re running their workloads.

JJ: Terrific. Hey, let’s talk surge traffic. I remember last year at container days, an e-commerce site, I think it was a shoe company, talking about how they’d run these promotions where you had to get to the site at 12:00 at a certain time. Then, of course, their traffic spikes by 1200% or something like that. But your situation’s more seasonal. And you had mentioned that sometimes you get the anticipated traffic, sometimes you don’t. Can you talk a bit about your scaling mechanisms?

JW: So most of the platforms we run provide an auto-scaling mechanism. The problem is that most auto-scaling mechanisms work on a one-minute interval, and what we found is that that cannot keep up with the jump in load. Phones go on sale at midnight, and within two or three minutes the load jumps by 10X. So what the platforms have allowed us to do now is that application teams will proactively go in and scale up their applications to what they think they need, three or four X of what their normal deployment is, and usually that’s enough buffer. Then on that initial bump, if they need additional resources, the auto-scaler can pick that up. But usually we cover that initial hockey stick and the pressure it puts on the infrastructure. Where that’s still a problem is in legacy infrastructure components that have not moved onto the platform. We still hit that problem, but on platform we’re usually able to deal with it; off platform it takes a little more work. So…
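
The pattern James describes, pre-scaling ahead of a launch and letting the autoscaler absorb anything past the buffer, roughly translates to raising an HPA’s floor. A sketch with invented names and numbers, not T-Mobile’s actual configuration:

```yaml
# Illustrative only. Before launch night, raise minReplicas to
# 3-4x the everyday baseline so the midnight spike lands on warm
# capacity; the autoscaler then grows past that floor if needed.
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: storefront              # hypothetical app
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: storefront
  minReplicas: 40               # launch-night floor (~4x a normal 10)
  maxReplicas: 100
  targetCPUUtilizationPercentage: 70
```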

JJ: Nice, nice, terrific. So now, Kubernetes is still a relatively new technology. There must have been some challenges in terms of putting it into production. Maybe could you…

JW: It’s both challenges on our side and on our application teams’ side. It’s a new technology, we’re not as familiar with it as we should be, and we’re getting a lot of help, again, from partners that have a lot more experience in the field. We are a big Pivotal customer, we deploy Pivotal Container Service, and they have been very helpful in getting our workloads up and running. So rather than have large teams of back-end engineers figure out how to deploy the clusters, we just get the clusters on demand from them, and we can focus on above-the-waterline value problems, which is: how do we get our apps into production?

AW: So Venkat, what are some of the services that you provide T-Mobile? For example, the services James was mentioning, did those come with the Portworx platform, as part of the Portworx technology?

VR: Yeah. I mean, some of the services that James’ team uses are at the storage services level: Portworx offers distributed block, distributed file, distributed object, right. And how we enable that is, an application spec in Kubernetes can typically specify what kind of data service it needs from Portworx, because Portworx volumes are natively supported in Kubernetes distros, right. So they can specify what kind of service they need, and when that app gets scheduled it can programmatically request that service from the Portworx data storage layer. We create that service and the service endpoint, which could be a volume or a file mount or a REST endpoint, and make it available to that application on the node where the app gets scheduled, right.

And from that moment on, there is a relationship established between the app and its data and volume. Portworx keeps track of where that pod gets scheduled, or where the app gets scheduled again, how to make the data available, and automates all of that underneath. So a large-scale cluster operator doesn’t need to worry about “How do I move my data, how do I migrate my data, how do I ensure the infrastructure is resilient to failures in nodes or across racks?” Portworx enables that. Portworx also does cross-availability-zone replication. We are topology aware, so when you deploy Portworx on multiple racks, once we discover the topology we can place the data across different availability zones; we make the entire infrastructure tolerant to any failure of a single rack or a single data center, right. All of those services get wrapped under a very simple, programmatic, declarative interface to consume. Someone can come in and, in real human-readable language, specify what kind of characteristics they want from their underlying infrastructure, and with the help of Kubernetes, Portworx makes it happen for them.

JJ: How does Portworx itself, the software itself, avoid the single point of failure problem? It sounds like you guys are running a distributed…

VR: It’s a great question. So Portworx, again, is built as a distributed control plane. We do not have a name-node concept, so there is no single node or point of control whose failure actually takes the cluster down. We use distributed protocols like gossip and gRPC for communication across the cluster: we gossip across the cluster nodes to make sure we understand the cluster membership, and we use Raft, which is a cluster consensus protocol. Combining all of this, everything wrapped under a distributed control plane, helps us build a distributed cluster model with no single point of failure.

AW: Isn’t etcd very similar to Raft?

VR: etcd is one of the Raft-based servers; you’ve got Consul, etcd, and a whole bunch of other things.

AW: So you use a Raft-based…

VR: We use a Raft-based, a Raft-inspired…

AW: So Raft is one of your own that you…

VR: We have taken some… We have made some modifications to whatever is freely available and we have kind of fine-tuned it for our own use case.

JJ: Now James, what are you actually using to hold the data? Are you doing NoSQL? Are you doing SQL? Are you doing flat files, or…?

JW: It’s up to the application teams; it just depends on what they need. For messaging queues, some teams are using Rabbit. Teams that just need simple data services deploy a simple database they’re familiar with. We have a lot more familiarity with ACID-compliant databases than we do with NoSQL at this time, but teams are starting to investigate that. So right now it’s kind of what the application team brings. Longer term, we’re looking to offer managed data services, basically operator-based, where if you need a data service you can just hit a service broker and it’ll spin up an instance to provide that data, right. And it’s back to the same thing: this is where the storage is going to be very important. Right now, most data on the platform is just simple caching data, right? If something happened, we could recreate that data by going to the source of record. But we’re looking to promote source-of-record data to the platform. We’re gonna do that cautiously, but we’re trying to build the platform to be resilient enough to do it, right.
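
From the consumer’s side, the operator-based model James sketches usually boils down to declaring a custom resource and letting an operator do the provisioning. This example is purely hypothetical; the group, kind, and fields are invented to illustrate the pattern, not an actual T-Mobile or vendor API:

```yaml
# Purely hypothetical custom resource illustrating the operator
# pattern: an operator watching this kind would spin up and manage
# a database instance, with storage on the platform underneath.
apiVersion: data.example.com/v1alpha1
kind: ManagedDatabase
metadata:
  name: orders-db
spec:
  engine: postgres        # an ACID-compliant engine, per the discussion
  version: "10"
  storage:
    size: 50Gi
    replicas: 2           # replicated volumes underneath
```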

JJ: So you’re looking to minimize the cache layer then or the opposite?

JW: We’re looking… So, in that case, there would still be a caching layer. At this point, we’re just looking to start pulling data out of these large old monolith applications, right, and distribute it closer to the application, built in a more cloud native way instead of a single database running in an active/passive failover mode.

AW: What databases are you running internally then?

JW: I don’t like to talk about that [chuckle]

[laughter]

AW: Okay, that’s fine.

JJ: He doesn’t want to talk about them.

AW: We’ll edit that out. So in any case, when we’re talking through these approaches, how does this compare to the older ways that you used to do this?

JW: It’s night and day, right. We are managing resources within a platform instead of managing VMs and VM clusters. Instead of building a VM, installing a bunch of agents on it, handing it off to this team for monitoring, handing it off to this team for backups, handing it off to this team for something else, and then handing it to the app team so they can install the application, right. The way we were modeled, those were all separate groups, and there’s a lot of lag between each of those services, so it could take you weeks to get a VM. Just a VM, it could take you weeks. We’ve had customers reach out to us, give us their requirements, and we onboard them that same day. They can push an application to production that same day, right. ’Cause we build it all in: all the resiliency is already there, all the load balancing is already there, there’s an SSL cert already there if they’re using HTTPS, right. So this has had a huge benefit for our application development teams. And not just development teams, but the actual operations teams as well, because they used to have to worry about all of that too; now they can work closely with their development teams and focus on functionality, not infrastructure maintenance.
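
The “load balancing and SSL already there” piece typically reduces to the platform wiring up routing and a certificate for each onboarded app. A minimal sketch using the 2018-era ingress API; the hostname and secret name are placeholders:

```yaml
# Illustrative only: platform-provided routing and TLS for a newly
# onboarded app, with the certificate pre-provisioned as a secret.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: team-app
spec:
  tls:
    - hosts:
        - team-app.internal.example.com
      secretName: team-app-tls        # cert managed by the platform
  rules:
    - host: team-app.internal.example.com
      http:
        paths:
          - path: /
            backend:
              serviceName: team-app
              servicePort: 8080
```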

JJ: So focus on the business logic?

JW: Exactly, yep.

AW: It’s interesting. When you think about this, how do you think about your team composition and how do you develop your teams?

JW: It’s changing, and we covered this in the talk. When we were first ramping up our team, it was hard to find people. We just opened some job reqs, and we were looking for very specific experience with Cloud Foundry or Kubernetes, and very few people applied. What we’ve come to realize is that the people who have those exact skill sets are very valued by their companies and aren’t really looking right now. So we turned internal and started hiring some of the folks on our internal teams, whether from our UNIX team, our Windows team, or our storage team, people who were interested in where we were going. We looked more for aptitude and attitude than for a specific skill, with the assumption that we could train them up, or they would train themselves, and that’s happened, right. We have a couple of guys who will come in in the morning, go home, and come back in the morning, and you can tell they were at it very late, because they were just looking to solve a problem. I don’t want people working outrageous hours, but it’s nice to see somebody stick with a problem until it’s solved. We’ve got a great team and I just couldn’t be happier.

AW: That’s great.

JW: Yeah.

AW: That’s great. So when you’re thinking about customers out there, like T-Mobile, do you see a lot of changes in how they’re thinking about their teams as they start to work with you?

VR: Yeah, absolutely. I think one of the things I’ve seen is a very good separation, a kind of crystallizing of responsibilities. There’s a team that manages infrastructure and provides the infrastructure services and can onboard different apps, and then you’ll see a lot of teams where the developers are consumers of that infrastructure. They could be internal customers, internal developers, or people who are developing apps for internal use, or who are using that infrastructure to production-test their apps and then roll them out into the cloud to run in production, right. So there’s a pretty good separation of responsibilities starting to come up. And for someone like Portworx, especially in the Kubernetes community, we are talking to almost two sets of audiences. There are the people who operate the infrastructure, and they have specific requirements on how they build infrastructure, how they manage it, what kind of monitoring they need, and the compliance and regulatory things that need to be taken care of. And then there’s the developer community, the consumers of that infrastructure, and what they want, how much power they want from their underlying data services, right. What Portworx was built for, from the ground up, was to be able to serve both of these audiences.

AW: Right.

VR: And that’s how we’re seeing the community evolving.

JW: And those audiences are converging, right?

VR: Exactly.

JW: With the platforms now, we are closer than ever with our customers, right.

VR: Yes.

JW: And we have to be, because part of it is just that we’re learning together, right, how to operate in the cloud native world. Back in my UNIX past, a lot of times you would just build servers, hand them off, and never hear from your customer again until you had to tell them you were patching their server. That’s just not the case anymore. It’s a much more interactive community. We like to bring our customers together, and what we’ve found is our customers have actually gotten to the point where they’re answering each other. We have a Cloud Foundry Slack channel and a Kubernetes Slack channel, and often a customer will post a question and another customer will respond, right. And that’s fantastic behavior.

VR: I think it’s the right thing. I agree with you.

JW: Not only is it great to work with your peers, it’s also great because we don’t have to answer every question. It’s fantastic.

VR: I agree with what he just said. The community loves Slack, and the way developers are working with each other, helping build each other’s infrastructure and the apps they need, is great. The other thing is that, like never before, app developers have a lot more control over the infrastructure. So it’s becoming much more of a team effort, as opposed to filing a ticket that gets serviced God knows when, where the person who deploys the infrastructure and the person who consumes it never interact for a long time. Instead, things move very fast in an agile manner, dynamically tuned, and that helps you build, test, and run apps in production much faster than before. That’s what the new workflows really enable.

AW: So you run Kubernetes internally, on-prem, and what do you see? You’ve come a long way in terms of your infrastructure. What are some of your goals for next year? Are you starting to think through what you’d like to improve upon? What do you wanna build upon?

JW: It’s adoption. Now that we’ve built this platform, we’re hoping we’re gonna get customers coming in, and we know we will, right. That’s the way technology adoption at T-Mobile feels: you either kind of flatline and then drop off as people move on to new technologies, or you hockey stick, right? I feel like we’re just at the beginning of the curve change in the hockey stick, so it’s gonna be onboarding. The next thing is looking at extending Kubernetes with service mesh, Istio, Envoy. The operator landscape is just starting to emerge. Federation of clusters, where an app team can just push an app and it runs in multiple clusters automatically without them having to do anything. So it’s just gonna be keeping up. The technology’s changing so fast, it feels like keeping up with it is a job all by itself, much less translating what we learn into our on-prem offering, and then also helping customers understand what we’re pushing there. But again, that’s where we’re looking to our customers to help us: “Hey, we enabled these features. You help us figure out how to use ’em and figure out where the gaps are in what we’re providing.” So…

JJ: Are you working… Are there any internal projects, you mentioned the operators. Are there any projects going on that you might consider open sourcing one day?

JW: Yeah, so we also have a public cloud team, and they have a Kubernetes offering too. It just depends, again, back to internal customers: some customers wanna be closer to the internet itself, and some wanna be back office, facing internal call centers and retail stores, that kind of thing. So they are looking at building some abstraction layers. That’s actually been a theme of the conference here: should app devs need to use kubectl, or should there be an abstraction layer above that? They’re trying to solve that problem, and it sounds like a lot of companies are doing their own internal development or looking to the community to help solve it. So our internal team is looking at building a product and open sourcing it, and we’re looking at Knative to solve that problem. T-Mobile’s changed in the past few years. Five years ago, it felt like you mentioned open source and you got, not ignored, but we weren’t ready for it yet. Leadership now realizes the value, not just in terms of what we get from the open source community, but the value we get in recruiting and visibility in the community. We get a lot more than we put in. That’s changed, and now we actively contribute.

JJ: Nice. Nice. We’re certainly hearing a lot more about managed Kubernetes environments for developers. You don’t wanna give ’em, I guess, the keys to the hot rod without… [laughter] Some, I guess, guardrails in place.

JW: Yeah. In a way, it’s a step back, because now they have access to all these infrastructure primitives again, and it’s not that we don’t trust them; it just creates more ability for them to do things in a non-standard way. This team might do something differently from that team. We’re trying to get folks to converge on a very similar set of patterns, and the easiest way to do that is to give them a common abstraction layer to deploy their apps.

AW: So what was, so just in conclusion, what was the hot topic that you guys were hearing here? What were some of the conversations that you had with people that were… That you remember, and that will go with you?

JW: Actually, what we just talked about. I think an abstraction layer is gonna be a big focus, not just for us but for a lot of teams, in the next year. Operators are another big thing. Knative, which is back to the abstraction layer, is another really big thing. So there are a lot of really interesting projects, yeah.

VR: Yeah, as James said, abstraction layers. The ability to abstract different cloud infrastructures, to make multi-cloud a consumable resource, and to orchestrate apps and move apps between different cloud infrastructures seamlessly. I heard that a lot here. We were describing what we just launched, PX-Motion, which is a multi-cloud data orchestrator, and customers who had heard about it came and told us, “Hey, tell me more about it.” And the moment we would say, “We enable you to run apps in two different cloud infrastructures and to move data across data centers,” they’re like, “That’s exactly what I was looking for, and with Kubernetes it’s much easier. Tell me more about how I can make it programmable, how I can use it in my infrastructure.” So from that standpoint, just talking to everybody, the customers and the partners here, I think hybrid multi-cloud is a reality, and Kubernetes is accelerating it, right. That’s my takeaway from this KubeCon.

JW: Alright, yeah, actually this KubeCon here is pretty amazing. The attendance is amazing, the enthusiasm is amazing, and just the number of companies that are attacking all these edge-case problems; really, you go around and all of them have value. You see the problem they’re trying to solve, and it’s gonna be very interesting to see how it all shakes out over the next year. It’s a brand new ecosystem, right?

VR: Absolutely.

AW: Well, it’s keeping us all busy isn’t it?

JW: It is.

VR: It is.

AW: Yeah, well, thank you very much for your time. James, good luck with the holiday season. It’s here, we’re right in the middle of it, how’s it going?

JW: We’re pretty much through; the biggest days are almost behind us. We have one or two more weekends, but it’s, you know, Black Friday, Cyber Monday, and then back in October when…

AW: And Venkat thank you for joining us…

VR: Thank you. Yeah, thank you.

AW: I appreciate your time and Joe, another good interview that’s so appreciated and we’ll talk to you soon.

VR: Exciting times. Thank you.
