Handing off your Kubernetes clusters to a managed service provider can feel like sending your kids off to college—it’s scary at first, but eventually there’s a lot less work to do around the house.
The managed Kubernetes options—or Kubernetes as a service (KaaS)—from the Big Three public cloud providers, Amazon Web Services (AWS), Google Cloud, and Microsoft Azure, have all made huge strides over the past few years. They help customers run and orchestrate containerized workloads without having to master the ins and outs of YAML configuration files or worry about autoscaling, updates, and cluster management.
“When enterprises consider something strategic, the initial inclination is to run it themselves. Then they realize over time as they acclimate that not only is it not giving them any competitive advantage, it is more likely than not the vendors can run it better than they can,” said Stephen O’Grady, cofounder of the developer-focused analyst firm RedMonk. “Is every enterprise going down this route? Not yet, but the appetite and direction of travel seems clear,” he added.
Here are six reasons to consider a managed Kubernetes service.
Lower management overhead
Let’s start with the obvious reason first. “It is less work, let’s be clear,” said Sylvain Roy, senior vice president of technology platforms and engineering at travel technology firm Amadeus. “It is operated for us and that matters, because we have a challenge to have all the people we need to run [Kubernetes].”
Similarly, a small group of engineers at the construction company Strabag has been running containers since 2006, moving to a self-managed open source Docker and Kubernetes setup over the past four years. Now the group is looking to automate as much of the cluster management as possible, either by modernizing existing apps and handing the underlying Kubernetes clusters off to Google Cloud, or by empowering developers to run new applications in the cloud or, when some on-premises data transfer is required, in a hybrid setup using Google's Anthos service.
“The journey is to hand tasks off that are fit to be handed off,” said Mario Kleinasser, team leader for cloud services at Strabag.
At financial data giant Bloomberg, "it makes sense to leverage a vendor when you don't have SRE [site reliability engineering] teams or teams managing the release cycle of Kubernetes, for those focused on running their apps who don't want to manage Kubernetes," said Andrey Rybka, head of compute infrastructure at Bloomberg.
Today, Bloomberg is still running most of its Kubernetes workloads on-premises, but it is also starting to use all three major cloud vendors for managed workloads where appropriate.
You will need fewer experts
Kubernetes management skills are hard and costly to come by, especially when you are writing your own YAML config files. If you have people who can hand-tune a Kubernetes cluster, you will probably want to free them up to manage your internal platform or any particularly important or tricky workloads by handing off the management of clusters for more-vanilla workloads.
“It’s not easy to get and keep people for these technologies, and that is clearly a challenge,” Amadeus’s Roy said.
Better reliability
Simply put, the mega cloud vendors are often better placed to manage your Kubernetes clusters than you can yourself, due to the scale of their engineering teams, their wide lens across customer deployments, and their access to the underlying telemetry of those deployments.
“It is more likely than not the vendors can run it better,” RedMonk’s O’Grady said. “Vendors have the telemetry and the advantage of seeing all of their customers run this, as opposed to a single enterprise only having their own model to go by.”
Take Bloomberg, which turned to Kubernetes in the heady days of 2015, when it was still only an alpha release, before moving into production in 2017 once the necessary continuous integration, monitoring, and testing were proven out. While Bloomberg engineers still largely self-manage Kubernetes clusters for on-premises applications, Rybka said it increasingly makes sense to use managed options when workloads run in the public cloud, especially "from a reliability perspective."
Don’t worry about upgrades and patches
Upgrades and patches are two of the least enviable jobs for anyone managing their own Kubernetes, which is why the managed providers prioritize taking these tasks off your plate.
“To patch, update, and manage Kubernetes yourself is complex and complicated and is completely undifferentiated heavy lifting,” said Deepak Singh, vice president of compute services at AWS.
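The gap Singh describes is easy to see at the command line. The sketch below contrasts a self-managed upgrade on a kubeadm-built cluster with the managed equivalent on EKS; the cluster name, node name, and version numbers are illustrative, and the commands assume you have a working kubeconfig and AWS credentials.

```shell
# Self-managed (kubeadm-based cluster): the control plane and every
# worker node are your problem. On the first control-plane node:
kubeadm upgrade plan              # see which versions you can move to
kubeadm upgrade apply v1.29.0     # upgrade the control plane components

# ...then, for each worker node in turn:
kubectl drain worker-1 --ignore-daemonsets   # evict workloads safely
# upgrade kubelet/kubectl packages on the node, restart kubelet, then:
kubectl uncordon worker-1                    # return the node to service

# Managed (EKS shown): one API call, and the provider rolls the
# control plane for you.
aws eks update-cluster-version \
  --name my-cluster \
  --kubernetes-version 1.29
```

The self-managed path also leaves you responsible for sequencing: control plane first, then nodes, one minor version at a time, while the managed service enforces that ordering on your behalf.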
Maintaining cloud momentum
For organizations pushing forward with public-cloud-first strategies, adopting more managed services can help boost momentum.
Amadeus recently signed a deal with Microsoft to do just this. “They move fast and we want to benefit from that, so using more managed services is something we will consider every time,” Roy said. “The way I see it, that is the way to benefit from this momentum.”
Now, vendors are converting their knowledge and experience around Kubernetes best practices into more opinionated versions of Kubernetes services, with simplified paths for adoption, like GKE Autopilot.
“Some will see it as training wheels, but I see Autopilot as a seatbelt,” said Kelsey Hightower, a principal engineer at Google. “The car can drive at the same speed but there is more safety by default; it’s a bullet-proof configuration. People always ask us for best practice and what decisions do they have to make; Autopilot gives them that.”
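Hightower's "decisions you have to make" point is visible in the provisioning commands themselves. A rough sketch, using hypothetical cluster names and a region chosen only for illustration:

```shell
# Standard GKE: you choose machine types, node counts, zones, and you
# own the node pools afterward.
gcloud container clusters create my-cluster \
  --num-nodes 3 \
  --zone us-central1-a

# GKE Autopilot: Google manages nodes, scaling, and hardened defaults;
# you state only where the cluster lives.
gcloud container clusters create-auto my-cluster \
  --region us-central1
```

With Autopilot you are billed for the pod resources you request rather than for the underlying nodes, which is part of what makes it "a seatbelt" rather than a different car.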
Similarly, AWS’s Singh says that the company is becoming better at taking what it has learned about running Kubernetes at scale and building that “operational posture into EKS ... which allows us as service providers to build that into these managed services out of the box. That is another reason you will see this trend accelerate.”
However, these sorts of services tend to naturally arouse fears of vendor lock-in. “Autopilot is a harder one because I remember being asked the question about why people are focused on Kubernetes as a middleware layer if no one ever switches between vendors. The answer is having the option to say I can walk away,” said RedMonk’s O’Grady. “The more you rely on vendor-specific options, that [ability to walk away] goes away, so that is a harder choice for enterprises.”
It’s still open source, and portable
Managed providers have had to earn the trust of the open source community and of customers who want to be sure that they are consuming a distribution of Kubernetes that is as close to the vanilla open source version as possible, to allow for greater portability and avoid lock-in.
“There was a fear when Kubernetes came out that it was a bait-and-switch, a land grab from vendors to take from open communities and that it would morph into open core. It has taken five, six years almost to disprove that,” said Google’s Hightower.
Similarly, AWS’s Singh says it is important to some customers that EKS stays close to the open source distribution of Kubernetes, “with no weird voodoo going on there that would create differences.” AWS recently open-sourced its EKS Distro on GitHub as a way to prove this out.
Joe Beda, a cofounder of Kubernetes and principal engineer at VMware Tanzu, admits that "it is hard to have this conversation without talking about lock-in." He urges anyone making these buying decisions to assess the risks appropriately.
“How likely are you to move away? If you do what will be the cost of doing that? How much code rewriting will you need to do and how much retraining? Anybody making these investments needs to understand the requirements, risks, and trade-offs to them,” Beda said.
For its part, the CNCF runs a Certified Kubernetes Conformance Program that ensures interoperability from one installation to the next, regardless of who the certified vendor is.
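The conformance program is something you can exercise yourself: CNCF certification runs on Sonobuoy, an open source test runner you can point at any cluster to check whether it behaves like upstream Kubernetes. A minimal sketch, assuming Sonobuoy is installed and your kubeconfig points at the cluster under test:

```shell
# Run the official conformance suite against the current
# kubeconfig context (this takes a while on a real cluster).
sonobuoy run --mode=certified-conformance

# Poll until the run completes, then pull down the results tarball.
sonobuoy status
sonobuoy retrieve
```

Vendors submit exactly these results to the CNCF to earn the Certified Kubernetes mark, so the same check that certifies GKE, EKS, or AKS is available to audit your own installation.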
So, why isn’t everyone doing this?
For large, complex organizations like Amadeus and Bloomberg, there will likely always be some workloads they don't feel comfortable handing off to a managed service provider, whether because of sensitive data security concerns, tricky on-premises dependencies, or protective platform teams that want to hand-tune their own clusters.
“Those who want to self-manage parts will be worried about the data plane; they need to customize or specialize in certain areas. They don’t mind a managed control plane,” Google’s Hightower said.
The reality, however, is that all the reasons to operate Kubernetes on your own are becoming less and less convincing.
“Perhaps you see it as an existing investment that no one wants to write off as a sunk cost yet, or there are conservative organizational concerns about a set of workloads or the business,” RedMonk’s O’Grady said. “Or there is apprehension to have a piece of your infrastructure, which is perceived as strategic, leave your control. But when you see your peers doing it, that apprehension goes away, and you will see more people realizing the benefits.”