A Closer Look at Kubernetes Management & Orchestration Services

Today, Kubernetes is used across organizations around the world to deploy and orchestrate container-based workloads. But how to operate and manage a Kubernetes infrastructure at scale is still a heated discussion.

Michael Shaul, NetApp Principal Technologist, sat down with Bruno Amaro Almeida, an independent advisor and architect and frequent NetApp BlueXP blog contributor, to discuss the challenges involved in operating Kubernetes architectures and how managed services are changing the status quo.

In this article, they share their insights and points of view on Kubernetes management and orchestration services.

Michael Shaul is NetApp’s Principal Technologist. Based in Israel, with a long career in data management and infrastructure, Michael is part of the Cloud Data Services CTO office and has a unique, in-depth perspective on NetApp cloud technologies.
https://www.linkedin.com/in/mickeyshaul/

Bruno Amaro Almeida is an independent advisor and architect. Based in Finland and working in the areas of governance, cloud, security, and data engineering, Bruno has a balanced perspective that comes from helping organizations both with CxO-level cloud and data strategy and with hands-on engineering execution.
https://linkedin.com/in/brunoamaroalmeida

What is your advice to engineering leaders on adopting Kubernetes and its operational overhead?

Bruno Almeida (BA): Kubernetes is often associated with (and feared for) the additional operational overhead it takes to manage. For engineering leaders who are not familiar with Kubernetes, that creates a certain reluctance to adopt it at scale.

Michael Shaul (MS): Kubernetes can indeed be quite complex to install, configure, and operate. However, it truly depends on what your use cases are and what you need Kubernetes to do. As an ecosystem, Kubernetes consists of multiple building blocks that can be arranged and customized exactly as you need to fulfill your business goals.

Because Kubernetes is a plug-and-play platform, you can replace pretty much anything in it, from DNS and storage services to network interfaces. But that level of customization can add a lot of pain to managing Kubernetes: compatibility testing, release checks, component upgrades, and other operational tasks.
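
To give a sense of the bookkeeping that kind of customization implies, here is a minimal sketch using the official Kubernetes Python client (assuming a working kubeconfig) that prints the control-plane version alongside each node’s kubelet and runtime versions, the sort of check you would run before upgrading a swapped-in component:

    from kubernetes import client, config

    # Load credentials from the local kubeconfig (assumes kubectl access already works).
    config.load_kube_config()

    # Control-plane (API server) version.
    version = client.VersionApi().get_code()
    print(f"Control plane: {version.git_version}")

    # Kubelet and runtime versions per node; mismatches here are a common source of
    # compatibility problems when components are upgraded independently.
    for node in client.CoreV1Api().list_node().items:
        info = node.status.node_info
        print(f"{node.metadata.name}: kubelet {info.kubelet_version}, "
              f"runtime {info.container_runtime_version}")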

However, for the vast majority of customers, a managed Kubernetes service is enough and will lower the adoption threshold when it comes to maintenance, upgrades, installation, and other operational factors. Nowadays there are managed Kubernetes offerings from many different vendors, covering environments from on-premises to the public cloud providers. It’s a bit like clothing: buying a one-size-fits-all t-shirt versus getting a custom-tailored suit.

Which managed Kubernetes service is leading the way? What are the differences between the most popular ones?

MS: It’s hard to say which one leads the way. There are great managed Kubernetes services out there. Good examples are Red Hat OpenShift, Amazon Elastic Kubernetes Service (EKS), Google Kubernetes Engine (GKE) and Azure Kubernetes Service (AKS).

Each service is custom-built for its provider and aims to make it as easy and convenient as possible to operate a Kubernetes cluster on that public cloud. Among those examples, OpenShift takes a slightly different approach, offering both on-premises and cloud deployments (on top of providers such as AWS and Azure) and complementing Kubernetes with additional capabilities that ease application lifecycle management.

Managed Kubernetes services naturally have their own set of limitations. They are certified environments, and thus most configuration options are closed. A few of those managed services give you the ability to install your own plugins, such as a Container Storage Interface (CSI) driver, and to change some settings, but the options are fairly limited. There are, however, some perks. In addition to a lower operational overhead, managed Kubernetes services provide unmatched integrations with other existing services within the cloud provider.
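
As a rough illustration of that limited but useful flexibility, the sketch below (again using the Kubernetes Python client, with a kubeconfig assumed to be in place) lists the CSI drivers and storage classes registered in a cluster, which is roughly the level of storage customization most managed services expose:

    from kubernetes import client, config

    config.load_kube_config()
    storage = client.StorageV1Api()

    # CSI drivers registered in the cluster, e.g. a vendor driver installed as an add-on.
    for driver in storage.list_csi_driver().items:
        print(f"CSI driver: {driver.metadata.name}")

    # Storage classes built on top of those drivers; the provisioner field ties the two together.
    for sc in storage.list_storage_class().items:
        print(f"StorageClass: {sc.metadata.name} -> provisioner {sc.provisioner}")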

For a more advanced or complex use case, what we wind up seeing is customers leveraging managed Kubernetes services alongside self-managed clusters. It’s always a tradeoff between the amount of control you need and the operational overhead you are willing to take on. Finding the balance between those two is truly important.

What about other managed Kubernetes services such as NetApp Astra, Azure Arc, or Google Anthos?

MS: A multitude of new Kubernetes-related services has been popping up in the market. As the Kubernetes ecosystem continues to grow, and the cloud becomes a dominant environment across organizations, this is an expected consequence. Some companies, such as Red Hat and NetApp, were born on-premises and later extended to the cloud with services like OpenShift or NetApp Astra.

On the other hand, the opposite has also happened, with Azure Arc and Google Anthos leveraging Kubernetes’ built-in capabilities to bring cloud workloads on-premises. Both Anthos and Arc enable organizations to extend their computing needs and manage the entire application lifecycle from the public cloud providers out to other environments such as on-premises, edge, or third-party cloud providers.

We are definitely seeing a more hybrid world, where environments are being extended beyond their traditional borders. NetApp Astra is a good example of a service built for this hybrid and multicloud world in the way it handles the data protection and lifecycle management layers. As data moves across providers, regions, and environments, Astra simplifies data compliance and governance for those Kubernetes-based applications.

New and existing managed services are gaining more and more native integrations with Kubernetes, expanding the ecosystem and complementing existing Kubernetes capabilities.

What can we expect in the near future? Are there other container orchestration options worth keeping an eye out for?

MS: There are a few other container orchestration options, such as HashiCorp Nomad and Docker Swarm. Yet I can’t really say that any other project rivals Kubernetes. They bring some interesting capabilities but mainly serve a niche. It will be extremely difficult for any of those projects to gain major traction and adoption in the foreseeable future.

The Kubernetes ecosystem is quite vast, but as a near-term trend, I definitely see the abstraction level going up: layers on top of layers striving to make operational overhead minimal to nonexistent.

A great example of this trend is machine learning. Kubernetes has been gaining popularity for machine learning operations (MLOps), yet we can’t expect data scientists to learn Kubernetes. In fact, they don’t even need to know what Kubernetes is. Their focus is on training machine learning models, deploying applications, and validating experiments, among other tasks. Kubernetes powers a lot of these tasks in a layered way that is transparent to users, with many managed service providers abstracting things away from the people who use those services.
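
To make that abstraction concrete, here is a minimal, hypothetical sketch of what such a platform layer might do on a data scientist’s behalf: the user asks for a “training run,” and underneath, the request is translated into a Kubernetes Job via the Python client (the helper name, image, and namespace are placeholders, not taken from any specific MLOps product):

    from kubernetes import client, config

    def submit_training_run(name: str, image: str, command: list) -> None:
        """Hypothetical helper a platform might expose; the caller never sees Kubernetes objects."""
        config.load_kube_config()
        job = client.V1Job(
            api_version="batch/v1",
            kind="Job",
            metadata=client.V1ObjectMeta(name=name),
            spec=client.V1JobSpec(
                backoff_limit=2,
                template=client.V1PodTemplateSpec(
                    spec=client.V1PodSpec(
                        restart_policy="Never",
                        containers=[client.V1Container(name=name, image=image, command=command)],
                    )
                ),
            ),
        )
        # "ml-jobs" is a placeholder namespace; it must already exist in the cluster.
        client.BatchV1Api().create_namespaced_job(namespace="ml-jobs", body=job)

    # The data scientist's view: just a training run, no Kubernetes in sight.
    submit_training_run("train-demo", "python:3.11-slim", ["python", "-c", "print('training...')"])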

However, there is also an opposite extreme to this scenario. We see providers making things “less transparent” and giving users a lot of granularity and flexibility to extend their container orchestration across any sort of environment: cloud, on-premises, or edge.

BA: Managed Kubernetes providers are definitely focusing on providing the same smooth cloud-native developer experience regardless of where the workloads run. One could say that the Kubernetes ecosystem is expanding, both for those who need to tweak all the nuts and bolts and for those who end up leveraging Kubernetes capabilities without even knowing they are using them.