More about Kubernetes in Azure
- Kubernetes in Azure: Architecture and Service Options
- Azure Containers: Top 4 Options for Running Containers on Azure
- GKE vs AKS: Compared on Availability, Security, Tooling & More
- Azure Container Instances vs Azure Kubernetes Service (AKS): How to Choose
- Azure Container Instance (ACI): The Basics and a Quick Tutorial
How Can You Run Containers on Azure?
Microsoft Azure offers several services designed especially to help you run containerized applications:
- Azure Kubernetes Service (AKS)—provides managed container orchestration within the Azure cloud. The AKS platform offers functionality that simplifies container orchestration, enabling enterprises to more easily manage their workloads.
- Azure Container Instances (ACI)—provides managed infrastructure provisioning for containers deployed on Azure. ACI lets you create new containers and automatically provisions the resources they need.
- Azure Service Fabric—provides a distributed system that simplifies the packaging of containers and microservices. It lets you use any language to manage hybrid deployments and provides a lightweight runtime.
- Azure Red Hat OpenShift—a fully-managed version of the OpenShift platform on Azure, which provides features like storage management, image registries, logging and monitoring, which extend the functionality of Kubernetes.
In addition to the above orchestration services, Azure provides tools that you can add to your containerization stack. We’ll cover Azure Container Registry, a private Docker registry service in the Azure cloud.
This is part of our series of articles about Kubernetes in Azure.
In this article:
- Azure Kubernetes Service (AKS)
- Azure Container Instances
- Azure Service Fabric
- Azure Red Hat OpenShift
- Azure Container Registry
- Azure Container Storage with Cloud Volumes ONTAP
Azure Kubernetes Service (AKS)
Azure Kubernetes Service (AKS) offers a managed container orchestrator in the Azure cloud. It is based on Kubernetes, which is a popular open-source container orchestration platform originally designed by Google.
In recent years, Kubernetes has become the de facto standard container orchestrator. However, while the platform provides many features and capabilities, it also comes with a steep learning curve and complex management requirements.
AKS helps organizations manage critical functionalities of containers and container-based applications—including deployment, management and scaling—without having to deal with the complexities associated with self-managed Kubernetes deployments.
AKS can handle most of the overhead involved in managing clusters, reducing the management and deployment effort. The service is designed especially for organizations interested in building scalable applications with Kubernetes and Docker on Azure.
You can create AKS clusters via the Azure portal, Azure command-line interface (CLI), or Azure PowerShell. You can leverage Azure Resource Manager templates to create template-driven deployments.
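As a rough sketch of the CLI path, the following Azure CLI commands create a small AKS cluster and connect kubectl to it. The resource group, cluster name, and region are placeholder values you would replace with your own:

```shell
# Create a resource group to hold the cluster (placeholder name and region)
az group create --name myResourceGroup --location eastus

# Create a two-node AKS cluster; SSH keys are generated if none exist
az aks create \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --node-count 2 \
  --generate-ssh-keys

# Merge the cluster's credentials into your kubeconfig, then verify access
az aks get-credentials --resource-group myResourceGroup --name myAKSCluster
kubectl get nodes
```

The same deployment can be expressed declaratively as an Azure Resource Manager template if you prefer template-driven, repeatable rollouts.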
Related content: Read our blog How To Configure Persistent Volumes for Containers in AKS
Azure Container Instances (ACI)
ACI is an alternative to running virtual machines in the Azure cloud. It allows users to run both Windows and Linux containers without needing to manage or provision the underlying infrastructure, and without using Kubernetes or other container orchestrators.
ACI lets you manage containers using a graphical user interface (the Azure portal) or the Azure CLI, and works with regular Docker images.
Here are other notable features of Azure Container Instances:
- Public IP connectivity—ACI makes containers externally accessible via a public IP and an FQDN (fully qualified domain name).
- Customization—ACI lets you define how many computing resources will be allocated to each container.
- Persistent storage—container instances are stateless by default. If your containers need persistent storage, you can mount an Azure Files share.
- Container groups—ACI enables deploying multiple containers as a group, in order to split a functional task between different container images. All the containers share computing, network and storage resources.
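To illustrate the features above, here is a minimal Azure CLI sketch that launches a single container instance with a public IP, a DNS name label, and custom CPU/memory allocation. The resource group, container name, and DNS label are hypothetical placeholders; the image is a Microsoft-published demo image:

```shell
# Run a single public-facing container with explicit resource limits
az container create \
  --resource-group myResourceGroup \
  --name mycontainer \
  --image mcr.microsoft.com/azuredocs/aci-helloworld \
  --cpu 1 \
  --memory 1.5 \
  --ports 80 \
  --ip-address Public \
  --dns-name-label my-aci-demo

# Query the assigned FQDN and current state of the instance
az container show \
  --resource-group myResourceGroup \
  --name mycontainer \
  --query "{FQDN:ipAddress.fqdn,State:instanceView.state}" \
  --output table
```

Once the instance reports a Running state, the application is reachable at the FQDN shown by the second command.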
Azure Service Fabric
Azure Service Fabric offers a distributed systems platform designed to simplify the management, packaging and deployment of containers and microservices. Service Fabric provides a wide range of features that help develop and manage cloud-native applications.
Service Fabric can help you build stateful services. It offers a programming model that lets you use any code or language to run your containerized stateful services. A Service Fabric cluster can be created anywhere, including on-premises Linux, Windows Server, and the Azure cloud, as well as other public cloud environments.
Service Fabric lets you run microservices across multiple computing resources in Azure. By defining services, you can very easily deploy applications at scale, with hundreds or even thousands of containers or applications per machine. Service Fabric makes it possible to combine services running directly in processes, and containerized services, as part of the same application.
Service Fabric offers a lightweight runtime environment, which supports both stateful and stateless microservices. It provides robust support for stateful services, through the use of containerized stateful services or through built-in Service Fabric programming models.
Azure Red Hat OpenShift
OpenShift extends Kubernetes. Running containers in production using Kubernetes demands additional resources and tools. This generally includes having to handle storage management, image registries, and logging and monitoring tools, all of which have to be tested and versioned together.
Developing container-based applications demands even more integration work with databases, CI/CD tools, frameworks and middleware. Azure Red Hat OpenShift uses all these in a single platform, enabling smooth operations for IT teams while providing applications teams with convenient tooling.
Azure Red Hat OpenShift is engineered, supported and operated by both Red Hat and Microsoft. You do not need to operate any virtual machines and no patching is needed. The master nodes, application nodes and infrastructure are updated, patched and monitored for you by Microsoft and Red Hat. Your Azure Red Hat OpenShift clusters run in your Azure subscription and are included on your Azure bill.
You may select your own networking, storage, registry and CI/CD solutions, or you can utilize the built-in solutions for automated container and application builds, source code management, scaling, health management, deployments and the like. Azure Red Hat OpenShift offers a unified sign-on process via Azure Active Directory.
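As a hedged sketch, provisioning an Azure Red Hat OpenShift cluster from the Azure CLI looks roughly like the following. It assumes you have already created a virtual network with two empty subnets (one for master nodes, one for worker nodes); all resource names here are placeholders:

```shell
# Register the resource provider once per subscription
az provider register --namespace Microsoft.RedHatOpenShift --wait

# Create the cluster inside a pre-existing VNet with master/worker subnets
az aro create \
  --resource-group myResourceGroup \
  --name myAROCluster \
  --vnet aro-vnet \
  --master-subnet master-subnet \
  --worker-subnet worker-subnet

# Retrieve the kubeadmin credentials for the OpenShift web console
az aro list-credentials --resource-group myResourceGroup --name myAROCluster
```

Cluster creation typically takes a while, since Red Hat and Microsoft provision and configure the full OpenShift control plane on your behalf.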
Azure Container Registry
However you choose to run your containers in Azure, you can use Azure Container Registry (ACR) to store and manage your images. ACR is a private Docker registry offered as a managed service. It is based on Docker Registry 2.0, which is an open-source registry. You can use ACR to create and maintain container registries.
You can pull images from your private registry to deployment targets such as:
- Scalable orchestration systems—e.g., Kubernetes, Docker Swarm and DC/OS.
- Azure services that let you build and run applications at scale—e.g., Azure Kubernetes Service (AKS), Batch, App Service and Service Fabric.
ACR also supports automatically pushing artifacts to your registry as part of a development process. It integrates with CI/CD tools like Jenkins and Azure Pipelines. It also lets you automate and orchestrate tasks like patching, building, and testing container images.
To manage Azure container registries, you can use several native tools, including the Azure CLI, the Azure portal, and the registry's REST APIs.
Azure Container Storage with Cloud Volumes ONTAP
NetApp Cloud Volumes ONTAP, the leading enterprise-grade storage management solution, delivers secure, proven storage management services on AWS, Azure and Google Cloud. Cloud Volumes ONTAP capacity can scale into the petabytes, and it supports various use cases such as file services, databases, DevOps or any other enterprise workload, with a strong set of features including high availability, data protection, storage efficiencies, Kubernetes integration, and more.
In particular, Cloud Volumes ONTAP supports Kubernetes Persistent Volume provisioning and management requirements of containerized workloads.
Learn more about how Cloud Volumes ONTAP helps to address the challenges of containerized applications in these Kubernetes Workloads with Cloud Volumes ONTAP Case Studies.