What is Azure Container Instances (ACI)?
Azure Container Instances (ACI) offers an easy way to run containers in the Azure cloud, eliminating the need to manage virtual machines (VMs) or use more complex container orchestration services.
ACI is based on a serverless model (like the comparable AWS service, Amazon Fargate). It starts containers in the Azure cloud in seconds. It is ideal for simple container-based workloads like smaller-scale apps, build jobs, and task automation.
While ACI does not require the use of Kubernetes or other orchestrators, it does support them, and can be used together with plain Kubernetes or Azure Kubernetes Service.
Learn more in our in-depth guide to Azure Container Instances (ACI)
What is Azure Kubernetes Service (AKS)?
Azure Kubernetes Service (AKS) simplifies the deployment of managed Kubernetes in Azure.
AKS handles most of the complexity and operational tasks related to managing Kubernetes—including tasks like health monitoring, upgrades, and networking. AKS manages Kubernetes master nodes, while customers manage and maintain agent nodes.
AKS is a free managed service. Customers are only required to pay for agent nodes used by the clusters. There is no need to pay for any of the masters, which are configured and deployed by AKS.
In this article, you will learn:
- Azure Container Instances vs AKS
- When to Use Azure Container Instances
- When to Use Azure Kubernetes Service
- Azure Kubernetes Storage with Cloud Volumes ONTAP
Azure Container Instances vs AKS
ACI bills you for the time each container group runs. Container groups represent a certain number of vCPUs and memory resources that can be used by one or more containers.
For example, for Linux container groups running in the Central US region, the price per second is $0.0000015 per GB of memory and $0.0000135 per vCPU. So, for a container group using 10 vCPUs and 100 GB of RAM, you will pay about $0.54 per hour for memory resources and about $0.49 per hour for vCPUs, or a total of roughly $1.03 per hour.
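The arithmetic above can be sketched as a small calculation. The per-second rates are the ones quoted in this article for the Central US region; actual Azure pricing varies by region and changes over time, so treat them as illustrative assumptions.

```python
# Rough ACI cost estimate for a Linux container group.
# Rates are the illustrative per-second prices quoted in the text,
# not authoritative current Azure pricing.
MEMORY_RATE_PER_GB_SECOND = 0.0000015  # USD per GB of memory per second
VCPU_RATE_PER_SECOND = 0.0000135       # USD per vCPU per second

def aci_hourly_cost(vcpus: float, memory_gb: float) -> float:
    """Estimated USD cost of running a container group for one hour."""
    seconds = 3600
    memory_cost = memory_gb * MEMORY_RATE_PER_GB_SECOND * seconds
    vcpu_cost = vcpus * VCPU_RATE_PER_SECOND * seconds
    return memory_cost + vcpu_cost

# The example from the text: 10 vCPUs and 100 GB of RAM.
print(round(aci_hourly_cost(vcpus=10, memory_gb=100), 3))  # ≈ 1.026
```

Because ACI bills per second, short-lived jobs cost proportionally less: the same container group running for five minutes would cost about a twelfth of the hourly figure.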
AKS manages your hosted Kubernetes environment at no cost, and only bills for VMs that run your worker nodes, as well as storage and networking resources used by your clusters. The costs will be the same as running the same VMs without AKS. To estimate costs, you need to determine the type of VMs you will run in your clusters, the number of nodes needed, and the duration they will run. See up-to-date pricing for Azure VMs.
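Since the AKS control plane is free, a back-of-the-envelope cluster estimate is simply node count × VM hourly rate × hours. A minimal sketch, assuming a hypothetical $0.10/hour VM rate (a placeholder, not a quoted Azure price) and ignoring storage and networking:

```python
# Back-of-the-envelope AKS cost estimate: the managed control plane is free,
# so cluster cost is driven by the worker-node VMs. Storage and networking
# charges are ignored here, and the VM rate is a placeholder assumption --
# look up current Azure VM pricing for real figures.
def aks_monthly_node_cost(node_count: int, vm_hourly_rate: float,
                          hours_per_month: int = 730) -> float:
    """Estimated monthly USD cost of the worker nodes in an AKS cluster."""
    return node_count * vm_hourly_rate * hours_per_month

# e.g. a 3-node cluster of VMs at a hypothetical $0.10/hour
print(round(aks_monthly_node_cost(3, 0.10), 2))  # 219.0
```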
ACI scales using container groups—a collection of containers running on the same host. Containers in a container group share lifecycles, resources, local networks, and storage volumes. This is similar to a Kubernetes pod.
AKS leverages the scaling capabilities within Kubernetes. You can scale your AKS pods manually or use horizontal pod autoscaling (HPA) to automatically scale and adjust the number of pods in your deployment based on CPU utilization or other selected metrics.
ACI offers access to Azure Virtual Networks, which provide private and secure networking for Azure resources as well as on-premises workloads. ACI lets you deploy container groups into Virtual Networks, which provides secure communication between ACI containers and:
- Other container groups in the same subnet
- Databases in the same Virtual Network
- On-prem resources accessed via VPN gateway or ExpressRoute
AKS lets you enjoy all the security features of native Kubernetes, with Azure capabilities like network security groups and orchestrated cluster upgrades. Regularly updating software components is critical for security—AKS automatically ensures your clusters are running the latest version of operating systems and Kubernetes, including security patches. AKS also secures access to sensitive credentials and pod traffic.
When to Use Azure Container Instances
ACI is very useful for scenarios that require separate containers without orchestration—such as simple applications, task automation, and CI/CD pipelines. Prefer to use ACI if you need one or more of the following capabilities:
- Ease of use—easy to run and manage containers without the complexity of Kubernetes.
- Fast container start—ACI containers start in seconds.
- Custom sizing—ability to define exactly how much RAM and CPU resources containers will be able to access.
- Persistent storage—ability to persist state by directly mounting Azure file shares on containers, without the complexity of Kubernetes Persistent Volumes (PV).
- No need for full orchestration—ACI is useful for scenarios that do not require capabilities like service discovery, coordinated upgrades, or autoscaling. Note that if you do need these capabilities, you can use ACI in combination with AKS or another orchestrator.
When to Use Azure Kubernetes Service
If one or more of the following features are important for you, prefer AKS to ACI, or use AKS in addition to ACI.
Existing security groups and Azure AD
If you are already using Azure Active Directory (Azure AD), you can set up AKS clusters to integrate with it and reuse existing group memberships and identities.
Integrated logging and monitoring
AKS offers built-in monitoring. Azure Monitor for containers helps you gain visibility into the performance of your clusters. A self-hosted Kubernetes installation, or ACI without Kubernetes, requires manual installation and configuration of a monitoring solution.
Node and pod scaling
Scaling containerized environments can be complex. To help make the process simpler and easier, AKS offers two auto cluster scaling options—the horizontal pod autoscaler (HPA) and the cluster autoscaler. The horizontal pod autoscaler tracks the demand on pods and adds pods to meet demand. The cluster autoscaler tracks pods that cannot be scheduled due to node constraints, and automatically scales cluster nodes, allowing more pods to be added.
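The HPA's scaling decision can be sketched with the formula from the Kubernetes documentation: desiredReplicas = ceil(currentReplicas × currentMetricValue / targetMetricValue). A minimal illustration:

```python
import math

def hpa_desired_replicas(current_replicas: int,
                         current_metric: float,
                         target_metric: float) -> int:
    """Core horizontal pod autoscaler calculation:
    desired = ceil(current_replicas * current_metric / target_metric).
    (The real HPA also applies tolerances, stabilization windows,
    and min/max replica bounds, omitted here.)"""
    return math.ceil(current_replicas * (current_metric / target_metric))

# 4 pods averaging 80% CPU against a 50% target -> scale out to 7
print(hpa_desired_replicas(4, 80, 50))  # 7
# 4 pods averaging 25% CPU against a 50% target -> scale in to 2
print(hpa_desired_replicas(4, 25, 50))  # 2
```

The cluster autoscaler then handles the node side: when newly desired pods cannot be scheduled for lack of node capacity, it adds nodes until they fit.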
Cluster node upgrades
AKS is responsible for managing Kubernetes software upgrades, cordoning off and draining nodes to minimize disruption to applications that are currently running. Nodes are then upgraded one after the other. This helps significantly reduce cluster management overhead.
Enabling GPU hardware acceleration
AKS lets you run GPU-enabled Kubernetes nodes. This capability ensures you can run graphics-intensive or machine learning workloads on AKS.
Storage volume support
AKS supports both dynamic and static storage volumes. You can attach and reattach pods to storage volumes as they are created or rescheduled. This lets you run stateful applications that need persistent storage.
Ingress with HTTP application routing support
This option lets you easily provide access to applications deployed in AKS clusters and make them publicly available.
Azure Kubernetes Storage with Cloud Volumes ONTAP
NetApp Cloud Volumes ONTAP, the leading enterprise-grade storage management solution, delivers secure, proven storage management services on AWS, Azure and Google Cloud. Cloud Volumes ONTAP capacity can scale into the petabytes, and it supports various use cases such as file services, databases, DevOps or any other enterprise workload, with a strong set of features including high availability, data protection, storage efficiencies, Kubernetes integration, and more.
In particular, Cloud Volumes ONTAP supports Kubernetes Persistent Volume provisioning and management requirements of containerized workloads.
Learn more about how Cloud Volumes ONTAP helps to address the challenges of containerized applications in these Kubernetes Workloads with Cloud Volumes ONTAP Case Studies.