Azure Container Instances (ACI) is a managed service that allows you to run containers directly on the Microsoft Azure public cloud, without requiring the use of virtual machines (VMs).
With Azure Container Instances, you don’t have to provision underlying infrastructure or use higher-level services to manage containers. ACI provides basic capabilities for managing a group of containers on a host machine, and it can be combined with full container orchestrators like Kubernetes for more advanced tasks such as coordinated upgrades, service discovery, and automated scaling.
This is part of our series of articles about Kubernetes in Azure.
In this article, you will learn:
ACI provides direct control over containers, with no need to configure cloud virtual machines (VMs) or implement container orchestration platforms like Kubernetes. Key features include:
When deploying containers at large scale, it is common to use container orchestrators such as Kubernetes, Nomad, and Docker Swarm. These tools automate and manage the interactions between containers, along with concerns like resource provisioning, networking, and storage management.
Azure Container Instances provides some of the basic features of container orchestrators, but by design it is not intended to be a full orchestration platform.
A full orchestration platform manages and automates tasks like scheduling containers, managing affinity, monitoring health, enabling failover, auto scaling, networking, service discovery, and application upgrades and rollbacks.
ACI uses a layered approach, performing all the management functions needed to run a single container. On top of these basic capabilities, orchestrators can manage activities related to multiple containers.
Because the container instance's infrastructure is managed by Azure, the orchestrator doesn't need to worry about finding the right host to run a single container. The elasticity of the cloud ensures hosts are always available. Instead, the orchestrator can focus on simplifying multi-container tasks such as scaling, high availability, and upgrades.
For long-term stable workloads, container scaling on a dedicated virtual machine cluster is usually cheaper than running the same containers on Azure Container Instances. However, container instances provide a good solution for rapid changes in overall capacity to cope with unexpected or short-term usage peaks.
For applications that experience sharp fluctuations in demand, you would typically scale up the number of virtual machines in the cluster and then deploy containers on those machines. ACI makes this simpler by letting the orchestrator deploy new containers directly on Azure Container Instances and terminate them when they are no longer needed.
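If you use Azure Kubernetes Service, one way to enable this bursting pattern is the virtual node add-on, which schedules pods onto ACI. The command below is a minimal sketch; it assumes an existing cluster with the hypothetical name myAKSCluster whose virtual network already contains a dedicated subnet named myVirtualNodeSubnet:
az aks enable-addons --resource-group [RESOURCE GROUP] --name myAKSCluster --addons virtual-node --subnet-name myVirtualNodeSubnet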
Related content: Azure Kubernetes Service Tutorial: How to Integrate AKS with Azure Container Instances
To get started with Azure Container Instances, you first need an Azure resource group in which to deploy a container instance.
The Azure CLI lets you use common Azure functions through a command-line utility. You can run it in Azure Cloud Shell, accessed directly from the Azure portal by clicking the “>_” icon in the top navigation bar, or install the CLI on a local machine.
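If you don’t have a resource group yet, you can create one with a single command. This is a minimal example; myResourceGroup and the eastus region are placeholder choices:
az group create --name myResourceGroup --location eastus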
To create a Linux container:
az container create --name helloworld --image microsoft/aci-helloworld --ip-address public -g [RESOURCE GROUP]
Azure immediately spins up a container instance.
1. Type the following in the command prompt:
az container show --name helloworld -g [RESOURCE GROUP]
2. In the resulting output, check whether the container instance’s provisioning state is “Succeeded” (see the query example after this list).
3. Visit the public IP address shown. You should see a page that says “Welcome to Azure Container Instances”.
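To check the provisioning state and public IP address in a single step, az container show also accepts a JMESPath --query. The sketch below assumes the provisioningState and ipAddress.ip fields of the container group’s JSON output:
az container show --name helloworld -g [RESOURCE GROUP] --query "{State:provisioningState, IP:ipAddress.ip}" -o table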
By default, a container instance created this way gets a single CPU core, 1.5 GB of memory, and port 80 open on its public IP address. To override these defaults, adjust the parameters in the create command. For example, for a container instance with 2 CPU cores and 3 GB of memory, use:
az container create --name helloworld --image microsoft/aci-helloworld --cpu 2 --memory 3 --ip-address public -g [RESOURCE GROUP]
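If you prefer a stable DNS name over a raw IP address, az container create also accepts a --dns-name-label. The label below, aci-helloworld-demo, is a hypothetical example; it must be unique within the Azure region, where it resolves as aci-helloworld-demo.[REGION].azurecontainer.io:
az container create --name helloworld --image microsoft/aci-helloworld --dns-name-label aci-helloworld-demo --ports 80 --ip-address public -g [RESOURCE GROUP]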
Azure CLI can generate a table listing all container instances in a given resource group, showing key details for each, such as its name, status, image, and IP address.
To list running containers, use the following command:
az container list -g [RESOURCE GROUP] -o table
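When you are finished experimenting, you can delete the container instance so it stops accruing charges; the --yes flag skips the confirmation prompt:
az container delete --name helloworld -g [RESOURCE GROUP] --yes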
NetApp Cloud Volumes ONTAP, the leading enterprise-grade storage management solution, delivers secure, proven storage management services on AWS, Azure and Google Cloud. Cloud Volumes ONTAP capacity can scale into the petabytes, and it supports various use cases such as file services, databases, DevOps or any other enterprise workload, with a strong set of features including high availability, data protection, storage efficiencies, Kubernetes integration, and more.
Azure NetApp Files (ANF) is a fully managed cloud service with Azure portal integration and access via REST API and Azure SDKs. Customers can seamlessly migrate and run applications in the cloud without worrying about procuring or managing storage infrastructure.
In particular, Cloud Volumes ONTAP supports the Kubernetes persistent volume provisioning and management requirements of containerized workloads.
Learn more about how Cloud Volumes ONTAP helps to address the challenges of containerized applications in these Kubernetes Workloads with Cloud Volumes ONTAP Case Studies.