
Azure Container Instance (ACI): The Basics and a Quick Tutorial


What is Azure Container Instance?

Azure Container Instances (ACI) is a managed service that allows you to run containers directly on the Microsoft Azure public cloud, without requiring the use of virtual machines (VMs).

With Azure Container Instances, you don’t have to provision underlying infrastructure or use higher-level services to manage containers. ACI provides basic capabilities for managing a group of containers on a host machine, and it supports the use of full container orchestrators such as Kubernetes for more advanced tasks like coordinated upgrades, service discovery, and automated scaling.

This is part of our series of articles about Kubernetes in Azure.

In this article, you will learn about Azure Container Instance features, how ACI compares with full container orchestrators, and how to get started with a quickstart tutorial.

Azure Container Instance Features

ACI provides direct control over containers, with no need to configure cloud virtual machines (VMs) or implement container orchestration platforms like Kubernetes. Key features include:

  • Support for both Linux and Windows containers
  • Ability to launch new containers through the Azure portal or command line interface (CLI); the underlying compute resources are automatically provisioned and scaled
  • Support for standard Docker images and the use of public container registries, such as Docker Hub, as well as Azure Container Registry
  • Ability to expose containers to the internet with a fully qualified domain name (FQDN) and public IP address
  • Ability to specify the number of CPU cores and the amount of memory required for container instances
  • Support for persistent storage by mounting Azure file shares to the container (see the example after this list)
  • Ability to define container groups that organize multiple containers sharing the same host, storage, and networking resources, similar to the concept of a pod in Kubernetes
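
As a minimal sketch of the persistent storage feature, the az container create command accepts a set of --azure-file-volume-* flags for mounting an existing Azure file share into a container. The storage account, key, share name, and mount path below are placeholders you would replace with your own values:

az container create --name helloworld-files --image microsoft/aci-helloworld --ip-address public -g [RESOURCE GROUP] --azure-file-volume-account-name [STORAGE ACCOUNT] --azure-file-volume-account-key [STORAGE KEY] --azure-file-volume-share-name [FILE SHARE] --azure-file-volume-mount-path /aci/files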

Azure Container Instances and Container Orchestrators

When deploying containers at a large scale, it is common to use container orchestrators, like Kubernetes, Nomad and Docker Swarm. These tools help automate and manage the interaction between containers, and concerns like resource provisioning, networking, and storage management.

Azure Container Instances provides some of the basic features of container orchestrators, but by design it is not intended to be a full orchestration platform.


Traditional orchestrators vs. ACI orchestration

A full orchestration platform manages and automates tasks like scheduling containers, managing affinity, monitoring health, enabling failover, auto scaling, networking, service discovery, and application upgrades and rollbacks.

ACI uses a layered approach, performing all the management functions needed to run a single container. On top of these basic capabilities, orchestrators can manage activities related to multiple containers.

Because the container instance's infrastructure is managed by Azure, the orchestrator doesn't need to worry about finding the right host to run a single container. The elasticity of the cloud ensures hosts are always available. Instead, the orchestrator can focus on simplifying multi-container tasks such as scaling, high availability, and upgrades.


Use cases for ACI vs. traditional orchestrators

For long-term stable workloads, container scaling on a dedicated virtual machine cluster is usually cheaper than running the same containers on Azure Container Instances. However, container instances provide a good solution for rapid changes in overall capacity to cope with unexpected or short-term usage peaks.

For applications that experience sharp fluctuations in demand, you would typically scale up the number of virtual machines in the cluster and then deploy containers onto those machines. ACI makes this simpler by letting the orchestrator deploy new containers directly on Azure Container Instances and terminate them when they are no longer needed.
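
For example, Azure Kubernetes Service (AKS) can burst workloads onto ACI through its virtual nodes add-on. The command below is a sketch that assumes an existing AKS cluster using Azure CNI networking and a dedicated, empty subnet reserved for virtual nodes; the resource group, cluster, and subnet names are placeholders:

az aks enable-addons --resource-group [RESOURCE GROUP] --name [AKS CLUSTER] --addons virtual-node --subnet-name [VIRTUAL NODE SUBNET]

Once the add-on is enabled, pods scheduled onto the virtual node run as container instances, so the cluster gains burst capacity without provisioning additional VMs.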

Related content: Azure Kubernetes Service Tutorial: How to Integrate AKS with Azure Container Instances

Tutorial: Azure Container Instance Quickstart

To get started with Azure Container Instances, you need an Azure resource group in which to deploy your container instances.
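
If you do not already have a resource group, you can create one with the Azure CLI; the group name and region below are placeholders:

az group create --name [RESOURCE GROUP] --location [REGION]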


Azure CLI

The Azure CLI lets you use common Azure functions through a command-line utility. You can run it in Azure Cloud Shell, accessible directly from the Azure portal by clicking the “>_” icon in the top navigation bar, or install the CLI on a local machine.

To create a Linux container:

  1. Open the Azure CLI (in Cloud Shell or a local terminal)
  2. Choose a name for the container and the resource group in which it will run; Azure assigns the public IP address when you request one
  3. Type the following command:
az container create --name helloworld --image microsoft/aci-helloworld --ip-address public -g [RESOURCE GROUP]

Azure immediately spins up a container instance.


To check the status of a new container instance:

1. Type the following in the command prompt:

az container show --name helloworld -g [RESOURCE GROUP]

2. In the resulting output, check that the container instance is in the “Succeeded” provisioning state.

3. Visit the public IP address shown in the output; you should see a page that says “Welcome to Azure Container Instances”
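
Optionally, you can narrow the output of az container show to just the fields you need with a --query expression. For example, the following returns only the provisioning state and public IP address (again using a placeholder resource group):

az container show --name helloworld -g [RESOURCE GROUP] --query "{ProvisioningState:provisioningState, IP:ipAddress.ip}" -o table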


To override defaults for a container instance:

The defaults used when creating a container instance with a public IP address are a single CPU core, 1.5 GB of memory, and port 80. To override these defaults, adjust the parameters of the create command. For example, for a container instance with 2 CPU cores and 3 GB of memory, use:

az container create --name helloworld --image microsoft/aci-helloworld --cpu 2 --memory 3 --ip-address public -g [RESOURCE GROUP]

To list running containers: 

Azure CLI can generate a table listing all containers in a certain resource group, providing the following data for each:

  • Name
  • Image
  • State
  • IP address and port (if assigned)
  • CPU/memory
  • OS type
  • Region

To list running containers, use the following command:

az container list -g [RESOURCE GROUP] -o table
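
When you are finished experimenting, you can view a container’s logs and then remove it. The commands below assume the same helloworld container and placeholder resource group used throughout this tutorial:

az container logs --name helloworld -g [RESOURCE GROUP]

az container delete --name helloworld -g [RESOURCE GROUP]

Note that az container delete prompts for confirmation before removing the container group; add --yes to skip the prompt.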

Azure Container Storage with Cloud Volumes

NetApp Cloud Volumes ONTAP, the leading enterprise-grade storage management solution, delivers secure, proven storage management services on AWS, Azure and Google Cloud. Cloud Volumes ONTAP capacity can scale into the petabytes, and it supports various use cases such as file services, databases, DevOps or any other enterprise workload, with a strong set of features including high availability, data protection, storage efficiencies, Kubernetes integration, and more.

Azure NetApp Files (ANF) is a fully managed cloud service with Azure portal integration and access via REST API and Azure SDKs. Customers can seamlessly migrate and run applications in the cloud without worrying about procuring or managing storage infrastructure.

In particular, Cloud Volumes ONTAP supports Kubernetes Persistent Volume provisioning and management requirements of containerized workloads.

Learn more about how Cloud Volumes ONTAP helps to address the challenges of containerized applications in these Kubernetes Workloads with Cloud Volumes ONTAP Case Studies.


Yifat Perry, Technical Content Manager
