Containers are an effective way for developers to package and deploy their applications. They are lightweight and offer a portable, consistent software environment that lets applications run and scale anywhere. Common use cases for containers include building and deploying microservices, running batch jobs for machine learning applications, and moving existing applications into the cloud.
Containers go hand in hand with cloud infrastructure. Amazon Web Services (AWS) provides strong support for containers, with three dedicated cloud services that can help you run containerized applications: AWS Fargate, Amazon Elastic Container Service (ECS), and Amazon Elastic Kubernetes Service (EKS), which lets you run Kubernetes clusters on AWS.
AWS provides several features that make it easier and more secure to run containerized applications in the cloud. These include:
AWS offers 210 compliance, governance, and security services and key features, approximately 40 times more than the next-largest cloud providers. AWS provides security isolation between your containers, lets you set granular access permissions for each container, and helps ensure you are running the latest security updates.
AWS container services run on a global infrastructure with 69 Availability Zones (AZs) across 22 Regions, more than twice as many Regions with multiple AZs as the next-largest cloud providers. All AWS container services (EKS, Fargate, and ECS) are backed by SLAs.
AWS offers a choice of several services you can use to run containers, so you can pick the option that best fits your workload.
AWS container services are deeply integrated with the rest of AWS. This lets your containerized applications take advantage of the breadth and depth of the AWS cloud for security, networking, monitoring, and more, combining the security and elasticity of the cloud with the agility of containers.
Amazon ECS is a fast, highly scalable container management service you can use to run, stop, and manage containers on a cluster. Your containers are defined in a task definition, which you use to run individual tasks or tasks within a service. A service is a configuration that runs and maintains a specified number of tasks simultaneously within a cluster.
You can run your tasks and services on serverless infrastructure managed by AWS Fargate. If you need more control over your infrastructure, you can instead run your tasks and services on a cluster of Amazon EC2 instances that you manage.
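To make these concepts concrete, here is a minimal sketch using the AWS SDK for Python (boto3). The cluster name, container image, subnet, and security group IDs are placeholders you would replace with your own resources. It registers a simple task definition and creates a service that keeps two copies of the task running.

```python
import boto3

ecs = boto3.client("ecs", region_name="us-east-1")  # example Region

# Register a task definition: the blueprint describing the container(s) to run.
task_def = ecs.register_task_definition(
    family="web-app",                        # placeholder family name
    networkMode="awsvpc",
    requiresCompatibilities=["FARGATE", "EC2"],
    cpu="256",                               # task-level CPU (0.25 vCPU)
    memory="512",                            # task-level memory (MiB)
    containerDefinitions=[{
        "name": "web",
        "image": "nginx:latest",             # placeholder image
        "essential": True,
        "portMappings": [{"containerPort": 80, "protocol": "tcp"}],
    }],
)

# Create a service that keeps two copies of the task running in the cluster.
ecs.create_service(
    cluster="demo-cluster",                  # placeholder cluster name
    serviceName="web-service",
    taskDefinition=task_def["taskDefinition"]["taskDefinitionArn"],
    desiredCount=2,
    launchType="FARGATE",                    # or "EC2" for self-managed instances
    networkConfiguration={"awsvpcConfiguration": {
        "subnets": ["subnet-0123456789abcdef0"],       # placeholder subnet
        "securityGroups": ["sg-0123456789abcdef0"],    # placeholder security group
        "assignPublicIp": "ENABLED",
    }},
)
```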
Amazon ECS lets you launch and stop your container-based applications through simple API calls. You can also query the state of your cluster from a centralized service and take advantage of many familiar Amazon EC2 features.
You can schedule the placement of containers across your cluster according to your resource needs, isolation policies, and availability requirements. With Amazon ECS, you don't need to operate your own cluster management and configuration management systems or deal with scaling your management infrastructure.
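As an illustration of these API calls, the following hedged boto3 sketch launches a one-off task with a placement strategy that spreads tasks across Availability Zones (placement strategies apply to the EC2 launch type), inspects the cluster, and then stops the task. The cluster name, task definition family, and constraint expression are example values.

```python
import boto3

ecs = boto3.client("ecs", region_name="us-east-1")

# Launch a one-off task; placement strategies apply to the EC2 launch type.
response = ecs.run_task(
    cluster="demo-cluster",                      # placeholder cluster
    taskDefinition="web-app",                    # family registered earlier
    launchType="EC2",
    count=1,
    placementStrategy=[
        {"type": "spread", "field": "attribute:ecs.availability-zone"},
    ],
    placementConstraints=[
        {"type": "memberOf", "expression": "attribute:ecs.instance-type =~ t3.*"},
    ],
)
task_arn = response["tasks"][0]["taskArn"]

# Check the state of the cluster from a single API call.
cluster = ecs.describe_clusters(clusters=["demo-cluster"])["clusters"][0]
print(cluster["status"], cluster["runningTasksCount"], cluster["pendingTasksCount"])

# Stop the task when it is no longer needed.
ecs.stop_task(cluster="demo-cluster", task=task_arn, reason="example shutdown")
```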
Amazon ECS can be used to create a consistent build and deployment experience, to build sophisticated application architectures in a microservices model, and to manage and scale batch and Extract-Transform-Load (ETL) workloads.
The AWS container services team makes available a public roadmap on GitHub. This roadmap outlines what teams are working on and lets AWS customers provide feedback.
Learn more in our detailed guide to AWS ECS
AWS Fargate works in tandem with Amazon ECS to run containers without requiring you to manage servers or clusters of Amazon EC2 instances. With Fargate, you don't need to provision, configure, or scale clusters of virtual machines to run containers, so you don't have to select server types, decide when to scale your clusters, or optimize cluster packing.
Organizations can run their Amazon ECS tasks and services with the Fargate launch type or a Fargate capacity provider. They package their application in containers, specify the CPU and memory requirements, define networking and IAM policies, and launch the application. Each Fargate task runs in its own isolation boundary and does not share the underlying kernel, CPU resources, memory resources, or elastic network interface with any other task.
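The boto3 sketch below shows roughly what this looks like in practice, assuming placeholder values for the execution role ARN, image, subnets, and security group: a Fargate-compatible task definition with task-level CPU and memory, launched with the Fargate launch type and awsvpc networking.

```python
import boto3

ecs = boto3.client("ecs", region_name="us-east-1")

# A Fargate task definition uses awsvpc networking and declares task-level
# CPU and memory; the execution role lets ECS pull images and write logs.
ecs.register_task_definition(
    family="batch-worker",                                   # placeholder family
    requiresCompatibilities=["FARGATE"],
    networkMode="awsvpc",
    cpu="512",                                               # 0.5 vCPU
    memory="1024",                                           # 1 GiB
    executionRoleArn="arn:aws:iam::123456789012:role/ecsTaskExecutionRole",  # placeholder
    containerDefinitions=[{
        "name": "worker",
        "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/worker:latest",  # placeholder
        "essential": True,
    }],
)

# Run the task on Fargate; each task gets its own elastic network interface.
ecs.run_task(
    cluster="demo-cluster",                                  # placeholder cluster
    taskDefinition="batch-worker",
    launchType="FARGATE",
    count=1,
    networkConfiguration={"awsvpcConfiguration": {
        "subnets": ["subnet-0123456789abcdef0"],             # placeholder subnet
        "securityGroups": ["sg-0123456789abcdef0"],          # placeholder security group
        "assignPublicIp": "DISABLED",
    }},
)
```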
The following diagram outlines the architecture of an Amazon ECS environment run on AWS Fargate.
Amazon EKS is a managed service you can use to run Kubernetes on AWS without having to install, operate, or maintain your own Kubernetes control plane or nodes. Kubernetes is an open-source system for automating the deployment, scaling, and management of containerized applications.
Amazon EKS key capabilities:
Amazon EKS runs a single-tenant Kubernetes control plane for each cluster. The control plane infrastructure is not shared across clusters or AWS accounts. The control plane consists of at least two API server instances and three etcd instances that run across three Availability Zones within a Region.
Amazon EKS automatically manages the scaling and availability of these control plane instances.
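As a minimal sketch of how you interact with this managed control plane, the boto3 example below creates an EKS cluster and reads back its status and endpoint. The IAM role ARN, subnet IDs, and Kubernetes version are example values you would replace with your own.

```python
import boto3

eks = boto3.client("eks", region_name="us-east-1")

# Ask AWS to provision the managed, single-tenant control plane.
eks.create_cluster(
    name="demo-eks",                                          # placeholder cluster name
    version="1.29",                                           # example Kubernetes version
    roleArn="arn:aws:iam::123456789012:role/eksClusterRole",  # placeholder cluster role
    resourcesVpcConfig={
        "subnetIds": ["subnet-0123456789abcdef0",             # placeholder subnets
                      "subnet-0fedcba9876543210"],            # in different AZs
    },
)

# Check the cluster; status becomes ACTIVE once the control plane is ready.
cluster = eks.describe_cluster(name="demo-eks")["cluster"]
print(cluster["status"], cluster.get("endpoint"))
```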
Amazon EKS uses Amazon VPC network policies to restrict traffic between control plane components to within a single cluster. Control plane components for a cluster cannot view or receive communication from other clusters or other AWS accounts, except as authorized by Kubernetes RBAC policies. This secure and highly available configuration makes Amazon EKS reliable and suitable for production workloads.
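The network policies described above are managed by AWS internally, but a related setting you can control is how the cluster's API server endpoint is reached. The hedged boto3 sketch below reads the current endpoint access configuration and restricts it to private, in-VPC access; the cluster name is a placeholder.

```python
import boto3

eks = boto3.client("eks", region_name="us-east-1")

# Read the current API server endpoint access settings.
vpc_cfg = eks.describe_cluster(name="demo-eks")["cluster"]["resourcesVpcConfig"]
print(vpc_cfg["endpointPublicAccess"], vpc_cfg["endpointPrivateAccess"])

# Restrict the control plane endpoint to private (in-VPC) access only.
eks.update_cluster_config(
    name="demo-eks",                                   # placeholder cluster name
    resourcesVpcConfig={
        "endpointPublicAccess": False,
        "endpointPrivateAccess": True,
    },
)
```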
Learn more in our detailed guide to AWS EKS architecture
NetApp Cloud Volumes ONTAP, the leading enterprise-grade storage management solution, delivers secure, proven storage management services on AWS, Azure and Google Cloud. Cloud Volumes ONTAP capacity can scale into the petabytes, and it supports various use cases such as file services, databases, DevOps or any other enterprise workload, with a strong set of features including high availability, data protection, storage efficiencies, Kubernetes integration, and more.
In particular, Cloud Volumes ONTAP supports Kubernetes Persistent Volume provisioning and management requirements of containerized workloads.
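For example, a containerized workload typically requests storage through a Kubernetes PersistentVolumeClaim. The sketch below uses the official Kubernetes Python client with a hypothetical storage class name (ontap-nas, a name commonly used with NetApp Trident) that would be backed by Cloud Volumes ONTAP.

```python
from kubernetes import client, config

config.load_kube_config()  # assumes a kubeconfig pointing at your cluster

# A PersistentVolumeClaim that a Trident-managed storage class (name is
# hypothetical here) could satisfy with a Cloud Volumes ONTAP volume.
pvc = {
    "apiVersion": "v1",
    "kind": "PersistentVolumeClaim",
    "metadata": {"name": "app-data"},
    "spec": {
        "accessModes": ["ReadWriteMany"],
        "storageClassName": "ontap-nas",          # hypothetical Trident class
        "resources": {"requests": {"storage": "10Gi"}},
    },
}

client.CoreV1Api().create_namespaced_persistent_volume_claim(
    namespace="default", body=pvc
)
```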
Learn more about how Cloud Volumes ONTAP helps to address the challenges of containerized applications in these Kubernetes Workloads with Cloud Volumes ONTAP Case Studies.