Ansible enables you to automate cloud deployments. You can use Ansible to manage applications and services using automation playbooks. Each playbook defines a set of configurations that can be applied consistently across cloud environments.
In this post, we’ll explain how to use Ansible modules with AWS, and quickly walk you through the process of automating Ansible playbook deployments with Amazon EC2 and GitHub. We’ll also explain how NetApp Cloud Volumes ONTAP can help simplify storage when using Infrastructure as Code in AWS.
Ansible is an open source tool that you can use to automate your AWS deployments. You can use it to define, deploy, and manage applications and services using automation playbooks. These playbooks enable you to define configurations once and deploy those configurations consistently across environments.
Safe automation
Another benefit of using Ansible is ensuring safe automation. Misconfigurations are a major vulnerability in cloud environments, but automation can help you ensure that only permitted configurations are deployed. However, you don’t want everyone on your team to be able to automatically deploy anything they want.
To prevent this, Ansible offers Ansible Tower, a web-based UI that you can use to define role-based access controls (RBAC), monitor deployments, and audit events. It enables you to set and authorize user actions at a granular level. Ansible Tower also includes features for encrypting credentials and data.
Ansible modules supporting AWS
When using Ansible, there are dozens of modules you can choose from that support AWS services. These modules include functionality for features such as EC2 provisioning, autoscaling groups, and CloudFormation stacks.
For additional ways to automate AWS infrastructure, see our article about deploying Terraform on AWS.
A few of these modules are used in most Ansible AWS deployments, and you should know how to use them. The most common are introduced below, with instructions to get you started.
To authenticate with AWS-related modules, you need to specify your access and secret keys, either as module arguments or as environment (ENV) variables.
To store as module arguments:
This method involves storing your keys in a vars_file. This file should be encrypted with ansible-vault for security.
---
ec2_access_key: "--REMOVED--"
ec2_secret_key: "--REMOVED--"
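To encrypt the vars_file with ansible-vault, run a command like the following (the file name is a placeholder):
ansible-vault encrypt aws_keys.yml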
Keep in mind that if you store keys as arguments, you need to reference them for each service. You can see an example of this below:
- ec2:
    aws_access_key: "{{ ec2_access_key }}"
    aws_secret_key: "{{ ec2_secret_key }}"
    image: "..."
To store as environment variables:
export AWS_ACCESS_KEY_ID='AK123'
export AWS_SECRET_ACCESS_KEY='abc123'
After your hosts are provisioned, you need to establish communications. You can do this manually, but it creates significantly more work. A better alternative is to use the EC2 Dynamic Inventory script.
This script queries the EC2 API to dynamically discover hosts, regardless of where they were created, and automatically maps them into your inventory.
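A rough sketch of putting the script to work (the download URLs point to the contrib/inventory location in the Ansible 2.9 source tree, which is an assumption; adjust them to match your Ansible version):
# Fetch the EC2 dynamic inventory script and its configuration file
wget https://raw.githubusercontent.com/ansible/ansible/stable-2.9/contrib/inventory/ec2.py
wget https://raw.githubusercontent.com/ansible/ansible/stable-2.9/contrib/inventory/ec2.ini
chmod +x ec2.py
# Point Ansible at the script as its inventory source
ansible -i ec2.py all -m ping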
Tags, groups, and variables
After you run the dynamic inventory script, hosts are also automatically grouped according to how they are tagged in EC2.
For example, a host tagged with a class of ‘webserver’ can be automatically targeted with the following playbook:
- hosts: tag_class_webserver
  tasks:
    - ping
You can leverage this functionality to group systems by function and simplify management. You can also enhance it with ‘group_vars’: variables that Ansible applies to every host in a group, letting you define settings once per group rather than per host.
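For example, a minimal sketch (the file name and variables are hypothetical; the file name must match a group produced by your inventory):
# group_vars/tag_class_webserver.yml
# Applied automatically to every host in the tag_class_webserver group
http_port: 80
max_clients: 200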
To autoscale your resources, you can either use the built-in Amazon autoscaling features or you can use Ansible modules. These modules can configure your autoscaling policies and grant finer control.
One tool you can use is ansible-pull, a command-line program that fetches and runs playbooks. To apply this to autoscaling, you can create images with a built-in ansible-pull invocation. Then, when a host comes online, it automatically pulls and runs your autoscaling playbook, eliminating the need to wait for the next Ansible command cycle.
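A minimal sketch of this pattern as a user-data script baked into the image (the repository and playbook names are placeholders):
#!/bin/bash
# Runs on first boot: fetch the repository and apply the playbook locally
ansible-pull -U https://github.com/<GitHubUser>/<repo-name>.git <playbook>.yml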
CloudFormation is a native Amazon service that you can use to define your cloud resource stack as a JSON document. It can provide essentially the same functionality as Ansible but it has a much steeper learning curve. Because of this, it is often easier to use Ansible modules.
In some cases, however, you may want to use CloudFormation and Ansible together. There are modules for this as well, such as the cloudformation module, which abstracts the application of CloudFormation templates. These modules enable you, for example, to use Ansible to build images and then launch those images with CloudFormation.
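For example, a minimal sketch of a task using the cloudformation module (the stack name, region, and template path are assumptions):
- cloudformation:
    stack_name: "ansible-demo-stack"
    state: present
    region: us-east-1
    template: "files/demo-template.json"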
In the below walkthrough, you’ll learn how to automate an Ansible playbook deployment using EC2 and GitHub. This is a good way to get familiar with how Ansible interacts with AWS services like EC2. However, before you get started, you should be familiar with both AWS and Ansible separately. This walkthrough was adapted from a longer tutorial which you can view here.
Prerequisites
Before you get started, make sure you have an AWS account with a running Amazon Linux EC2 instance, and a GitHub account with a repository containing the playbook you want to deploy.
To begin, you need to configure your Ansible deployment to use GitHub webhooks. This requires setting up processing for webhooks on your EC2 instance. To do this, you need to route requests to an Express server using NGINX as a reverse proxy.
Use SSH to access your EC2 instance.
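For example, a typical connection looks like this (the key pair file and public IP are placeholders for your own values):
ssh -i <key-pair>.pem ec2-user@<ec2-public-ip>
Then use Amazon Linux Extras to enable the Extra Packages for Enterprise Linux (EPEL) repository: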
amazon-linux-extras install epel
Update your packages:
yum update -y
Install Ansible, NGINX, and Git:
yum install ansible -y
yum install nginx -y
yum install git -y
With these packages installed, you can prepare Node.js and your Express server.
Install Node.js:
curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.34.0/install.sh | bash
. ~/.nvm/nvm.sh
nvm install node
Now, you need to select a location for your Express server. The example below creates a directory called server and installs Express in it.
mkdir server && cd server
npm install express
With Node.js installed and your server ready, you can create a JavaScript file with functions to handle your webhook requests. It should use ansible-pull to pull and run your playbook.yml file from your GitHub repository. The server listens on port 8080; you need to use the same port in both the Express server code and the NGINX configuration.
See the JavaScript file in the full tutorial, step 2. In it, specify the GitHub user and repository where your playbooks are stored; this is required by the ansible-pull command. Replace <GitHubUser>, <repo-name>, and <playbook> with your own details:
exec("ansible-pull -U git@github.com:<GitHubUser>/<repo-name>.git <playbook>.yml")
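For context, here is a minimal sketch of what such a file might look like (the file name app.js matches the run command below; the route and response handling are assumptions, so refer to the full tutorial for the actual code):
// app.js: minimal webhook handler sketch
const express = require("express");
const { exec } = require("child_process");

const app = express();

// GitHub delivers webhooks as HTTP POST requests
app.post("/", (req, res) => {
  // Pull and run the playbook from the repository (placeholders as above)
  exec("ansible-pull -U git@github.com:<GitHubUser>/<repo-name>.git <playbook>.yml",
    (err, stdout, stderr) => {
      if (err) {
        console.error(stderr);
        return res.status(500).send("ansible-pull failed");
      }
      console.log(stdout);
      res.send("Playbook executed");
    });
});

// Listen on the port that NGINX proxies to
app.listen(8080, () => console.log("Webhook server listening on port 8080"));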
Finally, run the Express server:
node app.js
With your server running, you are ready to set up your deployment key. You will use this deployment key later in the procedure.
Create an SSH key on your instance.
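A minimal sketch (accepting the defaults stores the key pair as id_rsa and id_rsa.pub in the .ssh directory, matching the file referenced later in this walkthrough):
ssh-keygen -t rsa
Then start the SSH agent by running the following command: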
eval "$(ssh-agent -s)"
You should get an output similar to:
Agent pid 1111
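If your new key is not picked up automatically, add it to the agent (the path assumes the default key location):
ssh-add ~/.ssh/id_rsa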
Next, you need to configure NGINX to listen on port 80 and route traffic to the port your Express server listens on. For details, see the full tutorial.
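As a sketch, assuming the configuration lives in a drop-in file such as /etc/nginx/conf.d/webhook.conf (the path is an assumption; see the full tutorial for the exact configuration), the relevant server block proxies port 80 to the Express port:
# /etc/nginx/conf.d/webhook.conf (hypothetical path)
# Forward incoming HTTP traffic on port 80 to the Express server on 8080
server {
    listen 80;

    location / {
        proxy_pass http://localhost:8080;
    }
}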
To start NGINX, use the following commands:
systemctl start nginx
systemctl enable nginx
Finally, you are ready to configure your webhooks on GitHub.
Log into your GitHub account and navigate to your repository's Settings > Deploy keys.
Click Add deploy key.
Go to the .ssh directory where your public key is stored and open the id_rsa.pub file. Copy its contents into the Key field on GitHub. The contents should look something like the following:
ssh-rsa <public-key-body> <your_email@example.com>
Navigate to Webhooks in the Settings menu and click Add webhook.
Enter the public IP address of your EC2 instance in the Payload URL section (for example, http://<ec2-public-ip>/).
GitHub sends a test ping when the webhook is created. Check the Response section to verify that your Express server received the request.
NetApp Cloud Volumes ONTAP, the leading enterprise-grade storage management solution, delivers secure, proven storage management services on AWS, Azure, and Google Cloud. Cloud Volumes ONTAP supports a capacity of up to 368TB and a variety of use cases, such as file services, databases, DevOps, or any other enterprise workload, with a strong set of features including high availability, data protection, and storage efficiencies.
Cloud Manager is completely API driven and is highly geared toward automating cloud operations. Deploying Cloud Volumes ONTAP and Cloud Manager through infrastructure-as-code automation helps address the DevOps challenges organizations face when configuring enterprise cloud storage solutions. When implementing infrastructure as code, Cloud Volumes ONTAP and Cloud Manager go hand in hand with Terraform to achieve the level of efficiency expected in large-scale cloud storage deployments in AWS.