Optimizing AWS Elastic Block Store (AWS EBS) usage is not a simple task. Many factors must be taken into account to actually achieve peak performance on the AWS platform and optimum volume management with AWS EBS.
Key components of Amazon EBS performance optimization include the I/O workload, the network throughput capacity between AWS EBS and EC2 instances, and management of the AWS EBS volume itself.
In general, Amazon EBS optimized volume performance can be measured by how much money you save without losing performance or durability. More specific measurements are IOPS, latency, and throughput.
In this post, we will walk you through three techniques that will assist you in AWS EBS performance optimization and volume management.
1. Resizing AWS EBS Volumes
AWS EBS volumes are very flexible and elastic. What many AWS EBS users don't realize is that having more space than you need is a waste of money and operating system resources.
AWS EBS volumes require provisioning before they are used, and in some situations a user will provision too much or too little space for the volume. This is a situation where the Logical Volume Manager (LVM) comes to the rescue.
Logical Volume Manager, or LVM, is widely used with AWS EBS volumes. It provides a clear and easy way to manage the storage capacity of your architecture by adding or removing AWS EBS volumes as you go.
LVM is an advanced disk management mechanism for virtualizing disk drives. It can create virtual disk partitions out of one or more physical AWS EBS volumes, allowing you to grow, shrink, or move those partitions as your requirements change. It also allows you to create larger partitions than you could achieve with a single AWS EBS volume.
How does it work?
Although this isn't a complete tutorial, the steps below will show you how to create a logical volume (LV). For the sake of simplicity, these steps assume that you already have an AWS EBS volume attached to an EC2 instance at /dev/sdf.
Typically, you start by registering the volume as a "PV", or physical volume. To do that, use the pvcreate command:
pvcreate /dev/sdf
Then, we need to create a "VG", or volume group. We can get that done with the following command:
vgcreate MyNewGroup /dev/sdf
Next, create the LV:
lvcreate -n MyNewVolume -L 20G MyNewGroup
Finally, format the new LV with a file system of your choice and mount it on the EC2 instance, for example:
mkfs.ext4 /dev/MyNewGroup/MyNewVolume
mkdir -p /mnt/data
mount /dev/MyNewGroup/MyNewVolume /mnt/data
You're now one step closer to having your Amazon EBS volumes optimized for higher performance.
2. Configuring AWS EBS RAID 0
As long as the operating system of your EC2 instance supports it, you can use any of the RAID configurations commonly used on traditional servers. This is possible because all the configuration is done at the software layer.
Wondering how to configure a RAID 0?
Follow this straightforward guide and you’ll be covered. But keep in mind that there are a few gotchas that you should be aware of before proceeding with RAID 0.
Let's look at when a RAID 0 configuration is a bad idea as opposed to when it is a good one.
The standard RAID 0 advantage is that it provides n times higher read and write performance, where n is the number of EBS volumes in your RAID array.
For example: two 500 GiB AWS EBS volumes with 4,000 provisioned IOPS each can form a 1,000 GiB RAID 0 array with an available bandwidth of 8,000 IOPS and 640 MBps of throughput, or a 500 GiB RAID 1 array with an available bandwidth of 4,000 IOPS and 320 MBps of throughput.
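That multiplication can be checked with quick shell arithmetic, using the per-volume figures from the example above:

```shell
# RAID 0 aggregates capacity, IOPS, and throughput across member volumes.
volumes=2
size_gib=500   # per-volume size
iops=4000      # per-volume provisioned IOPS
mbps=320       # per-volume throughput
echo "RAID 0: $((volumes * size_gib)) GiB, $((volumes * iops)) IOPS, $((volumes * mbps)) MBps"
# RAID 1 mirrors instead of striping, so capacity and performance stay at one volume's worth.
echo "RAID 1: ${size_gib} GiB, ${iops} IOPS, ${mbps} MBps"
```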
So, if you are looking for a higher level of performance than you can provision on an AWS EBS volume, RAID 0 is the way to go.
However, the gotcha is that RAID 0 configurations offer no data redundancy: losing a single volume in the array means losing the data of the entire array.
For those looking for durability, an alternative to RAID 0 is RAID 1, which addresses fault tolerance by writing to more than one volume at the same time, at the cost of performance: RAID 1 requires more bandwidth than non-RAID configurations, since data is written to multiple volumes simultaneously.
Configurations like RAID 5 and 6 may not work as well as you might initially think. These configurations can backfire on you by lowering performance and increasing costs.
AWS does not recommend RAID 5 and RAID 6 for AWS EBS, since the parity write operations of these modes consume some of the IOPS available to your volumes.
In some cases, RAID 5 and 6 configurations can deliver as much as 30% fewer IOPS than a RAID 0 configuration. In addition, RAID 0 configurations deliver better performance at a significantly lower cost than expensive, multi-volume RAID 6 arrays.
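To put this into practice, a RAID 0 array can be assembled in software with mdadm; the device names and mount point below are assumptions, and the commands must run with root privileges on the instance:

```shell
# Stripe two attached EBS volumes into a single RAID 0 array (no redundancy!).
sudo mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sdf /dev/sdg
# Format and mount the array like any other block device.
sudo mkfs.ext4 /dev/md0
sudo mkdir -p /mnt/raid0
sudo mount /dev/md0 /mnt/raid0
# Persist the array definition so it survives reboots
# (config path varies by distribution, e.g. /etc/mdadm/mdadm.conf on Ubuntu).
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm.conf
```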
3. Benchmarking AWS EBS Workloads with Fio
One of the key components of AWS EBS performance is I/O. To understand the I/O life-cycle, picture a scenario where you have an application running on an AWS EC2 instance that submits read and write operations to an EBS volume.
Each operation is converted into a system call to the kernel, along with a buffer that carries the operation's data. The kernel knows that the underlying file system sits on virtualized block storage, so it redirects the read/write operation to the I/O domain, where the operation passes through a grant mapping process before finally being performed on the EBS volume.
When you create a new EBS volume you need to provide the size and the type of the volume. As you may already know, the types AWS offers are: General Purpose SSD (gp2), Provisioned IOPS SSD (io1), Throughput Optimized HDD (st1), Cold HDD (sc1), and Magnetic.
How can you know which type suits you best?
The answer is benchmarking. Fio was created to allow benchmarking specific disk IO workloads. For example, to simulate random read operations, run the following command on your EBS volume:
fio --name=bench_fio --rw=randread --bs=4k --runtime=60 --time_based --filename=/dev/sdf
- --filename points fio directly at the EBS block device (use --directory with a mounted path instead to benchmark through the file system)
- --rw=randread tells fio to perform random reads
- --bs, --runtime, and --time_based limit the test to 60 seconds of 4 KiB random reads
Fio is a powerful tool with an extensive set of options that lets you reproduce many different scenarios. To make things easier, you can configure these scenarios in an INI-format job file:
$ cat bench.fio
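The contents of the job file aren't reproduced here; a minimal job file of this kind might look like the following (all paths and values are illustrative assumptions):

```ini
; simulate 4 KiB random reads against the EBS-backed filesystem
[global]
directory=/mnt/data
size=1G
runtime=60
time_based=1
ioengine=libaio
direct=1

[randread-test]
rw=randread
bs=4k
iodepth=16
```

You would then run the whole scenario with a single command, passing the job file as the argument to fio.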
Running fio provides detailed information about your EBS volume that will help you when the time comes to create a new one. For more information about interpreting the results, see this tutorial: Inspecting disk IO performance with fio.
Another option you should use alongside fio is the AWS EBS metrics provided by CloudWatch. They give insights into your workload, such as the total number of read and write operations, the time spent waiting to process them over a given period, and the throughput of your volume.
Some of the interesting AWS EBS metrics that you should be using along with fio include VolumeReadBytes and VolumeWriteBytes, and VolumeReadOps and VolumeWriteOps.
VolumeReadBytes and VolumeWriteBytes report the amount of data read from and written to the volume in a specified period. Data is reported to AWS CloudWatch only when the volume is active; if the volume is idle, no data is reported.
VolumeReadOps and VolumeWriteOps show the total number of I/O operations in a specified period of time. To calculate the average I/O operations per second (IOPS) for the period, divide the total operations in the period by the number of seconds in that period.
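That division is straightforward; for example, with a hypothetical operation count over a standard 5-minute CloudWatch period:

```shell
# Average IOPS = total operations in the period / seconds in the period.
total_ops=1440000    # hypothetical VolumeReadOps sum for the period
period_seconds=300   # a 5-minute CloudWatch period
echo "Average read IOPS: $((total_ops / period_seconds))"
```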
For more on monitoring EBS volumes, check out the full list of AWS EBS metrics.
AWS EBS Optimized: It's Easier Than You Think
Although AWS EBS volume management and performance optimization may seem like complex tasks, this post covered three simple, highly effective techniques that make optimizing AWS EBS usage a lot easier.
Hopefully, you can carry over bare-metal server practices like LVM and standard RAID levels as they suit your goals, and combine CloudWatch metrics with benchmarking tools like fio to understand and simulate your workload.
It's important to mention that techniques such as standard RAID levels should be carefully considered, analyzed and tested.
AWS provides solutions which, combined and properly configured, will provide data durability by replicating data across multiple servers in an Availability Zone, preventing the loss of data from the failure of any single component.
This replication makes AWS EBS volumes ten times more reliable than typical commodity disk drives. Though RAID contributes to better performance, it also introduces more points of failure into your architecture.
Amazon EBS storage is easier to deploy, less expensive, and better protected from failures and outages when using NetApp Cloud Volumes ONTAP. In its high availability configuration, which uses two synchronized nodes in separate locations, you can meet strict RPO and RTO objectives of zero data loss and under-60-second recovery.