AWS big data refers to the collection, storage, processing, and analysis of large-scale datasets on Amazon Web Services (AWS). It is supported by a range of services and capabilities, including analytics, highly scalable storage, and broad support for compliance regulations.
This is part of an extensive series of guides about managed services.
AWS’s most impressive support for big data implementations comes in the form of analytics solutions. The provider offers a variety of services that you can use to automate data analysis, manipulate datasets, and derive insights.
Kinesis is a service that enables you to collect and analyze real-time data streams. Supported streams include Internet of Things (IoT) telemetry data, website clickstreams, and application logs. You can export data from Kinesis to a variety of AWS services, including Redshift, Lambda, Amazon Elastic MapReduce (EMR), and S3 storage.
You can also use Kinesis to build custom applications for streaming data using the Kinesis Client Library (KCL). This library provides support for dynamic content, alert generation, and real-time dashboards.
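As a minimal sketch of the ingestion side, here is how an application might push a clickstream event into a Kinesis data stream using the AWS SDK for Python (boto3); the stream name, region, and payload fields are hypothetical:

```python
import json
import boto3

# Assumed region and stream name for illustration.
kinesis = boto3.client("kinesis", region_name="us-east-1")

# Send a single clickstream event. Records sharing a partition key
# are routed to the same shard, which preserves their ordering.
response = kinesis.put_record(
    StreamName="clickstream-events",  # hypothetical stream
    Data=json.dumps({"page": "/home", "user_id": "u-123"}).encode("utf-8"),
    PartitionKey="u-123",
)
print(response["ShardId"], response["SequenceNumber"])
```

A consumer built with the KCL, or a Lambda trigger, would read these records on the other side of the stream.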
Amazon EMR is a managed distributed computing service that you can use to process and store data. It is based on Apache Hadoop running on clusters of EC2 instances. Hadoop is a well-established framework for big data processing and analysis.
When you launch EMR, the service provisions, manages, and maintains your Hadoop infrastructure, enabling you to focus on analytics. EMR supports the most commonly used Hadoop ecosystem tools, including Spark, Pig, and Hive.
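To make the provisioning step concrete, the following sketch launches a small Spark-enabled EMR cluster with boto3; the cluster name, instance types and counts are placeholders, and the IAM roles shown are the EMR defaults:

```python
import boto3

emr = boto3.client("emr", region_name="us-east-1")

# Launch a three-node cluster with Spark and Hive installed.
response = emr.run_job_flow(
    Name="example-analytics-cluster",  # hypothetical name
    ReleaseLabel="emr-6.10.0",
    Applications=[{"Name": "Spark"}, {"Name": "Hive"}],
    Instances={
        "InstanceGroups": [
            {"InstanceRole": "MASTER", "InstanceType": "m5.xlarge", "InstanceCount": 1},
            {"InstanceRole": "CORE", "InstanceType": "m5.xlarge", "InstanceCount": 2},
        ],
        "KeepJobFlowAliveWhenNoSteps": False,  # terminate when steps finish
    },
    JobFlowRole="EMR_EC2_DefaultRole",  # default EC2 instance profile
    ServiceRole="EMR_DefaultRole",      # default service role
)
print(response["JobFlowId"])
```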
Glue is a service that enables you to process data and perform extract, transform, and load (ETL) operations. You can use it to clean, enrich, catalog, and transfer data between your data stores. Glue is serverless, meaning you are charged only for the resources you consume and do not have to provision or manage infrastructure.
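As an illustration of the serverless model, the sketch below registers a Glue ETL job whose script already lives in S3 and then starts a run; the job name, IAM role, and script path are assumptions:

```python
import boto3

glue = boto3.client("glue", region_name="us-east-1")

# Register a Spark-based ETL job pointing at a script in S3.
glue.create_job(
    Name="clean-and-catalog-orders",                        # hypothetical
    Role="arn:aws:iam::123456789012:role/GlueServiceRole",  # hypothetical
    Command={
        "Name": "glueetl",  # Spark ETL job type
        "ScriptLocation": "s3://my-etl-bucket/scripts/orders_etl.py",
    },
    GlueVersion="4.0",
)

# Start a run; you pay only while the job is executing.
run = glue.start_job_run(JobName="clean-and-catalog-orders")
print(run["JobRunId"])
```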
Amazon ML is a service that enables you to develop machine learning models without ML expertise. It includes wizards, visualization tools, and pre-built models to get you started. The service can walk you through evaluating data for training and optimizing your trained model to fit business needs. Once complete, you can access your model's output through batch exports or a real-time API.
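For context, the Amazon Machine Learning API exposes a real-time prediction call; a minimal sketch with boto3 might look like the following, where the model ID, endpoint URL, and feature record are all placeholders:

```python
import boto3

ml = boto3.client("machinelearning", region_name="us-east-1")

# Request a single real-time prediction from a trained model.
# All identifiers below are hypothetical; feature values are strings.
result = ml.predict(
    MLModelId="ml-example-model-id",
    Record={"age": "34", "plan": "premium"},
    PredictEndpoint="https://realtime.machinelearning.us-east-1.amazonaws.com",
)
print(result["Prediction"])
```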
Redshift is a fully managed data warehouse service that you can use for business intelligence analytics. It is optimized for large queries of structured and semi-structured data using SQL. Query results can be saved to S3 data lake storage and ingested by a variety of analytics services, including SageMaker, Athena, and EMR.
Redshift also includes a feature called Spectrum that you can use to query data in S3 without performing ETL processes. Spectrum evaluates your data storage and query requirements and optimizes the process to minimize the amount of S3 data that must be read. This helps minimize costs and speed up queries.
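Assuming an external schema has already been defined over your S3 data, a Spectrum query is plain SQL; the sketch below submits one through the Redshift Data API, with the cluster, database, user, and table names as placeholders:

```python
import boto3

redshift_data = boto3.client("redshift-data", region_name="us-east-1")

# Submit a SQL statement against an external (Spectrum) table.
response = redshift_data.execute_statement(
    ClusterIdentifier="analytics-cluster",  # hypothetical cluster
    Database="dev",
    DbUser="analyst",
    Sql="""
        SELECT event_date, COUNT(*) AS events
        FROM spectrum_schema.clickstream  -- external table backed by S3
        GROUP BY event_date
        ORDER BY event_date;
    """,
)
# The call is asynchronous; fetch rows later with get_statement_result.
print(response["Id"])
```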
QuickSight is a business analytics service that you can use to perform ad hoc data analysis and build visualizations. It can ingest data from numerous sources, including on-premises databases, exported Excel or CSV files, and AWS services such as S3, RDS, and Redshift.
QuickSight is powered by SPICE, a “super-fast, parallel, in-memory calculation engine”. This engine is based on columnar storage and uses machine code generation to run interactive queries. When you perform queries, the engine persists the ingested data until you manually delete it, so that subsequent queries run as fast as possible.
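QuickSight resources can also be managed programmatically; as a small illustration, this sketch lists the dashboards in an account (the account ID is a placeholder):

```python
import boto3

quicksight = boto3.client("quicksight", region_name="us-east-1")

# Page through all dashboards in the account.
paginator = quicksight.get_paginator("list_dashboards")
for page in paginator.paginate(AwsAccountId="123456789012"):  # hypothetical
    for dashboard in page["DashboardSummaryList"]:
        print(dashboard["DashboardId"], dashboard["Name"])
```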
AWS offers numerous solutions to help you address your entire big data management cycle. These tools and technologies make it possible and cost-effective to collect, store, and analyze your datasets. The tools available support the big data cycle from collection to consumption.
Collection solutions focus on helping you accumulate your raw data, structured and unstructured. Solutions can integrate natively with AWS services or ingest data gathered from exports.
In AWS, big data collection is supported by services such as Kinesis for streaming ingestion and, for offline bulk transfer, the AWS Snow Family (Snowball, Snowball Edge, and Snowmobile, covered later in this article).
Storing big data requires highly scalable solutions that can handle data before and after processing. These solutions are accessible to a variety of processing and analytics services and can typically be tiered to help you reduce storage costs.
In AWS, big data storage is supported by services such as S3 object storage, which commonly serves as data lake storage, and the Redshift data warehouse.
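As a small example of cost-aware storage, the sketch below writes a raw data object to S3 and lets the Intelligent-Tiering storage class move it between access tiers automatically; the bucket and key names are made up:

```python
import boto3

s3 = boto3.client("s3", region_name="us-east-1")

# Store raw ingest data; S3 Intelligent-Tiering shifts objects between
# access tiers based on usage, reducing cost without manual lifecycle rules.
s3.put_object(
    Bucket="my-raw-data-bucket",  # hypothetical bucket
    Key="ingest/2023/10/events.json",
    Body=b'{"event": "page_view"}',
    StorageClass="INTELLIGENT_TIERING",
)
```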
Processing and analysis solutions enable you to transform raw data into a form that analytics tools can consume. This generally involves sorting, aggregating, and joining data, but can also involve applying new data schemas or translating data into different formats.
In AWS, processing and analysis are supported by a range of services, including EMR, Glue, Redshift, and Amazon ML, described earlier in this article.
Some big data projects on AWS use traditional databases for data processing. Read our related content on AWS MySQL, AWS Oracle, and SQL Server in AWS.
Consumption and visualization solutions help you derive and share insights from your data. These solutions enable you to explore your datasets and analyses, and to highlight the findings that are most relevant or that provide the most accurate predictions or recommendations.
In AWS, consumption and visualization of big data are supported by QuickSight, along with analytics services such as Athena and SageMaker.
NetApp Cloud Volumes ONTAP, the leading enterprise-grade storage management solution, delivers secure, proven storage management services on AWS, Azure, and Google Cloud. Cloud Volumes ONTAP supports capacities of up to 368TB and a variety of use cases, such as file services, databases, DevOps, or any other enterprise workload, with a strong set of features including high availability, data protection, storage efficiencies, Kubernetes integration, and more.
In particular, Cloud Volumes ONTAP helps address database workload challenges in the cloud, filling the gap between the capabilities of your cloud-based database and the public cloud resources it runs on.
Cloud Volumes ONTAP supports advanced features for managing SAN storage in the cloud, catering to NoSQL database systems, as well as NFS shares that can be accessed directly from cloud big data analytics clusters.
In addition, the built-in storage efficiency features directly reduce the costs of NoSQL cloud deployments. The data protection and flexibility provided by features such as snapshots and data cloning give NoSQL database administrators and big data engineers the power to manage large volumes of data effectively.
A data lake is a flexible, cost-effective data store that can hold very large quantities of structured and unstructured data. It allows organizations to store data in its original form and perform search and analytics, transforming the data as needed on an ad hoc basis. Learn how AWS data lake solutions automate the entire data lake process, from data ingestion to analysis, using Lake Formation, Glue, Lambda, EMR, and more.
Read more: AWS Data Lake: End-to-End Workflow in the Cloud
Big data solutions help organizations to efficiently store, catalog, search, and analyze their data. AWS offers a wide range of services, each offering different capabilities. This article introduces common AWS Data Analytics offerings and provides assessment questions to help you choose.
Read more: AWS Data Analytics: Choosing the Best Option for You
AWS ElastiCache for Redis is the fully managed service for Redis, the open-source database and cache technology fast growing in importance in enterprise DevOps deployments. Using AWS ElastiCache for Redis, engineers can easily manage all aspects of their Redis clusters deployed on AWS, reducing operational costs for key tasks such as monitoring, maintenance, backing up data, recovering from failures, and updating software. In this blog we take a closer look at Redis, AWS ElastiCache for Redis, and how they can be used as critical parts of an AWS database deployment, with a full step-by-step walkthrough to help you get started.
Read more: AWS ElastiCache for Redis: How to Use the AWS Redis Service
MongoDB is a NoSQL database that can be a key enabler for AWS big data workloads. But with two deployment options to choose from, the AWS managed service that supports MongoDB workloads (Amazon DocumentDB) or a self-managed MongoDB database running on native AWS EC2 compute instances, users may need guidance on the best way to run MongoDB on AWS.
This post compares the Amazon DocumentDB managed service with the self-managed, EC2-based MongoDB deployment option, and shows how Cloud Volumes ONTAP, the data management platform from NetApp, can bridge the gap and enhance MongoDB on AWS deployments.
Read more in MongoDB on AWS: Managed Service vs. Self-Managed
This article gives a firsthand account of using Elasticsearch in production on AWS, sharing five important lessons to know if you're just getting started with Amazon Elasticsearch, the fully managed Elasticsearch service on AWS. Find out how expectations compare with reality in terms of operational and management overhead, what extra features the AWS managed service offers compared with the open-source version, and how costs and performance stack up.
Read more in Elasticsearch in Production: 5 Things I Learned While Using the Popular Analytics Engine in AWS
Apache Cassandra started as a way for Facebook to search inboxes, but it’s grown into an open-source, scalable NoSQL database that is highly performant and highly available. How will it affect your AWS big data workloads?
A big part of answering that question is deciding which deployment option you'll use: the managed service for Cassandra on AWS, Amazon Keyspaces, or deploying your own Cassandra database on AWS-native EC2 instances. This article will show you the pros and cons of each approach and how Cloud Volumes ONTAP can help.
Read more in Cassandra on AWS Deployment Options: Managed Service or Self-Managed?
AWS Snowball, one of the tools within the AWS Snow Family, is a data migration, edge computing, and edge storage device. Learn how the AWS Snowball Family works, current options for Snowball devices, how the data import and export process works, and essential best practices.
Read more in AWS Snowball Family: Options, Process, and Best Practices
AWS Snowmobile is an exabyte-scale data transfer service designed to move extremely large amounts of data to the AWS cloud. A Snowmobile is a 45-foot-long shipping container that is pulled by a semi-trailer truck. Each Snowmobile can transfer up to 100 petabytes.
Read more: AWS Snowmobile: Migrate Data With World’s Biggest Hard Disk
AWS Snowball Edge is a data transfer and edge computing device managed by the AWS Snowball service, offering both compute and on-board storage. Learn how the AWS Snowball Edge device can help you ship large amounts of data to the cloud and perform compute operations in edge locations with no connectivity.
Read more: AWS Snowball Edge: Data Shipping and Compute at the Edge
AWS Snowball offers secure, rugged devices that enable you to use AWS storage and computing capabilities in edge environments. AWS Snowmobile is a vehicle that lets you perform exabyte-scale data migration. Understand the differences and which data migration option is best for you.
Read more: AWS Snowball vs Snowmobile: Data Migration Options Compared
Together with our content partners, we have authored in-depth guides on several other topics that can also be useful as you explore the world of managed services.