July 20, 2017
Enterprises opt for cloud-based solutions due to the cloud’s potential for reduced costs and enhanced agility.
“Pay-per-use” is a common phrase used to win people over to the cloud, but many users are hit with hidden fees and charges for cloud services that they were unaware they were using. The pricing matrix of public cloud providers can be extremely complex, making it difficult to correctly estimate costs.
To make sure you never wind up paying these hidden fees, it is extremely important to strategize and plan out the entire cloud migration project before you settle on a cloud solution. But if you’ve already made the move to the cloud and are paying more than you can afford right now, there are still steps you can take to fix that.
There are several hidden costs lurking in the cloud ecosystem. Below we’ll discuss five reasons you might be paying too much for the cloud, and give you some suggestions on how to avoid these costs in the future.
1. Migration Costs
Migration can happen either from an in-house cluster to a cloud platform or from one cloud platform to another.
In both cases moving to a new platform requires a lot of planning and strategy in order to migrate data, applications, and code. Initially, users might face bandwidth or network issues, or even issues with respect to integration of data and applications, all of which can take a significant amount of time to address.
It is the duty of the technical architect and the team to define the entire course of action before execution starts. In the vast majority of cases, indirect costs for integration and for code and data backup are completely overlooked in the migration process, resulting in significant unplanned expenses.
Even extracting data from the cloud before migrating to another provider is billable, a fact of which many enterprises are not aware.
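To see how quickly egress charges add up, here is a rough, illustrative calculation. The per-GB rate and free allowance below are placeholders, not any provider’s actual prices; plug in the figures from your own provider’s price list.

```python
# Illustrative estimate of data-egress cost for a migration.
# The per-GB rate and free allowance are placeholders, not real pricing;
# substitute the numbers from your provider's current price list.

def egress_cost(total_gb, rate_per_gb=0.09, free_gb=1):
    """Rough egress charge: everything beyond the free allowance is billed."""
    billable_gb = max(total_gb - free_gb, 0)
    return billable_gb * rate_per_gb

# Example: moving 50 TB out of a cloud before switching providers.
dataset_gb = 50 * 1024
print(f"Estimated egress charge: ${egress_cost(dataset_gb):,.2f}")
```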
2. Over and Underprovisioning Resources
Provisioning refers to the allocation of resources by the cloud provider to its customers. A company should be able to estimate how many resources it will actually require.
Both overprovisioning and underprovisioning are costly and inefficient. Overprovisioning is a very common problem in which clusters are flooded with servers and resources that sit idle, while users keep paying for an overinflated cloud.
Underprovisioning is easier to detect because it shows up in the performance of the cluster as a significant increase in job latency. Distributed systems are meant for parallel processing, so a balance should be maintained between running jobs and available time slots so that the cluster is neither overloaded nor underutilized.
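As a back-of-the-envelope illustration, the following sketch compares provisioned capacity with observed peak load; all of the figures in it are hypothetical.

```python
# Back-of-the-envelope provisioning check (all figures are hypothetical).
# Compare the capacity you pay for with the peak load you actually observe.

import math

instance_capacity_rps = 500      # requests/second one instance can handle (assumed)
peak_observed_rps = 3_200        # peak load taken from monitoring (assumed)
provisioned_instances = 12       # what is currently running and billed

needed = math.ceil(peak_observed_rps / instance_capacity_rps)  # 7 in this example

if provisioned_instances > needed:
    print(f"Overprovisioned: paying for {provisioned_instances - needed} idle instances.")
elif provisioned_instances < needed:
    print(f"Underprovisioned: {needed - provisioned_instances} more instances needed at peak.")
else:
    print("Provisioning matches observed peak load.")
```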
3. Mismanagement of Complex Cloud Deployments
Human errors can disrupt the normal operation of cloud services and such incidents end up embarrassing both the cloud vendor and their customers. These mistakes have immediate effects on a business that can be felt by everyone using the service.
Outages have happened because of reliance on manual operations and a lack of automation. AWS witnessed one such outage recently when an incorrect command entered by one employee led to the removal of a large set of servers, which caused as much as three hours of downtime for businesses across the Internet.
Mismanagement and disruption of services are common for the following reasons:
- Poor code deployment: Poor coding practices and skills are often at the root of system failures that result in huge economic losses. It is highly recommended that only experienced architects, or people with the right skill set, define the entire deployment flow.
- Configuration changes: Changes to XML, JSON, or other configuration files or parameters can disrupt cloud services, and they are hard to catch because spotting a small parameter change by eye is not easy; a simple drift check like the sketch at the end of this section can surface them.
- Improper job deployment: Job workflows are often not properly managed in the cluster. Jobs might not be properly distributed across time slots; they might even all trigger at the same time. Such schedules should be thoroughly checked before deployment.
All such incidents stem from a lack of automation and mismanagement of services, resulting in lost revenue. These factors also burden enterprises with additional costs due to undefined processes.
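One low-effort safeguard against silent configuration drift is to hash the deployed configuration files and compare them against a baseline kept in version control. The file names and baseline manifest below are hypothetical; this is only a sketch of the idea.

```python
# Minimal configuration-drift check (file paths are illustrative).
# Hash the deployed config files and compare against a committed baseline
# so small, hard-to-spot parameter changes are surfaced before they cause outages.

import hashlib
import json
from pathlib import Path

BASELINE_FILE = Path("config_baseline.json")        # hypothetical manifest in version control
CONFIG_FILES = ["app-config.xml", "settings.json"]  # hypothetical deployed configs

def sha256_of(path):
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()

baseline = json.loads(BASELINE_FILE.read_text())

for name in CONFIG_FILES:
    if baseline.get(name) != sha256_of(name):
        print(f"DRIFT: {name} differs from the approved baseline")
```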
4. Free Cloud Resource Expiration
Initially, cloud providers offer a lot of services for free, but there is an upper limit to these free services. As soon as users reach that limit, the services become billable. Employees who are new to cloud infrastructure use these services on a regular basis and cross the threshold while adapting to the cloud ecosystem. Some free cloud services even come with an expiration date, much like trial software, and as soon as it expires a paid billing cycle starts.
Various public cloud providers set thresholds for their respective services that administrators easily overlook at first. Azure, for example, offers free-tier services for only one month, allowing users to run two small virtual machines, store 800GB of data, and run two S2 SQL databases in that period.
In the case of Google Cloud Platform, users get a 12-month free trial with $300 in credit to spend on services including Google App Engine, Google Cloud Datastore, and Google Compute Engine. Amazon Web Services also gives users a free tier to gain hands-on experience over a span of 12 months, featuring services such as AWS EC2, Amazon S3, and AWS RDS.
5. Unused Resources
Often, users spin up a server and leave it running all of the time: this is a very common mistake made by many cloud developers. It is essential to keep track of the servers and the jobs running on them.
While servers are running, the service provider’s meter keeps on ticking. If there are no jobs running, you are essentially paying for resources that are not being used to your advantage.
These costs can add up, and severely cut into the cost benefit of a cloud deployment. It is the responsibility of cloud administrators to address this problem and keep an eye on the running servers.
Take advantage of the cluster’s elasticity: release unused resources to keep the cluster balanced, cut down the number of always-on servers, and purchase reserved instances for any capacity the cluster will always need. Better monitoring and scheduling tools, such as Amazon CloudWatch, help to improve resource-use efficiency.
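As a sketch of what such monitoring might look like on AWS, the script below uses boto3 and CloudWatch to find running EC2 instances whose CPU has stayed near zero for a day and stops them. It assumes boto3 is installed and AWS credentials are configured; the 5% threshold and one-day window are arbitrary choices, not recommendations.

```python
# Sketch: stop running EC2 instances whose average CPU over the last day is near zero.
# Assumes boto3 is installed and AWS credentials are configured.

from datetime import datetime, timedelta, timezone
import boto3

ec2 = boto3.client("ec2")
cloudwatch = boto3.client("cloudwatch")
now = datetime.now(timezone.utc)

reservations = ec2.describe_instances(
    Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
)["Reservations"]

for reservation in reservations:
    for instance in reservation["Instances"]:
        instance_id = instance["InstanceId"]
        datapoints = cloudwatch.get_metric_statistics(
            Namespace="AWS/EC2",
            MetricName="CPUUtilization",
            Dimensions=[{"Name": "InstanceId", "Value": instance_id}],
            StartTime=now - timedelta(days=1),
            EndTime=now,
            Period=3600,
            Statistics=["Average"],
        )["Datapoints"]
        # Arbitrary rule: if the busiest hour stayed under 5% CPU, treat it as idle.
        if datapoints and max(dp["Average"] for dp in datapoints) < 5.0:
            print(f"Stopping idle instance {instance_id}")
            ec2.stop_instances(InstanceIds=[instance_id])
```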
Strategies to Avoid Hidden Costs
Planning is key when it comes to cost reduction in the cloud. Some companies struggle when migrating to the cloud as most of the indirect costs are never accounted for, forcing budgets to be raised in order to meet the higher-than-expected costs of the project.
The best strategies for avoiding hidden costs involve limiting the resources you actually consume. Three of them are data deduplication, automating cloud processes, and minimizing idle time.
1. Data Deduplication
Deduplication is a process by which redundant data is eliminated, reducing the size of the data set. Deduplication with cloud storage reduces storage requirements along with the amount of data that needs to be transferred over the network, resulting in faster and more efficient data protection operations.
With respect to data governance, massive volumes of data can be backed up and used for real-time insights. Deduplication can be categorized by where it runs, for example as source-side (client) deduplication or target-side deduplication.
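To make the idea concrete, here is a toy sketch of source-side deduplication: files are split into fixed-size chunks, each chunk is hashed, and only chunks with previously unseen hashes are kept for upload. Production deduplication engines, including the one in Cloud Volumes ONTAP, work very differently; this only illustrates the principle.

```python
# Toy illustration of source-side deduplication: split files into fixed-size
# chunks, hash each chunk, and upload a chunk only if its hash is new.

import hashlib

CHUNK_SIZE = 4 * 1024 * 1024  # 4 MiB chunks (arbitrary choice)
seen_chunks = set()           # hashes of chunks already stored

def deduplicated_chunks(path):
    """Yield only the chunks of `path` that have not been stored before."""
    with open(path, "rb") as f:
        while chunk := f.read(CHUNK_SIZE):
            digest = hashlib.sha256(chunk).hexdigest()
            if digest not in seen_chunks:
                seen_chunks.add(digest)
                yield digest, chunk   # upload this chunk; duplicates are skipped
```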
2. Automated Cloud Processes
It is important to spin up a server only when required and to shut it down when there are no jobs to run. This limits the number of billable hours.
By writing automated workflow scripts for administrative and operational tasks like these, users can save time as well as money. Automation also keeps costly human errors out of the process, protecting business profitability.
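A minimal example of such an automated task, assuming an AWS environment with boto3: stop every running instance tagged Environment=dev outside business hours. The tag name and the hours are assumptions; the function could be triggered by cron, AWS Lambda, or whatever scheduler you already run.

```python
# Sketch of one automated task: stop every instance tagged Environment=dev
# outside business hours. The tag and hours are assumptions, not a standard.

from datetime import datetime
import boto3

ec2 = boto3.client("ec2")

def stop_dev_instances_after_hours():
    hour = datetime.now().hour
    if 8 <= hour < 19:          # leave instances alone during the working day
        return
    reservations = ec2.describe_instances(
        Filters=[
            {"Name": "tag:Environment", "Values": ["dev"]},
            {"Name": "instance-state-name", "Values": ["running"]},
        ]
    )["Reservations"]
    instance_ids = [i["InstanceId"] for r in reservations for i in r["Instances"]]
    if instance_ids:
        ec2.stop_instances(InstanceIds=instance_ids)

stop_dev_instances_after_hours()
```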
3. Minimize Idle Time
A lot of resources are wasted when cloud infrastructure simply sits idle between jobs. Prioritizing jobs and aligning them into proper time slots can help to minimize idle time.
Directed acyclic graph (DAG) scheduling algorithms can be used to minimize the execution time of job workflows. DAGs are a good fit for IaaS cloud platforms, where task scheduling and resource provisioning should go hand in hand to achieve the optimal solution.
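A minimal sketch of DAG-based ordering using Python’s standard-library graphlib module: jobs whose prerequisites are complete are released together, so independent work can run in parallel instead of waiting for an unrelated time slot. The job names and dependencies are hypothetical.

```python
# Minimal DAG scheduling sketch: release jobs in topological order so no job
# waits on an unrelated time slot. Job names and dependencies are hypothetical.

from graphlib import TopologicalSorter  # Python 3.9+

# job -> set of jobs it depends on
dag = {
    "extract": set(),
    "clean": {"extract"},
    "aggregate": {"clean"},
    "report": {"aggregate"},
    "archive": {"extract"},   # can run in parallel with the clean/aggregate path
}

ts = TopologicalSorter(dag)
ts.prepare()
while ts.is_active():
    ready = list(ts.get_ready())      # everything whose prerequisites are done
    print("Run in parallel:", ready)  # hand these to your scheduler or worker pool
    ts.done(*ready)
```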
Final Notes
Data deduplication, data compression, and keeping regular snapshot copies of the data all help to cut down on extra costs. Support for flash drives, superior input/output performance, and high-speed backup creation can also aid in cost reduction.
All these features are available using NetApp’s Cloud Volumes ONTAP (formerly ONTAP Cloud).
As enterprises deal with ever larger data sets, even small tweaks to their processes can deliver big cost gains. NetApp’s Cloud Volumes ONTAP is a cloud data management service that provides fast data movement and maximizes the return on investment by using data deduplication, compression, and thin provisioning.
Efficient data management services help lower the hidden costs of the public cloud’s services and effectively manage the cluster.
Want to get started? Try out Cloud Volumes ONTAP today with a 30-day free trial.