May 21, 2019
I don’t think he actually said that, but I know he thought it. Jim needed to provide file services in the public cloud to his internal teams. But he didn’t know anything about the cloud and he was too young to retire.
But now, his company was moving to the cloud, aiming to dramatically reduce, or even eliminate, its in-house IT services. Jim didn’t know much about the file services the various clouds offered, but he did know that his applications needed consistently high performance with high availability at a reasonable cost. He also knew he didn’t want to juggle a sprawl of services: every additional service is one more to understand, implement, debug, and maintain. On top of all of that, he wanted to move fast, so he wanted to avoid any and all refactoring.
Jim is a storage lead in the central IT group of a global company. I’ve had the pleasure of working with large and small companies at varying stages of development, and from what I could tell, Jim’s IT group was well run. For several years, they’d run their internal IT based on a well-defined set of services with SLAs (service level agreements) that their business units and developers could easily access. Jim is in charge of storage globally, and has done an excellent job meeting the needs of his customers.
Jim First Tried Rolling His Own Cloud-Like Infrastructure
Spinning up a few compute instances, he ran Linux servers for NFS services and Windows servers for SMB services. But that wasn’t cloud; it was just more infrastructure he had to manage. Soon Jim found other issues. Performance was erratic. To get close to the availability he needed, he had to duplicate the infrastructure and keep extra copies of the data in different availability zones, adding cost and complexity. The only form of backup was to make full copies, send them to an object store, and hope that when the time came, he’d remember which object to restore and where it was.
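A do-it-yourself setup like that typically boils down to hand-managed exports on every Linux instance plus ad hoc full-copy backups. Here’s a minimal sketch of what that looks like; the paths, hostnames, subnet, and bucket name are illustrative, not taken from Jim’s environment:

```shell
# /etc/exports on the DIY NFS server (paths and subnet are illustrative)
/srv/projects  10.0.0.0/16(rw,sync,no_subtree_check)

# On each client, mount the export (server hostname is hypothetical)
sudo mount -t nfs nfs-server.internal:/srv/projects /mnt/projects

# "Backup" here means pushing full copies to an object store, e.g.:
tar czf projects-$(date +%F).tar.gz /srv/projects
aws s3 cp projects-$(date +%F).tar.gz s3://example-backup-bucket/
```

Every one of these pieces is something Jim had to patch, monitor, and duplicate per availability zone himself, which is exactly the operational load described above.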
Jim then tried a few file services offered by the clouds themselves. At least with those he didn’t have to run a fleet of servers. However, they didn’t perform much better. Sometimes the job completed in an hour. Sometimes it took three days. Jim couldn’t offer such an unpredictable service. The performance would frustrate his customers and lead to support calls that would drive him crazy. On top of that, to achieve the availability he needed, he still had to copy data to other availability zones, adding cost and adding to his never-ending list of things to manage. Backup was no better.
Plus, to offer a choice of file services, Jim would need to learn multiple clouds and maintain one, two, or three services on each. The task was daunting. Jim wanted to learn the cloud, but he didn’t want to spend all of his time on file services.
Then Jim Tried NetApp Cloud Volumes Service for AWS
It dramatically simplified operations. It did what it said it would do. It provided consistent performance, with options to pick the right level and change it on the fly. It had built-in snapshots, replication, and a backup service that was about to become a mainstream offering. It was available on AWS, Azure, and Google Cloud.
Being the responsible type, Jim tested Cloud Volumes Service on a single workload to understand what it did, what it didn’t do, and how to optimize its use. He then went to production with one application in a single region of a single cloud. It was nothing fancy, just an infrastructure app that required only 5 TB of storage. But that infrastructure application was critical to the organization’s revenue: if it failed, other operations failed, and external customers were impacted.
Cloud Volumes Service passed with flying colors. Jim was able to get up to 460K IOPS from a single volume when using multiple clients, and up to 60K IOPS from any single client. That performance was consistent: any day, any time, any region. Jim has made it a standard in his catalog and is now ready to roll it out to other applications in other regions, even those running on other clouds.
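Those two ceilings imply how much client-side parallelism it takes to drive a volume to its limit. A quick back-of-the-envelope check, using only the numbers above:

```python
import math

volume_ceiling_iops = 460_000   # single-volume ceiling, with multiple clients
client_ceiling_iops = 60_000    # ceiling for any single client

# Minimum number of clients needed to reach the volume ceiling,
# assuming each client can sustain its own maximum.
clients_needed = math.ceil(volume_ceiling_iops / client_ceiling_iops)
print(clients_needed)  # 8
```

In other words, a single client sees predictable performance on its own, and an application that spreads load across roughly eight or more clients can approach the volume’s full aggregate throughput.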
Jim Now Has 50% More Time
Cloud Volumes Service has cut the time Jim spends running storage infrastructure by 50%. There’s no more need to manage servers, apply patches, optimize their performance, or scale them. Compared to other cloud file services, he’s also been able to cut his costs, simply because he no longer needs to make duplicate copies to satisfy availability requirements, and he can adjust performance tiers to provide the precise amount of throughput required.
Most importantly, Jim has become a cloud hero. He enabled his organization to continue its migration to public clouds while maintaining an efficient, professional operation built on easy-to-understand cloud file services.
Want to Be a Cloud Hero Like Jim?
Read more about Cloud Volumes Service for AWS and sign up for a trial. Or check out our cloud vs. on-premises cost comparison.