July 25, 2017
3 minute read
By now most readers and customers of NetApp have heard about the Data Fabric – NetApp’s vision of data interconnectedness and portability in the Hybrid Cloud. A visual model that helps me understand and explain the concept is that of a woven mesh, with threads connecting the endpoints. Using that model, the endpoints are locations where your data sits for any length of time. And that could be for a few seconds (say, in the CPU of a cloud’s hypervisor/VM) up to years and years (HIPAA data maintained for the life of a patient).
The key idea here is that the data moves. And it moves along those interconnected threads of communication between data locations. It may not move every day, or it might move every few minutes.
And different types of data will shift at different rates – Dev/Test data updated once every few weeks, sensor data from IoT devices traveling across wired or wireless paths to repositories that change every second, backup data dumped into buckets every night... You get the picture. The traversal rates are all over the map.
So now that we’ve got the picture of the woven mesh, we should stop and think about the threads themselves – the connections over various networks across which our data moves. These are usually IP-based connections, but they could be block-based in private networks. Their bandwidths span many orders of magnitude, from mere bits per second up to huge bonded network connections that pipe hundreds of gigabits per second from place to place.
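To get a feel for how much those orders of magnitude matter, here's a quick back-of-the-envelope calculation (my own illustrative sketch, not a NetApp tool) of how long an idealized 1 TiB transfer takes at a few link speeds, ignoring protocol overhead and latency:

```python
# Ideal transfer times for 1 TiB of data at different link speeds.
# Illustrative only: real transfers add protocol overhead and latency.

def transfer_time_seconds(data_bytes: int, bits_per_second: float) -> float:
    """Ideal time to move data_bytes over a link of the given speed."""
    return (data_bytes * 8) / bits_per_second

ONE_TIB = 1024 ** 4  # bytes

links = {
    "10 Mb/s link": 10e6,
    "1 Gb/s LAN": 1e9,
    "100 Gb/s bonded": 100e9,
}

for name, speed in links.items():
    hours = transfer_time_seconds(ONE_TIB, speed) / 3600
    print(f"{name}: {hours:.2f} hours")
```

At 10 Mb/s that's roughly ten days; at 100 Gb/s, under two minutes – which is exactly why moving only what changed (rather than full copies) matters so much on the slower threads of the mesh.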
NetApp believes that the strength, usability and ultimate success of these data connections require two fundamental factors: a robust, efficient replication/movement engine, and a standardized data management platform.
Let’s take the first factor. For twenty years NetApp has been building the most bulletproof, easy-to-use and feature-rich replication technology in the biz. SnapMirror/SnapVault technologies are space-efficient (moving only changed blocks), bi-directional by design, and preserve storage efficiencies such as deduplication and compression.
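The "only moving changed blocks" idea can be sketched in a few lines. This is a toy model in the spirit of snapshot-based incremental replication – not NetApp's actual implementation, and every name here is illustrative:

```python
# Toy sketch of snapshot-based, changed-block replication.
# A "volume" is modeled as a map of block address -> block data.
# NOT NetApp's implementation; purely for illustrating the concept.

def snapshot(volume: dict) -> dict:
    """Capture a point-in-time copy of a volume's block map."""
    return dict(volume)

def changed_blocks(base: dict, current: dict) -> dict:
    """Return only blocks added or modified since the base snapshot."""
    return {addr: data for addr, data in current.items()
            if base.get(addr) != data}

def replicate(delta: dict, target: dict) -> None:
    """Apply the delta to the replica; only changed blocks cross the wire."""
    target.update(delta)

# Example: after one update and one new block, only 2 blocks move,
# no matter how large the rest of the volume is.
source = {0: b"a", 1: b"b"}
base = snapshot(source)
source[1] = b"B"       # block 1 modified
source[2] = b"c"       # block 2 newly written
delta = changed_blocks(base, source)
print(len(delta))      # 2 blocks transferred, not the whole volume
```

The payoff grows with volume size: a terabyte volume with a few megabytes of daily churn ships only those megabytes on each update.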
That’s like a ten-lane glittering titanium superhighway compared to the rutted dirt roads of low-level tools like FTP or HTTP.
And with the added expansion of the replication technology to AltaVault and our upcoming NetApp HCI, in addition to interoperability with Cloud Volumes ONTAP (formerly ONTAP Cloud) and ONTAP Select, the construction of a hyper-efficient data-movement mesh can be accomplished quickly.
Got data in the hyperscalers, on-prem, in a co-lo and two different service providers? No problem – sync it all, no matter where it is. No full-copy transports – imagine. That’s gonna save you some money.
And then you add in the versatility and full-stack reporting and management capabilities of OnCommand Insight to bridge these disparate instances. Because you really don’t want to correlate and translate performance, efficiency and capacity reports from all those different service providers. Companies have devoted entire groups of engineers to just making spreadsheets that make sense of all that.
OnCommand Insight can span those instances, collect data, drill down to solve problems, and then present it all logically. “Yes, here’s the report you wanted, showing how we’d be better served moving these on-site workloads to this cloud provider, and pulling back these dev/test instances from our hyperscale partner after QA.”
Boom. Instant gold star.

The mesh needs to be woven with super-strong threads.
At NetApp we’ve created the capabilities to move your data, not just with efficiency, consistency, and resiliency, but with rock-solid performance and management. The Data Fabric solves the problem of your present and future data flexibility. And I bet your company’s going to like that.
Want to get started? Try out Cloud Volumes ONTAP today with a 30-day free trial.