March 13, 2023
Important workloads such as media libraries and home directories rely on shared file access. But configuring an enterprise-level file share can be highly complex. What kind of challenges and requirements need to be considered when it comes to cloud file sharing services?
When choosing a file service, users need to find solutions for their share’s availability, accessibility, data protection, performance, backup and archive, storage footprint and costs, scalability, agility, API and automation, migration to the cloud, data replication and synchronization, security, multicloud and hybrid capabilities, and Kubernetes integration. There are a lot of moving parts, which is one of the reasons why the cloud service providers offer fully-managed file service options.
In this post we’ll look at each one of these file share service challenges in the cloud.
- The Major Challenges to File Sharing in the Cloud
- A File Sharing Services Solution: Cloud Volumes ONTAP
The Major Challenges to File Sharing in the Cloud
1. Cloud File Share Availability
Shared file storage provides access to a vast number of users, and it needs to be available on a constant basis. But when using the major cloud offerings, configuring the file share’s availability is on the user, and that can be a nightmare. This requires complex manual configurations for supporting automatic failover and failback, especially when it comes to using NAS storage.
Many enterprise-scale file share-based workloads require strict SLAs of minimal downtime (RTO<60 seconds) and no data loss (RPO=0). In those cases, any loss of data or downtime will be too costly—in terms of lost revenue, reputation, customer churn, legal exposure, and more—to absorb.
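As a back-of-the-envelope illustration of what such an SLA means in practice, the sketch below checks a failover event against the RTO target mentioned above. The 60-second threshold comes from the text; everything else (timestamps, function names) is illustrative.

```python
from datetime import datetime, timedelta

# RTO target from the SLA discussed above: recovery in under 60 seconds.
RTO_TARGET = timedelta(seconds=60)

def meets_rto(outage_start: datetime, service_restored: datetime) -> bool:
    """Return True if recovery completed within the RTO target."""
    return (service_restored - outage_start) <= RTO_TARGET

# Example: a failover that completes in 45 seconds meets the target.
start = datetime(2023, 3, 13, 12, 0, 0)
restored = start + timedelta(seconds=45)
```

An RPO of zero is stricter still: it means the secondary copy must be synchronously up to date, so no acknowledged write can ever be missing after failover.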
2. Protocol Accessibility
To meet the demands of both Linux/Unix and Windows workloads, a file share solution should enable access over both the NFS and SMB/CIFS protocols, in any of those protocols' various versions and flavors.
However, among the major cloud providers there isn't a single native service that provides such multi-protocol access. Most enterprises relying on cloud file services must double up on costs, running one service for SMB and another for NFS. Configuring an in-house solution that serves both protocols can also be prohibitively expensive and time consuming.
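To make the dual-protocol requirement concrete, the sketch below builds the standard Linux mount commands for each protocol. The server, export, and share names are placeholders, and real deployments would handle SMB credentials via a credentials file rather than on the command line.

```python
def nfs_mount_cmd(server: str, export: str, mountpoint: str, vers: str = "4.1") -> str:
    # Standard Linux NFS mount; server and export names are placeholders.
    return f"mount -t nfs -o vers={vers} {server}:{export} {mountpoint}"

def smb_mount_cmd(server: str, share: str, mountpoint: str, user: str) -> str:
    # cifs-utils style SMB/CIFS mount; credential handling omitted for brevity.
    return f"mount -t cifs -o username={user} //{server}/{share} {mountpoint}"

cmd = nfs_mount_cmd("10.0.0.5", "/export/home", "/mnt/home")
```

A multi-protocol service has to expose the same data consistently through both paths, including mapping between POSIX and NTFS-style permissions.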
3. Data Protection
There are several points to consider with data protection for file shares. Snapshots are key to guaranteeing point-in-time recovery points for cases where data is corrupted, infected, or accidentally deleted, and it should be easy and fast to restore an up-to-date copy from them. Native cloud provider snapshots are restored lazily, which means not all the data may be ready when you need it, and the costs for creating the initial copy can be high. Another challenge is related to application-aware snapshots: the snapshot mechanism should be able to guarantee consistent recovery for databases or any other application.
Another aspect of data protection is disaster recovery (DR). The DR solution needs to ensure reliable failover and failback processes, as well as automatic syncs to keep the secondary copy up to date, and regular testing. All this needs to be done while maintaining the copy at reasonable costs, as the DR copy is a complete copy of the primary share.
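Keeping point-in-time copies useful without letting their storage costs balloon usually comes down to a retention policy. The sketch below is a simplified grandfather-father-son-style pruning rule, with made-up retention windows, showing how a schedule decides which snapshots to keep.

```python
from datetime import datetime, timedelta

def prune_snapshots(snapshots, now, keep_all_days=7, keep_daily_days=30):
    """Decide which point-in-time snapshots to retain.

    Keep every snapshot from the last `keep_all_days` days, then one
    snapshot per calendar day up to `keep_daily_days` days, and drop
    the rest. Returns the sorted list of snapshot timestamps to keep.
    """
    keep = set()
    seen_days = set()
    for ts in sorted(snapshots, reverse=True):  # newest first
        age = now - ts
        if age <= timedelta(days=keep_all_days):
            keep.add(ts)
        elif age <= timedelta(days=keep_daily_days) and ts.date() not in seen_days:
            keep.add(ts)
            seen_days.add(ts.date())
    return sorted(keep)
```

Real services layer weekly and monthly tiers on top of this, but the trade-off is the same: recovery-point granularity versus the storage cost of keeping every copy.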
4. Performance
Shared file services serve important workloads that require high, consistent performance and low latency. Data must be immediately usable no matter where it is requested from. It is important to have the ability to scale out or up on request, and to be able to move data between tiers non-disruptively and without causing performance issues. In case of an uptick in usage, the file service should be able to move to a more performant tier at a reasonable cost.
5. Backup & Archive
Preventing data loss requires a sufficient method for backing up file data. Data that may need to be kept for longer periods or compliance purposes requires an archiving solution for the files. Creating and restoring backups should not affect production-level performance. Cloud storage backups also need to be available for use at any time, consistent, and able to be restored easily. Granular restore should also be possible so that a single file can be recovered without requiring the rest of the volume or data set to be restored.
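Under the hood, keeping backups efficient typically means copying only what changed since the last run. The sketch below models the change-detection step with a simple (size, mtime) fingerprint per file; real backup tools also use checksums and block-level diffs, so this is a simplification.

```python
def changed_files(previous_index: dict, current_index: dict) -> list:
    """Return paths that need backing up: files that are new, plus files
    whose (size_bytes, mtime) fingerprint differs from the previous run.
    Each index maps path -> (size_bytes, mtime)."""
    return sorted(
        path for path, fingerprint in current_index.items()
        if previous_index.get(path) != fingerprint
    )
```

Granular restore is the inverse concern: the backup format must index individual files so one of them can be pulled out without rehydrating the whole volume.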
6. Storage Footprint and Costs
Since file storage is typically used to support massive data sets such as media libraries or home directories, the overall storage footprint and costs can be a considerable challenge even for the most established organizations. Huge cloud storage costs can be a detriment to further scaling or investment in new developments.
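The main lever for controlling that footprint is tiering: keeping only hot data on expensive storage. The per-GB prices below are purely illustrative placeholders (real cloud pricing varies by provider, region, and tier), but the sketch shows why moving cold data down a tier matters at scale.

```python
# Illustrative per-GB monthly prices only; real cloud pricing varies
# by provider, region, and tier.
PRICE_PER_GB = {"hot": 0.10, "cool": 0.02, "archive": 0.004}

def monthly_cost(gb_by_tier: dict) -> float:
    """Estimate the monthly storage bill from capacity per tier."""
    return sum(PRICE_PER_GB[tier] * gb for tier, gb in gb_by_tier.items())

# Moving infrequently accessed data out of the hot tier shrinks the bill.
all_hot = monthly_cost({"hot": 10_000})
tiered = monthly_cost({"hot": 2_000, "cool": 6_000, "archive": 2_000})
```

In this toy example, tiering a 10 TB data set cuts the estimated monthly bill to roughly a third, before accounting for retrieval and transfer charges on the colder tiers.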
7. Scalability and Agility
Shared file storage capacity needs to be able to meet the massive data requirements that are inherent to enterprise deployments. But file storage use cases can also see sudden, dramatic increases and decreases in usage. A file service’s ability to scale both up and down to meet those demand peaks and down periods is key.
A file service also needs to be able to meet all the demands of file sharing with a dispersed workforce. In globally dispersed teams, files need to be quickly accessible worldwide, without a high level of latency. Workarounds such as downloading and reuploading files can be impractical due to their inherent inefficiency and the sync issues they can create.
8. API and Automation
File storage requires users to be able to carry out complex tasks and workflows such as managing volumes, snapshots, and clones, setting up replications, etc. via automation and orchestration tools.
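As a sketch of what driving such workflows through an API looks like, the snippet below builds request payloads for creating a volume and snapshotting it. The endpoint paths and field names are hypothetical, not any specific vendor's API; a real integration would send these with an HTTP client and authentication.

```python
# Hypothetical REST payloads; endpoints and fields are illustrative only.
def create_volume_request(name: str, size_gb: int, protocol: str) -> dict:
    return {"method": "POST", "path": "/volumes",
            "body": {"name": name, "sizeGb": size_gb, "protocol": protocol}}

def create_snapshot_request(volume: str, snapshot: str) -> dict:
    return {"method": "POST", "path": f"/volumes/{volume}/snapshots",
            "body": {"name": snapshot}}

req = create_volume_request("home-dirs", 500, "nfs")
```

The point of an API-first service is that steps like these can be composed into pipelines by orchestration tools instead of being clicked through a console.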
9. Cloud Migration
In many cases, working with a cloud-based file service requires the ability to move file data from on-prem or other data repositories without having to refactor or re-architect your existing applications and processes (a lift-and-shift approach), which would otherwise be costly and time consuming.
10. Data Replication and Sync
Users need to be able to replicate file shares between various repositories and keep them synced for use cases such as DR, data collaboration, offline testing, offline analytics, and more. The costs for data replication and sync, in terms of both storage and traffic, will need to be considered, as massive amounts of data may need to be kept up to date between repositories.
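The traffic-cost concern can be made concrete with a simple sync planner: decide which files must cross the wire, and how many bytes that transfer represents. The index format below (path mapped to modification time and size) is an illustrative simplification of what real replication engines track.

```python
def sync_plan(primary: dict, secondary: dict):
    """Plan a one-way sync from primary to secondary.

    Each index maps path -> (mtime, size_bytes). Returns the files to
    copy (missing or newer on the primary) and the total bytes that
    would cross the wire, since transfer costs matter alongside storage.
    """
    to_copy = [path for path, (mtime, _size) in primary.items()
               if path not in secondary or secondary[path][0] < mtime]
    bytes_out = sum(primary[path][1] for path in to_copy)
    return sorted(to_copy), bytes_out
```

Incremental, block-level replication (rather than whole-file copies) is what keeps these transfer totals manageable for large shares.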
11. Security
Sending sensitive data to the cloud and having it accessible by vast numbers of users requires that the data is protected with encryption, efficient key management, and role-based access restrictions. Read more about securing your file share data in the cloud here.
12. Multicloud and Hybrid
The native cloud service providers each have their own attractive offerings for file usage, but not every enterprise will be willing to completely let go of their trusted on-prem data center or go all-in with just one cloud. Managing a file share between deployments in one or more clouds and an on-prem data center can be a challenge in terms of data synchronization, management, cost control, and more.
13. Kubernetes Integration
Kubernetes is the most popular way to orchestrate container usage. However, it can be difficult to share data between Kubernetes clusters and containers. One solution that simplifies this challenge is to deploy the containers in a single pod that shares a volume, which is possible when using NFS.
NFS can easily be used to attach volumes to pods, greatly reducing the hands-on tasks users would otherwise need to take care of when managing persistent storage. To do this, a file solution needs to be able to work with a persistent volume provisioner. Resizing NFS persistent volumes, mounting persistent volumes as Read/Write Many, creating separate storage classes for different mount parameters, protecting data with instant snapshots, and other requirements must also be supported.
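The Kubernetes objects involved can be sketched as follows: an NFS-backed PersistentVolume with the ReadWriteMany access mode that the text describes. The server address and export path are placeholders, and in practice a provisioner (such as a CSI driver) would create these objects dynamically rather than by hand.

```python
def nfs_persistent_volume(name: str, server: str, path: str, size: str) -> dict:
    """Build a Kubernetes PersistentVolume manifest (as a dict) backed by
    an NFS export. Server and path here are placeholder values."""
    return {
        "apiVersion": "v1",
        "kind": "PersistentVolume",
        "metadata": {"name": name},
        "spec": {
            "capacity": {"storage": size},
            "accessModes": ["ReadWriteMany"],  # many pods mount read/write
            "nfs": {"server": server, "path": path},
        },
    }

pv = nfs_persistent_volume("shared-home", "10.0.0.5", "/export/home", "500Gi")
```

A matching PersistentVolumeClaim with `accessModes: [ReadWriteMany]` is what pods then reference, which is how multiple pods end up writing to the same share.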
A File Sharing Services Solution: Cloud Volumes ONTAP
To address these and other file share service challenges, NetApp users can turn to BlueXP Cloud Volumes ONTAP.
Cloud Volumes ONTAP offers solutions for all of the challenges mentioned above, including:
- Dual protocol support for SMB/CIFS and NFS workloads—something none of the fully-managed file services in the public cloud offer
- Protective security features and data-focused ransomware protection to keep your file data safe
- Automatic backup to low-cost object storage and granular restore options via Cloud Backup
- High availability to minimize downtime and ensure data is never lost
- Cost efficiencies to reduce file storage costs
- Hybrid and multicloud operability and mobility
- File consolidation in the cloud between remote locations with BlueXP file caching
- File caching brings data closer to remote users, reducing latency and making cloud bursting easier
- More automation and management controls
To find out how this technology helps enterprises around the globe, check out these case studies of successful Cloud Volumes ONTAP file share deployments.