Businesses are producing huge volumes of data, and those volumes are growing at a phenomenal rate.
This data can be a business's greatest asset but it can also be its greatest liability if not managed with care.
The consequences of mismanaging this data can range from loss of intellectual property to the possibility of fines and other legal action.
Losing intellectual property may threaten the future of the business itself, while legal action can seriously hinder its operations.
IT departments are often caught between a rock and a hard place in this sense: they need to be able to control access to the data, ensure that they comply with the relevant regulations, and prevent data from leaking outside of the organization.
Such tasks are made even more complex if the organization is global, as each country has its own regulations with which the company must comply.
At the same time, the end users want to be able to collaborate in a seamless manner and access the data from any location at any time. If they don't have a better option, they will find their own ways to work around the constraints of their day-to-day jobs.
The abundance of cloud-based collaboration tools makes shadow data sharing within the business a real threat. End users may not be aware that their actions are exposing the business to risk, but they are: these solutions are often used without any data classification, policies, or data loss prevention controls in place.
To overcome these obstacles, there must be an efficient enterprise cloud collaboration strategy in place.
This article will show you how to successfully create that kind of strategy in order to avoid the pitfalls mentioned above. Using a three-phase approach, you will understand the key steps to enable your users to collaborate in an efficient, frictionless manner.
There are two key steps in this first phase: co-creating the strategy with the business, and understanding your constraints.
It is easy to make assumptions about how the business works when creating the cloud collaboration strategy. If the strategy is created in isolation, it will most likely be product-driven and aligned with how the IT department thinks the business wants to work.
The strategy needs to be co-created with the business. By involving the business and tapping into its insights, the strategy will be more closely aligned to what the company actually needs.
When it is time to implement, there will be less resistance to the changes, as the business will understand what you are delivering and will feel as if it had been part of the journey to produce the strategy.
Understanding your constraints when developing the strategy means reviewing the regulations that are applicable to your organization, and then defining the controls you’ll implement to ensure compliance.
These controls will typically include data classification, identifying data owners, and enacting data protection methods.
Data classification can be an extremely complex and time-consuming process to implement. To break it down, the key points are identifying data owners and defining data protection methods.
The IT department cannot be responsible for all data within the business; owners of the data must be identified, as they will be responsible for authorizing access to the data and for its accuracy, integrity, and lifecycle management.
Protection methods can range from access-level controls to encryption, depending on the data classification. Having the data classified allows the IT department to identify where extra layers of protection have to be deployed. These additional controls include encryption at rest, using capabilities such as NetApp Storage Encryption (NSE) or AWS S3 server-side encryption, and encryption in transit, using TLS or AWS client-side encryption.
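To make these controls concrete, here is a minimal boto3 sketch that enforces both properties on an S3 bucket: default server-side encryption at rest via KMS, and a bucket policy that rejects any request not made over TLS. The bucket name and KMS key alias are placeholders, not values from any real environment.

```python
import json
import boto3

s3 = boto3.client("s3")
BUCKET = "example-collab-bucket"  # hypothetical bucket name

# Encryption at rest: set a default SSE-KMS rule so every new object is
# encrypted even if the uploader forgets to request it.
s3.put_bucket_encryption(
    Bucket=BUCKET,
    ServerSideEncryptionConfiguration={
        "Rules": [{
            "ApplyServerSideEncryptionByDefault": {
                "SSEAlgorithm": "aws:kms",
                "KMSMasterKeyID": "alias/example-collab-key",  # hypothetical key alias
            }
        }]
    },
)

# Encryption in transit: deny any S3 request that does not use TLS.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyInsecureTransport",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:*",
        "Resource": [f"arn:aws:s3:::{BUCKET}", f"arn:aws:s3:::{BUCKET}/*"],
        "Condition": {"Bool": {"aws:SecureTransport": "false"}},
    }],
}
s3.put_bucket_policy(Bucket=BUCKET, Policy=json.dumps(policy))
```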
Keep in mind that backing up your data effectively is also important to make sure recovery will be possible in case of any disaster. Once you have documented your strategy and gained approval from the business stakeholders, you are ready to implement it.
This is the most exciting and challenging part of the project. It can seem like a daunting and never-ending task, but by delivering incremental value throughout the process, the business will quickly start to see benefits.
Once you have defined your data classifications, you need to implement them. This is typically done via tools or by a manual process.
The end state is that the data carries its classification, typically via a protective marking: for example, adding a header or watermark, or a classification label in the data's metadata.
Each approach has several challenges and in the majority of cases an organization will end up using both methods.
Data classification will never be perfect; there will always be a margin of error. But having the data classified at least reduces the risk of data being shared incorrectly, compared to having no data classification process in place.
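To make the protective marking concrete, here is a minimal sketch of how a classification could be attached to a file stored in S3, using boto3. The bucket, object key, and classification label are hypothetical; the same idea applies to document headers or native labels in O365 and G Suite.

```python
import boto3

s3 = boto3.client("s3")
BUCKET = "example-collab-bucket"    # hypothetical bucket
KEY = "finance/q3-forecast.xlsx"    # hypothetical object

# Option 1: stamp the classification into the object's user metadata at
# upload time. S3 user metadata is immutable, so re-classifying later
# means copying the object over itself with new metadata.
s3.upload_file(
    "q3-forecast.xlsx", BUCKET, KEY,
    ExtraArgs={"Metadata": {"classification": "confidential"}},
)

# Option 2: use an object tag instead, which can be updated in place as
# the classification evolves over the data's lifecycle.
s3.put_object_tagging(
    Bucket=BUCKET,
    Key=KEY,
    Tagging={"TagSet": [{"Key": "classification", "Value": "confidential"}]},
)
```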
The data classification will feed into the Data Loss Prevention (DLP) controls that can now be deployed.
Data loss prevention tools allow the detection and prevention of data breaches and unauthorized sharing of data.
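Under the hood, most DLP tools start from pattern matching over content. The sketch below shows the basic idea in Python, with two hypothetical detection patterns; commercial products layer keyword dictionaries, checksums, proximity rules, and machine learning on top of this.

```python
import re

# Two hypothetical detection patterns; real DLP products ship far richer
# detectors than these simple regular expressions.
PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def scan_text(text: str) -> list[str]:
    """Return the names of any sensitive-data patterns found in the text."""
    return [name for name, pattern in PATTERNS.items() if pattern.search(text)]

def allow_share(document_text: str) -> bool:
    """Gate an outbound share: block it if the content trips a pattern."""
    findings = scan_text(document_text)
    if findings:
        print(f"Sharing blocked, matched: {', '.join(findings)}")
        return False
    return True

allow_share("Invoice ref 4111 1111 1111 1111")  # blocked: resembles a card number
```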
There are a few different approaches to deploying DLP within your organization, especially if you have deployed Office 365 (O365) or Google G Suite Enterprise Edition, which are outlined in the table below.
| Options | Advantages | Disadvantages |
| --- | --- | --- |
| Native O365 capabilities | Built into the platform; policies apply directly to Exchange, SharePoint, and OneDrive; no extra tooling to deploy | Limited to the Microsoft ecosystem; advanced features tied to higher-tier licensing |
| Native G Suite capabilities | Built into the platform; straightforward to configure for Gmail and Drive | Limited to Google services; requires the Enterprise edition |
| Third-party DLP tools | Broader coverage across platforms, endpoints, and email; more advanced detection and reporting | Additional cost; more complex to deploy and operate |
| CASB | Visibility and control across many cloud services; helps discover shadow IT | Additional cost; requires API or proxy integration with each service |
Having defined methods in place for business collaboration will reduce the risk of users trying to find their own solutions.
There are a number of different approaches that can be implemented: internal web-based collaboration, cloud-based collaboration, and network-optimized file sharing. Each has its own advantages and disadvantages.
As organizations move to a hybrid cloud model, they will also need to replicate data into the cloud for collaboration.
Cloud vendors provide some native methods for doing this, but these are often optimized for one-way movement of data into the cloud. For example, AWS Storage Gateway provides an on-premises file server that integrates with S3.
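The underlying idea of these gateways can be sketched in a few lines of boto3: walk a local share and push its files into a bucket. This naive one-way copy is purely illustrative (the share path and bucket name are hypothetical); products such as Storage Gateway add caching, change detection, and retries on top of this basic pattern.

```python
import boto3
from pathlib import Path

s3 = boto3.client("s3")
BUCKET = "example-collab-bucket"        # hypothetical bucket
LOCAL_SHARE = Path("/mnt/team-share")   # hypothetical on-premises file share

# Naive one-way sync: upload every file under the share, keyed by its
# path relative to the share root.
for path in LOCAL_SHARE.rglob("*"):
    if path.is_file():
        key = path.relative_to(LOCAL_SHARE).as_posix()
        s3.upload_file(str(path), BUCKET, key)
```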
Storage vendors are also extending their capabilities into the cloud. This approach can simplify the operational management of the storage platforms, as the same tools manage both on-premises and cloud-based storage.
These vendors also provide additional capabilities that are not available natively from the cloud provider, which can give the business powerful ways to derive value from the data it generates.
As an example, NetApp Cloud Sync can be used to optimize the transfer of data into S3, data which can then be consumed by QuickSight to provide business intelligence reports for the company.
Once the data is in the cloud, it needs to be secured. AWS provides a number of approaches depending on an organization's specific compliance requirements: server-side encryption with Amazon S3-managed keys (SSE-S3), server-side encryption with AWS KMS-managed keys (SSE-KMS), and server-side encryption with customer-provided keys (SSE-C).
Learn more: The Ultimate AWS Encryption Guide
Typically, organizations will use either SSE-S3 or SSE-KMS unless they have stringent compliance rules that require them to utilize SSE-C. If an organization has very sensitive data that requires separation, they may also apply additional encryption independent of AWS.
There are numerous vendors on the AWS Marketplace who provide encryption capabilities that can be applied on top of the services AWS provide.
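To illustrate the three native options mentioned above side by side, here is a hedged boto3 sketch; the bucket, object keys, and KMS alias are placeholders. Note that with SSE-C, AWS never stores your key, so you must supply the same key again to read the object back.

```python
import os
import boto3

s3 = boto3.client("s3")
BUCKET = "example-collab-bucket"  # hypothetical bucket

# SSE-S3: Amazon S3 manages the encryption keys entirely.
s3.put_object(Bucket=BUCKET, Key="doc-sse-s3.txt", Body=b"data",
              ServerSideEncryption="AES256")

# SSE-KMS: a KMS key is used, adding key policies and an audit trail.
s3.put_object(Bucket=BUCKET, Key="doc-sse-kms.txt", Body=b"data",
              ServerSideEncryption="aws:kms",
              SSEKMSKeyId="alias/example-collab-key")  # hypothetical alias

# SSE-C: you supply the key per request; AWS uses it but never stores it.
customer_key = os.urandom(32)  # 256-bit key you must manage yourself
s3.put_object(Bucket=BUCKET, Key="doc-sse-c.txt", Body=b"data",
              SSECustomerAlgorithm="AES256",
              SSECustomerKey=customer_key)

# Reading an SSE-C object requires presenting the same key again.
obj = s3.get_object(Bucket=BUCKET, Key="doc-sse-c.txt",
                    SSECustomerAlgorithm="AES256",
                    SSECustomerKey=customer_key)
```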
Do you want to avoid dissatisfaction, frustration, technical problems, and general chaos?
If so, training users on the new systems and planning the user adoption process are necessary steps. People are resistant to change: it's your job to bring them along with you on the journey. Otherwise, they won't buy into the new solutions.
The user adoption process has a number of key steps. Before you make any changes, the journey has to start with a clear communication plan. This plan will help keep the business informed as you move forward with the adoption; if people know what is coming, no one will feel shut out as changes start to be implemented.
Next, the new systems have to be tested with key users. This will surface any problems safely before they can cause harm. Once the system tests well, phased implementation can begin. At this stage, it's important to avoid big-bang cutovers when possible. Also, find champions within the company who will evangelize the adoption on your behalf. These thought leaders will help you as you begin to show the benefits of the new system moving forward.
As issues crop up, and they undoubtedly will, you need to be able to address them promptly and effectively; you don't want to risk losing support this late into the process. Finally, gathering feedback and acting on it to adjust along the way is an ongoing step.
What does successful information sharing look like?
Success is when the business can collaborate in a seamless and frictionless manner, one where users can concentrate on deriving benefits from the data rather than worrying about how they are going to access it or share that information with colleagues at different sites.
The collaboration tools should become transparent to the user; if you achieve this, then you know your efforts have been successful. Even if you work in a highly regulated and secure environment, with the right level of planning you can still design and implement a collaboration strategy that brings value to the business and simplifies the end-user experience.
What will failure look like?
Imagine shadow data sharing using cloud-based services in an unofficial, unmonitored manner, employing any number of different collaboration tools and islands of data.
This alternative not only prevents the business from gaining the full benefits of its data, it also poses a security threat to the IT department and to the health of the company itself. Failure looks like putting far too much at risk.
Want to get started? Try out Cloud Volumes ONTAP today with a 30-day free trial.