Regulations such as the Health Insurance Portability and Accountability Act (HIPAA) in the US and the EU’s General Data Protection Regulation (GDPR) are changing the way businesses run at every level, including the storage systems used to house data. Making sure that your system is compliant only goes so far: there are also the costs of compliance to keep in mind.
In this article we will summarize both GDPR and HIPAA, show which parts of these regulations fall within the remit of the storage administrator, and sketch out guidelines for architecting storage solutions that can offload cold data to cloud storage while staying compliant with both regulations.
These storage designs will ensure that your storage is compliant with both GDPR and HIPAA and that your NetApp All Flash FAS (AFF) or SSD-backed FAS systems remain cost-efficient using the Cloud Tiering service.
Data Responsibilities under GDPR
The EU’s General Data Protection Regulation, or GDPR, is the part of EU law covering privacy and the protection of the personal data of EU and EEA citizens. GDPR defines whether personal data may be collected, how much may be collected, and how that data must be stored, among many other rules that give individuals a greater level of control over their personal and sensitive information.
We will only be looking at how GDPR applies to storage, which for us means:
- Personal data must be stored securely.
- Only authorized users should access personal data.
- A person has the right to request the erasure of their data, which must be actioned within 30 days.
There are two tiers of fines for non-compliance or a data breach, and both are considerable. The first tier carries fines of up to €10 million or 2% of annual global turnover, whichever is higher, for failure to comply with the articles covering children’s consent, processing without identification, and certification. Fines as high as €20 million or 4% of annual global turnover, whichever is higher, apply to violations of GDPR’s articles on the principles of data processing, unlawful processing, consent, processing of special categories of data, data subjects’ rights, and transfers of data to third countries.
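The "whichever is higher" wording matters: for large companies the percentage dominates, while for small ones the fixed floor applies. A minimal sketch of that calculation (the function name and tier numbering are illustrative, not part of the regulation):

```python
def gdpr_max_fine(annual_global_turnover_eur: float, tier: int) -> float:
    """Upper bound of a GDPR fine for the two tiers described above.

    Tier 1: up to EUR 10 million or 2% of annual global turnover, whichever is higher.
    Tier 2: up to EUR 20 million or 4% of annual global turnover, whichever is higher.
    """
    if tier == 1:
        return max(0.02 * annual_global_turnover_eur, 10_000_000)
    if tier == 2:
        return max(0.04 * annual_global_turnover_eur, 20_000_000)
    raise ValueError("tier must be 1 or 2")

# A company with EUR 1 billion annual turnover faces up to EUR 40 million
# for a tier-2 violation, since 4% of turnover exceeds the EUR 20M floor.
print(gdpr_max_fine(1_000_000_000, tier=2))  # 40000000.0
```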
Data Responsibilities Under HIPAA
HIPAA, the Health Insurance Portability and Accountability Act, is United States legislation established to modernize the flow of healthcare information, to stipulate how personally identifiable information (PII) must be maintained and protected from fraud and theft, and to address limitations on healthcare insurance coverage.
The HIPAA legislation is composed of a number of rules, such as the HIPAA Privacy Rule and the HIPAA Security Rule, each of which lays out different requirements for safeguarding protected health information (PHI).
Though HIPAA has a wide scope, it is specific about how data is stored and moved. For storage administrators that means:
- Carefully controlling and monitoring access to software and hardware containing PHI.
- Ensuring that only authorized individuals are allowed to have access to the PHI data repository.
- Protecting the systems where PHI data is stored from intrusion.
- Encrypting all PHI in transit between repositories.
Considerations of Compliant Storage Architecture
So how do we ensure data is stored in compliance with both HIPAA and GDPR? As storage administrators, the easiest way is to make design decisions based around the regulations. Here’s an example: GDPR gives a person the right to have their data erased, and any erasure request must be actioned within 30 days. By keeping backups no longer than 29 days, or less, depending on the time required to process an erasure request, you can make sure that you’ll always be compliant.
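The retention arithmetic above can be sketched in a few lines. The function name and defaults here are illustrative, not part of any NetApp or GDPR tooling:

```python
def max_backup_retention_days(erasure_deadline_days: int = 30,
                              erasure_processing_days: int = 1) -> int:
    """Longest backup retention that still lets every copy of a subject's
    data age out before the GDPR erasure deadline expires.

    Assumes erased data disappears from backups only when the oldest backup
    containing it is rotated out, so retention plus the time needed to
    process the request must fit inside the deadline.
    """
    retention = erasure_deadline_days - erasure_processing_days
    if retention < 1:
        raise ValueError("processing time leaves no room for backups")
    return retention

print(max_backup_retention_days())                           # 29
print(max_backup_retention_days(erasure_processing_days=5))  # 25
```

In practice the processing time should include approvals and the erasure job itself, so size the margin generously.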
Both HIPAA and GDPR similarly state that your storage systems should be secured so that data is only accessible to authorized personnel and that it must be stored securely. Therefore, design your storage systems so that administrative access is verified via user-based authentication, to ensure proper logging. Care must also be taken that only authorized servers or desktops can access the data on the storage system—that means NAS shares or SAN volumes must be secured and the storage system defaults for new shares might require modifications to enhance security.
If your network is connected externally, whether to the internet or to a private network, firewalls should be used to protect your internal network and storage. If data is sent outside the company’s network, it must be encrypted (in HIPAA’s case, following NIST standards). It can also be easier to manage storage security when storage sits on its own network (a separate VLAN) and servers have a second network interface onto that network; this can also help ensure performant storage connectivity.
Finally, a person can request a copy of the data you hold about them, and that request must be fulfilled. You will therefore need to ensure that you can supply a complete copy of their data.
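Fulfilling such an access request means gathering every record for one subject across all the places it lives. A minimal sketch, using hypothetical in-memory stores in place of real databases and shares:

```python
import json

# Hypothetical stores standing in for real systems that hold personal data.
crm_records = [{"subject_id": "u42", "email": "a@example.com"}]
support_tickets = [{"subject_id": "u42", "ticket": "login issue"},
                   {"subject_id": "u7", "ticket": "billing question"}]

def export_subject_data(subject_id: str) -> str:
    """Collect every record for one data subject across all known stores
    and return a portable JSON document for the access request."""
    export = {
        "crm": [r for r in crm_records if r["subject_id"] == subject_id],
        "support": [t for t in support_tickets if t["subject_id"] == subject_id],
    }
    return json.dumps(export, indent=2)

print(export_subject_data("u42"))
```

The key design point is the inventory: you can only export (or erase) completely if every store holding personal data is enumerated somewhere.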
NetApp Cloud Tiering with Compliance
Let’s start with how on-premises NetApp AFF and SSD-backed FAS systems make it easy to design with compliance in mind. A large portion of the configuration is policy-based and can be set up according to best practices using:
- Role-Based Access Control to ensure that only authorized employees can administer the storage.
- A separate Storage Virtual Machine (SVM), which is effectively a secure separate entity, for storing data with personal information. It would have its own management and data network interfaces, ensuring access is restricted to a small, purpose-built network.
- NetApp’s support for NFSv4, which can restrict NFS shares, as well as SMB shares, by user-based access control rather than just by IP, and can mandate Kerberos-based user authentication, providing a higher level of security.
NetApp storage systems have unique data protection features, such as the ability to create NetApp Snapshot™ copies of the data at various stages during processing, in case you lose or corrupt the data and need to start again, and to replicate those copies to create a secondary copy for DR and/or backup purposes. Without a clean snapshot or secondary copy, restoring your system will cost time, money, or both.
In data processing, the bottleneck is normally storage: it can never be as fast as the CPU requires, so you need high-performance storage, which is why you have NetApp All Flash FAS systems. But high-performance SSD storage (SAS- or NVMe-based) is not cheap, and the total required capacity could exceed your budget.
Using the NetApp Cloud Tiering service with AFF or SSD-backed FAS systems can reduce those costs by moving data that you are not using to cloud storage on Amazon S3, Google Cloud Storage, or Azure Blob. "Cold" data is automatically moved to an object storage bucket in your preferred vendor’s cloud. This is all accomplished without any application refactoring, and client machines are completely unaware of the data movement.
Data that is in use frequently (hot data) is kept on the high-performance on-premises AFF or SSD-backed FAS system, and performance is not affected.
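The hot/cold decision above boils down to how recently a block was accessed. A minimal sketch of such an access-driven tiering decision; the cooling period, names, and threshold here are illustrative, not the Cloud Tiering service's actual (configurable) policy:

```python
from datetime import datetime, timedelta

# Illustrative cooling period: blocks untouched for longer than this
# become candidates for the cloud tier.
COOLING_PERIOD = timedelta(days=31)

def tier_for(last_access: datetime, now: datetime) -> str:
    """'performance' keeps the block on the local SSD tier; 'cloud'
    marks it cold and eligible for movement to object storage."""
    return "cloud" if now - last_access > COOLING_PERIOD else "performance"

now = datetime(2024, 6, 1)
print(tier_for(datetime(2024, 5, 25), now))  # performance
print(tier_for(datetime(2024, 3, 1), now))   # cloud
```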
In this solution:
- Data stored in volumes can be encrypted using NetApp Volume Encryption (NVE), a software-based technology for encrypting data at rest. An encryption key accessible only to the storage system ensures that volume data cannot be read if the underlying storage is repurposed, returned, misplaced, or stolen.
- NetApp ONTAP data management software running on the AFF or SSD-backed FAS system ensures data transfers to the cloud and back (with or without a VPN) are protected in transit. Data moved between the local and cloud tiers is encrypted using TLS 1.2 with AES-256-GCM.
- On the cloud tier, all data encrypted by NVE remains encrypted when moved. Data that was not encrypted is encrypted server-side using AES-256-GCM, with encryption keys owned by the respective object store.
- Cold data is encapsulated and transferred to the cloud tier as objects; however, when you access data that has been moved to the cloud, only the required blocks are moved back to your on-premises NetApp system, which provides good performance and minimum egress charges.
- The associated storage costs are reduced, since the high-performance SSD storage holds in-use data while less frequently used data is moved to cheaper object storage in the cloud.
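The in-transit guarantee described above, TLS 1.2 with AES-GCM cipher suites, can be expressed on any TLS client. A minimal sketch using Python's standard `ssl` module (this is a generic client-side policy, not NetApp's implementation):

```python
import ssl

# Enforce TLS 1.2 as the protocol floor and restrict the TLS 1.2
# handshake to forward-secret AES-GCM cipher suites.
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_2
ctx.set_ciphers("ECDHE+AESGCM")

print(ctx.minimum_version)  # TLSVersion.TLSv1_2
```

Any connection made with this context refuses older protocol versions and non-AEAD ciphers outright, which is the right default posture for data leaving your network.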
In a storage world where mistakes can be very costly, storage architecture that deals with personal data must be designed to provide the required protection by default.
Building in compliance does not need to be complex or unduly expensive. NetApp All Flash FAS and SSD-backed FAS systems simplify the design of compliant systems and mitigate the costs with Cloud Tiering.
Stay compliant, increase your high performance capacity, and reduce on-prem costs: Try out Cloud Tiering today.