BlueXP Blog

The Zero Trust Model and How It Affects Data Management

Written by Semion Mazor, Product Evangelist | Jun 27, 2022 1:06:18 PM

The zero trust security model assumes that all traffic within a corporate environment is hostile until proven otherwise. You can think of it as whitelisting taken to the extreme. At its core, zero trust is designed to enable digital transformation through a series of steps that protect modern environments and monitor and secure an organization’s data, both of which make it especially relevant to ransomware protection.

Here, we’ll have a look at the tight relationship between zero trust and data management and discuss how NetApp Ransomware Protection helps organizations uphold zero trust architecture.

What Is the Zero Trust Model?

The zero trust model is an approach to cybersecurity that applies ongoing validation to data and systems access based on the assumption that the source of the request is, until proven otherwise, untrustworthy. The zero trust model is a response to the rise of distributed, API-based applications based on cloud-native architectures such as container or function-based microservices.

The goal of zero trust security is to move beyond the traditional network-based approach to trusting transactions and organizational resources, enabling better understanding and validation of data and information flows at any given point in time.

The zero trust model eventually evolved to become the Zero Trust eXtended (ZTX) Framework. In this updated approach, zero trust security shifted from a “data-centric” model to one that was focused on “people-centric perimeters.” The new ZTX framework takes things a step further by requiring verification from all devices, people, and workloads that request access to data at any time, even if they’re already part of the network.

To establish a successful zero trust model, there are five steps to follow:

  1. Identify sensitive data: Divide your sensitive data into three classes: public, internal, and confidential. Doing this will allow you to establish groups of data that represent their own microperimeter.
  2. Map the flow of sensitive data: Observe how data moves in your network in order to optimize its movement and create micronetworks.
  3. Architect zero trust microperimeters: After your data is identified and you understand its flow, establish a micronetwork around each one while considering the best form of security to use.
  4. Use security analytics to monitor your zero trust ecosystem: Data analytics and logs can help you pinpoint malicious behavior in all your microperimeters.
  5. Embrace security orchestration and automation: Apply automation tools and create policies that will help reduce the operational overhead of manual response tasks.
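
Step 1 can be sketched in code: classify data resources into the three classes named above, then group them so each class forms its own microperimeter. This is a hypothetical illustration; the resource attributes and classification rules are invented for the example, not drawn from any particular product.

```python
# Hypothetical sketch of Step 1: tag data resources by sensitivity class
# so each class can be grouped into its own microperimeter.
# All attribute names and rules here are illustrative assumptions.

SENSITIVITY_CLASSES = ("public", "internal", "confidential")

def classify(resource: dict) -> str:
    """Assign a sensitivity class using simple illustrative rules."""
    if resource.get("contains_pii") or resource.get("contains_credentials"):
        return "confidential"
    if resource.get("internal_only"):
        return "internal"
    return "public"

def build_microperimeters(resources: list[dict]) -> dict[str, list[str]]:
    """Group resource names by class; each group maps to one microperimeter."""
    groups: dict[str, list[str]] = {c: [] for c in SENSITIVITY_CLASSES}
    for r in resources:
        groups[classify(r)].append(r["name"])
    return groups

resources = [
    {"name": "marketing-site", "contains_pii": False},
    {"name": "hr-records", "contains_pii": True},
    {"name": "wiki", "internal_only": True},
]
perimeters = build_microperimeters(resources)
print(perimeters)
# {'public': ['marketing-site'], 'internal': ['wiki'], 'confidential': ['hr-records']}
```

In a real deployment, the classification rules would come from a data discovery and tagging tool rather than hand-written conditions.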

John Kindervag, the analyst widely credited with creating the zero trust model, recently participated in a podcast in which he stressed that the zero trust model is not binary (trusted versus untrusted) but rather a continuous assessment of how confident we are that a system is performing in a secure manner. As such, he outlined four design principles for a zero trust framework:

  1. It must always be aligned with the business and its desired business outcomes.
  2. It must be designed from the inside out, with the first question being, “What are you trying to protect?”
  3. It must control access and grant privileges on a granular, need-to-know basis.
  4. It must inspect traffic at the application level (layer 7) and enforce layer 7 controls based on data packet contents.

The Role of Data Management in Zero Trust

In all of its permutations, risk-based data management has always been at the core of the zero trust security model. Here are some of the key ways that data management aligns with zero trust:

  • Visibility: You cannot secure what you don’t know exists. A data management solution automatically and continuously discovers and maintains an inventory of data resources across the corporate environment.
  • Classification: Not all data is created equal. A data management solution uses metadata and other tagging techniques to classify data resources according to their criticality and sensitivity.
  • Granular Usage Policies: A data management solution tracks who owns a data resource, which users (human or device) or workloads need access to a data resource, and what actions a given user or workload can carry out on the data (read-only, read-write, modify, delete, etc.).
  • Data Loss Protection: A data management solution manages backups, replications, archiving, encryption, and other data loss protection methods.
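
The granular usage policies described above can be illustrated with a default-deny lookup: an action is permitted only if a policy explicitly grants it to that principal on that resource. The policy table and names below are hypothetical, not drawn from any particular data management product.

```python
# Illustrative sketch of a granular, default-deny usage policy check:
# which principal (user, device, or workload) may perform which action
# on which data resource. All entries are hypothetical examples.

POLICY = {
    # (principal, resource): set of allowed actions
    ("analytics-service", "sales-db"): {"read"},
    ("backup-agent", "sales-db"): {"read", "snapshot"},
    ("dba-team", "sales-db"): {"read", "write", "delete"},
}

def is_allowed(principal: str, resource: str, action: str) -> bool:
    """Default-deny: an action is permitted only if the policy grants it."""
    return action in POLICY.get((principal, resource), set())

print(is_allowed("analytics-service", "sales-db", "read"))   # True
print(is_allowed("analytics-service", "sales-db", "write"))  # False
print(is_allowed("unknown-user", "sales-db", "read"))        # False
```

The important design choice is the fallback to an empty set: any principal or resource the policy doesn't know about gets no access, which is exactly the zero trust posture.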

Zero trust implemented at the data layer verifies that a user is indeed who or what they assert themselves to be and then enforces data usage policies when granting privileges. The zero trust framework also continuously inspects traffic at the packet level and monitors activity to ensure that the system is behaving securely, as expected. Anomalous traffic or behavior triggers automated data protection workflows that either prevent or mitigate identified threats.  
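
The monitoring idea above — learn normal behavior, then flag sharp deviations that trigger protection workflows — can be sketched with a simple statistical baseline. This is a generic illustration (a z-score on activity rates), not the detection method of any specific product.

```python
# Hypothetical sketch of baseline-based anomaly detection: learn normal
# activity (e.g., files modified per minute) and flag observations that
# deviate sharply, which could then trigger a data protection workflow.

import statistics

def is_anomalous(baseline: list[float], observed: float,
                 z_threshold: float = 3.0) -> bool:
    """Flag observations more than z_threshold standard deviations from the mean."""
    mean = statistics.mean(baseline)
    stdev = statistics.stdev(baseline)
    if stdev == 0:
        return observed != mean
    return abs(observed - mean) / stdev > z_threshold

normal_rates = [10, 12, 9, 11, 10, 13, 10, 11]  # observed normal activity
print(is_anomalous(normal_rates, 11))   # False: within the normal range
print(is_anomalous(normal_rates, 400))  # True: possible mass encryption event
```

Production systems use far richer models (per-entity baselines, seasonality, ML), but the principle is the same: anomaly relative to a learned baseline, not a fixed rule.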

How to Deploy a Zero Trust Architecture

Once again we turn to zero trust expert John Kindervag for the most up-to-date guidelines on how to deploy a zero trust architecture. These are his five deployment steps:

  1. Understand what you’re protecting: Identify the DAAS (data, apps, assets, services) elements to be protected and then put each element, one at a time, into a single protect surface with its own microperimeter controls and filters.
  2. Understand how the system works: Map transaction flows in order to build a baseline of normative interaction among users, resources, applications, services, and workloads.
  3. Build an environment-agnostic zero trust architecture: This should work across private/public clouds, on-prem data centers, SD-WANs, endpoints, SaaS applications, and so on. Each protect surface defined during Step 1 will require its own unique zero trust architecture.
  4. Create policy: At this point, you are ready to define the policies that determine who can access which protect surfaces, when (and for how long), where, why, and how. The policy engine becomes an integral part of the zero trust architecture.
  5. Monitor and maintain: Using correlated logs, machine learning, AI, and other advanced data analytics methods, turn telemetry into real-time insights into system and data security. During this step, it’s important to integrate threat intelligence, SIEM, and intrusion protection systems into your zero trust technology stack.
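
Step 4 ("Create policy") can be sketched as a rule that must match on every dimension — who, what, where, and when — before access is granted. The field names, time window, and policy entries below are illustrative assumptions, not a real policy engine.

```python
# A minimal, hypothetical sketch of Step 4: a default-deny policy that
# answers who can perform what action, on which protect surface, and when.
# All names and rules are invented for illustration.

from datetime import time

POLICIES = [
    {
        "who": "payroll-app",                # verified identity of the requester
        "what": "read",                      # permitted action
        "where": "hr-protect-surface",       # the protect surface being accessed
        "when": (time(8, 0), time(18, 0)),   # allowed time window
    },
]

def evaluate(who: str, what: str, where: str, at: time) -> bool:
    """Default-deny: grant access only if some policy matches every dimension."""
    for p in POLICIES:
        start, end = p["when"]
        if (p["who"] == who and p["what"] == what
                and p["where"] == where and start <= at <= end):
            return True
    return False

print(evaluate("payroll-app", "read", "hr-protect-surface", time(9, 30)))    # True
print(evaluate("payroll-app", "delete", "hr-protect-surface", time(9, 30)))  # False
print(evaluate("payroll-app", "read", "hr-protect-surface", time(23, 0)))    # False
```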

As with any new strategic initiative, you should start small and expand incrementally as you gain experience. Your zero trust architecture should start with small and low-risk protect surfaces. As you add more and higher-risk protect surfaces, you can optimize and harden the zero trust architecture until you’re ready to include mission-critical systems and resources.

NetApp Ransomware Protection and Zero Trust

NetApp now offers a way to coordinate all your zero trust data efforts with its Ransomware Protection offering. Ransomware Protection brings together, in one place, NetApp’s zero trust-compliant capabilities for protecting an organization’s entire data estate against ransomware and other data theft exploits.

With a layered defense approach and NetApp’s Ransomware Protection, users can elevate an organization’s data security posture across complex multicloud and hybrid environments.

Key features and benefits include:

  • Automatically categorizes and locates sensitive data.
  • Replicates data efficiently with logical air gaps, using NetApp Snapshot technology for backup and disaster recovery across multiple accounts and regions, with granular recovery points.
  • Automatically learns normal usage and storage behavior as a baseline to detect anomalies through User and Entity Behavior Analytics (UEBA).
  • For ONTAP 9.10.1 and later, uses built-in on-box machine learning that analyzes volume workload activity and data entropy to automatically detect ransomware.
  • Triggers alerts if user, data, or storage anomalies are detected, and automatically creates a NetApp Snapshot recovery point for near-zero RPO/RTO should recovery be required.
  • Blocks malicious files and user accounts.
  • Avoids costly downtime and ensures rapid recovery through:
    • File-level forensics to identify which files to restore.
    • Analysis of the blast radius of the attack.
    • Instant data restoration from a specific point in time to a user-specified location, either full volumes or individual files.
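
The entropy signal mentioned in the feature list can be illustrated with Shannon entropy: data written by ransomware is encrypted and looks close to random (approaching 8 bits per byte), while typical plaintext scores much lower. This is a generic sketch of the underlying idea, not NetApp's actual on-box detection algorithm.

```python
# Generic sketch of entropy-based ransomware detection: encrypted
# (ransomware-written) data has near-maximal entropy, plaintext does not.
# Thresholds and sample data are illustrative only.

import math
import os
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Shannon entropy in bits per byte (0.0 for empty, up to 8.0 for random)."""
    if not data:
        return 0.0
    counts = Counter(data)
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

plaintext = b"the quick brown fox jumps over the lazy dog " * 50
random_like = os.urandom(2048)  # stands in for ciphertext

print(round(shannon_entropy(plaintext), 2))    # well below 8 bits/byte
print(round(shannon_entropy(random_like), 2))  # close to 8 bits/byte
```

A real detector combines this signal with workload activity (write rates, file extension churn) because high entropy alone also describes legitimately compressed or encrypted files.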

Learn more about NetApp’s Ransomware Protection and strengthen your zero trust efforts.

FAQs

What is the concept of the zero trust model?

The underlying concept of the zero trust model is that no traffic—whether external or internal, lateral or ingress—can be trusted. The zero trust model assumes that the corporate network’s outer defense layers have already been breached and additional layers must be implemented in order to safeguard network-connected assets. In recognition of the fact that the threat landscape is highly dynamic, the zero trust model promotes granular security guardrails that adapt automatically to ever-changing conditions and contexts.

What are the core principles of the zero trust model?

The core principles of the zero trust model are:

  • Every request to access a system must be verified and validated, regardless of the requester’s IP address, purported identity, or type (human, device, service, etc.).
  • If access privileges are granted, they must be the absolute minimum required to carry out the request.
  • All traffic must be monitored continuously at the data packet level in order to quickly detect and block anomalous content and behavior.

What are the three stages of the zero trust security model?

The three main stages of implementing a mature zero trust security model are:

  • Mapping and visualization: Discover and categorize all resources according to their risk profile. Map which entities interface with those resources and for what purpose.
  • Establish security policies: Establish automated and orchestrated security policies that promote real-time risk mitigation and, should a breach take place, limit its blast radius. The key principles should be granular segmentation, least privilege, and ongoing behavioral analytics.
  • Extend and optimize: Start with the most critical resources and continue incrementally until the entire IT infrastructure is protected, then leverage both forensics and performance metrics to continuously optimize security policies.