
Fundamentals of Securing Kubernetes Clusters in the Cloud

Kubernetes, by design, is meant to be a multi-tenant system where a cluster can have several users who share resources and access rights. For this reason, a compromise in a single cluster component or service can lead to compromises in the overall framework. On account of its foundational intricacies, securing Kubernetes deployments requires a collective adoption of security best practices, tools, and principles. Additionally, approaches toward securing a Kubernetes ecosystem differ from securing a traditional framework.

In this article, we discuss the fundamentals of Kubernetes security, challenges in administering comprehensive security, and best practices for securing Kubernetes workloads and Kubernetes storage in the cloud.


Kubernetes Cloud Security Fundamentals

Although Kubernetes offers built-in security features, they often aren't enough on their own. A Kubernetes deployment is a dynamic ecosystem composed of multiple objects and deployment environments, and no single security platform can address threats across every layer of the framework. Securing Kubernetes clusters therefore requires a multi-pronged approach.

4Cs of Cloud Security for Kubernetes Workloads

Cloud-native security follows a Defense-in-Depth security model, which implements security across multiple layers (including network, cluster, and code) and dimensions (including namespaces, usernames, and runtime class names) of an organization's tech stack. In a Kubernetes ecosystem, this relates to the following four layers:

  1. Code Security

    Also commonly known as standard application security, this refers to the collective set of testing, scanning, and auditing practices that ensure the source code is secure. Some of these practices include:

    • Vulnerability scanning - Consistent testing of the source code and third-party dependencies to ensure they are free of security flaws
    • Version control - Enforces tracking of code modifications while ensuring containers use the latest code versions with security patches and updates
    • Static code analysis - Helps teams identify flawed configurations and security issues before they are deployed into containers
  2. Container Security

    As containers package application source code and related dependencies to run workloads on a Kubernetes cluster, configuration conflicts and vulnerabilities within container runtime engines (CREs) and images have the potential to compromise the security of an entire cluster.

    Considerations for container security include:

    • Process isolation - Using runtime classes to run containers under separate runtime configurations, isolating workloads with different trust levels from one another
    • Image signing - Signing legitimate container images to ensure containers run only images from trusted sources that host secure content
    • OS dependency security - Scanning container images for known vulnerabilities to ensure the packages and dependencies don’t contain security flaws
  3. Cluster Security

    Kubernetes runs applications in groups of machines (nodes) known as clusters. Production-grade cluster nodes often reside in multiple environments, sometimes in a mix of cloud and on-premises deployments. In such distributed instances, securing a cluster remains an intricate affair as it relies on the comprehensive security of cluster services, networking, and components.

    Considerations for cluster security include:

    • API security - Securing API-driven communication with robust authentication and role-based access controls
    • Secrets management - Creation of secrets independently of the pods that consume them, storing them in etcd, and enforcing encryption at rest
    • Controlling workload capabilities at runtime - Use of policies, resource quotas, and limit ranges to control the actions of objects within the cluster
  4. Cloud Security

    This refers to securing the underlying infrastructure that hosts Kubernetes clusters. Although cloud security is partially handled by the provider that operates the platform, it's the responsibility of cluster administrators to consult the respective platform documentation and configure the platform for regulatory compliance, security best practices, and automated threat mitigation.

    Security best practices and hardening documentation for the major cloud providers can be found here:

    • Amazon Web Services - https://aws.amazon.com/security/
    • Google Cloud Platform - https://cloud.google.com/security/
    • IBM Cloud - https://www.ibm.com/cloud/security
    • Microsoft Azure - https://docs.microsoft.com/en-us/azure/security/azure-security
    • VMware vSphere - https://www.vmware.com/security/hardening-guides.html

How Kubernetes Administers Cluster Security

Administering cluster security is a collaborative undertaking that relies on shared responsibilities between developers, cluster administrators, and security teams. Kubernetes also ships with several native capabilities that simplify securing workloads. These include:

Network policies
Network policies help secure cluster access by defining and enforcing traffic rules between cluster endpoints and pods. These policies specify which named ports, port numbers, or protocols can be used to direct the flow of traffic and are typically applied to pods using labels and selectors.
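
Below is a minimal NetworkPolicy sketch to illustrate the idea; the namespace, pod labels, and port are hypothetical and would need to match your own workloads. It permits traffic to pods labeled app=api only from pods labeled app=frontend on TCP port 8080:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-api   # illustrative name
  namespace: demo               # illustrative namespace
spec:
  podSelector:
    matchLabels:
      app: api                  # the policy applies to pods labeled app=api
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend     # only pods labeled app=frontend may connect
      ports:
        - protocol: TCP
          port: 8080            # a named port could be used here instead

Note that NetworkPolicy objects are only enforced when the cluster's network plugin supports them.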

Kubernetes secrets
Secrets store base64-encoded sensitive data such as passwords, tokens, and encryption keys, which can be injected into containers without being exposed in pod specifications. Since base64 is an encoding rather than encryption, access to Secrets should be restricted with RBAC and encryption at rest should be enabled. Kubernetes service accounts also simplify access management by automatically issuing tokens that are used to access the Kubernetes API securely.
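
A minimal sketch of defining a Secret and consuming it as environment variables; the names, image, and credential values are purely illustrative:

apiVersion: v1
kind: Secret
metadata:
  name: db-credentials          # illustrative name
type: Opaque
data:
  username: YWRtaW4=            # base64 of "admin"
  password: czNjcjN0            # base64 of "s3cr3t"
---
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
    - name: app
      image: registry.example.com/app:1.0   # illustrative image
      envFrom:
        - secretRef:
            name: db-credentials            # injects both keys as environment variables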

Role-based access controls (RBAC)
Kubernetes natively offers a built-in RBAC framework for defining security permissions at the resource level: a RoleBinding restricts permissions within a namespace, while a ClusterRoleBinding grants access privileges across the entire cluster. Role-based access controls ensure that containers and cluster users can only access the resources they need to perform their intended functions.
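
As a sketch, the following Role and RoleBinding grant a hypothetical service account read-only access to pods in a single namespace; the names and namespace are illustrative:

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader              # illustrative name
  namespace: demo               # illustrative namespace
rules:
  - apiGroups: [""]             # "" refers to the core API group
    resources: ["pods"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: demo
subjects:
  - kind: ServiceAccount
    name: app-sa                # illustrative service account
    namespace: demo
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io

Using a ClusterRole and ClusterRoleBinding instead would grant the same verbs across every namespace in the cluster.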

API authentication
To associate HTTP requests made to the API server with their origin, Kubernetes supports several authentication mechanisms, such as client certificates, bearer tokens, and authenticating proxies. Authentication identifies the source of a request (a user account or service account) through attributes such as username, user ID (UID), and group membership. Once the source has been authenticated, an authorization mode such as RBAC, ABAC, or Webhook determines whether the request is permitted to proceed.
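
For reference, authentication and authorization are typically wired up through kube-apiserver flags. The fragment below is a sketch of a kube-apiserver static Pod manifest of the kind kubeadm generates; the file paths are illustrative and the exact flag set varies between distributions:

spec:
  containers:
    - name: kube-apiserver
      command:
        - kube-apiserver
        - --client-ca-file=/etc/kubernetes/pki/ca.crt            # trust anchor for client certificate authentication
        - --service-account-key-file=/etc/kubernetes/pki/sa.pub  # validates service account bearer tokens
        - --authorization-mode=Node,RBAC                         # authorization modes applied after authentication
        - --anonymous-auth=false                                 # reject requests that present no credentials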

TLS-based Ingress
Traffic to the Kubernetes API server is protected with TLS. For application traffic, the Kubernetes Ingress object allows TLS certificates, stored as Secrets, to be configured for cluster services. The Ingress controller also defines HTTP(S) rules that expose services to external clients securely.
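
A minimal sketch of an Ingress that terminates TLS for a service; the hostname, Secret name, and backend Service are hypothetical, and a TLS certificate must already be stored in the referenced Secret:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web                     # illustrative name
spec:
  tls:
    - hosts:
        - app.example.com       # illustrative hostname
      secretName: app-example-tls   # Secret holding the TLS certificate and key
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web       # illustrative backend Service
                port:
                  number: 80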

Challenges in Securing Kubernetes Clusters in the Cloud

Apart from the distinct approaches to deploying Kubernetes clusters, organizations may also configure the component parameters of a deployment differently. Applying a unified security policy across these deployment variations is rarely straightforward. Beyond this, several challenges affect the deployment of secure clusters in the cloud.

Insecure images and image registries

Images are the building blocks of containers in a cloud-native ecosystem. Although downloading images from public repositories is a popular approach for rapid deployment, vetting them for vulnerabilities and configuration conflicts before they enter the cluster is a common challenge. On the other hand, defining governance policies that allow only trusted registries and secure images adds overhead, since it requires continuous assessment of the changing threat landscape and cross-functional collaboration.

Lack of security visibility

Securing an environment is largely dependent on consolidated cluster observability. Since Kubernetes clusters typically involve numerous containers scheduled on different, highly distributed nodes, achieving edge-to-edge visibility is a universal challenge. This impedes the ability to track, monitor, and manage workloads in real time. Clusters that operate in a hybrid environment rely on different observability tools, further complicating the adoption of a unified monitoring and logging platform.

Compliance issues

As Kubernetes doesn’t offer the deployment of compliance standards out of the box, implementing compliance controls adds an additional layer of complexity to cluster management. In most cases, maintaining compliance requires the adoption of continuously changing industry standards, security benchmarks, and data retention policies. Regulatory compliance and audits also rely on component-level logging and continuous monitoring to support compliance checks and automated policy enforcement.

Insecure default controls

While Kubernetes implements various features to speed up application delivery, it doesn't provide secure configurations by default. For instance, Kubernetes doesn't apply network policies to pods out of the box, which means there are no communication restrictions between pods at deployment time. Given Kubernetes' insecure defaults and overly permissive configuration settings, cluster administrators need to undertake a thorough analysis of misconfigurations and inherent vulnerabilities before deploying clusters to the cloud.

Unsecured data at rest

Kubernetes doesn't encrypt application data at rest or in transit by default; even API resources such as Secrets are stored unencrypted in etcd unless encryption at rest is explicitly configured. As a result, sensitive data remains vulnerable if the cluster is compromised in a cyber attack. Although encryption is the standard defense against such exploits, administering it requires careful analysis and can be effort intensive. Securing Kubernetes persistent volumes also requires careful consideration of data replication and recovery to ensure node failures don't result in permanent data loss.
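
For API objects such as Secrets, encryption at rest can be enabled explicitly with an EncryptionConfiguration file referenced by the kube-apiserver --encryption-provider-config flag. The sketch below uses the aescbc provider; the key value is a placeholder that the operator must generate and protect:

apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets                 # encrypt Secret objects stored in etcd
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: <base64-encoded 32-byte key>   # placeholder; generate and store securely
      - identity: {}            # fallback so existing unencrypted data can still be read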

Best Practices for Securing Kubernetes Clusters in the Cloud

A Kubernetes ecosystem requires special considerations, tools, and best practices for comprehensive security configuration and management.

Using a VPN to secure cloud-based nodes

It's recommended to run cloud-based cluster nodes in private networks and to restrict open access to them from the internet. This can be done by using a Virtual Private Network (VPN), which isolates cluster resources and limits Secure Shell (SSH) access to nodes, thereby reducing the chances of an external entity reaching them.

Monitor network traffic to enforce visibility

Kubernetes workloads are continuously communicating and use cluster networks extensively. Monitoring network traffic and comparing it with enforced network policies helps cluster administrators identify malicious activities. Network policies also enforce the provisioning of network segmentation to restrict default access of node ports and containers, which subsequently ensures that a compromised workload doesn’t impact the security of neighboring workloads.

Enforce audit logging for all cluster traffic

As a recommended practice, audit logging should be enabled for all cluster events to identify unwanted requests to the API, especially access failures. This is done by defining which events to record in an audit policy file and passing it to the API server with the --audit-policy-file flag. Audit log entries help identify all instances of forbidden traffic and correlate them with subsequent access to cluster resources. Although logging mostly supports a reactive approach to mitigating real-time threats, archived logs help with comprehensive threat modeling and regulatory compliance.
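
A minimal audit policy sketch, to be passed to the API server via --audit-policy-file; the rule set is illustrative and should be tuned to your own compliance requirements:

apiVersion: audit.k8s.io/v1
kind: Policy
rules:
  # Record metadata for access to Secrets without logging their contents
  - level: Metadata
    resources:
      - group: ""
        resources: ["secrets"]
  # Record full request and response bodies for changes to RBAC objects
  - level: RequestResponse
    resources:
      - group: "rbac.authorization.k8s.io"
  # Catch-all: record metadata for everything else
  - level: Metadata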

Secure access to the kubelet service

Each node runs a kubelet service that exposes an API for launching pods, restarting pods, and reporting metrics, among other critical functions. If the exposed API is compromised, attackers can run malicious code within the cluster and compromise the entire deployment. To prevent the kubelet service from being exploited, use configurations that secure it against malicious access (a minimal configuration sketch follows the list below). Some of these configurations include:

  • Disabling anonymous access
  • Enforcing node restrictions on the API server to limit kubelet permissions
  • Closing all read-only ports
  • Turning off cAdvisor to avoid unnecessarily exposing information about nodes
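
Several of these settings can be expressed in a KubeletConfiguration file (node restrictions are enabled separately via the NodeRestriction admission plugin on the API server). A minimal sketch, assuming the API server is reachable for webhook authentication and authorization:

apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  anonymous:
    enabled: false              # disable anonymous access to the kubelet API
  webhook:
    enabled: true               # delegate authentication of kubelet API requests to the API server
authorization:
  mode: Webhook                 # delegate authorization decisions to the API server
readOnlyPort: 0                 # close the unauthenticated read-only port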

Use resource quotas and limit ranges to control container privileges

Resource quotas and limit ranges are used as constraints to restrict the resource consumption of a container, pod, or namespace. With these limits in place, Kubernetes administrators can cap the resources consumed by a process, reducing the risk of a Denial-of-Service (DoS) condition in which a single container starves other workloads of the resources they require.
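
A sketch of a ResourceQuota and LimitRange for a hypothetical namespace; the namespace name and the numbers are illustrative and should reflect actual workload requirements:

apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-quota              # illustrative name
  namespace: demo               # illustrative namespace
spec:
  hard:
    pods: "20"                  # at most 20 pods in the namespace
    requests.cpu: "4"
    requests.memory: 8Gi
    limits.cpu: "8"
    limits.memory: 16Gi
---
apiVersion: v1
kind: LimitRange
metadata:
  name: container-limits
  namespace: demo
spec:
  limits:
    - type: Container
      default:                  # applied when a container declares no limits
        cpu: 500m
        memory: 256Mi
      defaultRequest:           # applied when a container declares no requests
        cpu: 250m
        memory: 128Mi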

Configure namespaces and network policies for resource isolation

Namespaces provide a logical partitioning capability that enables different entities to work independently of each other on shared cluster infrastructure. To enable secure isolation, use namespaces to segment cluster resources, with defined limits, quotas, and network policies for the pods within each namespace.
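
As a sketch, the manifests below create a hypothetical team namespace and apply a default-deny ingress policy to every pod inside it, so that only explicitly allowed traffic reaches those pods:

apiVersion: v1
kind: Namespace
metadata:
  name: team-a                  # illustrative per-team namespace
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: team-a
spec:
  podSelector: {}               # an empty selector matches every pod in the namespace
  policyTypes:
    - Ingress                   # with no ingress rules defined, all inbound traffic is denied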

Reinforce policies with admission controllers

Admission controllers are Kubernetes plugins that act as gatekeepers: they intercept requests to the API server after authentication and authorization, and either mutate them to enforce policy or reject them outright. Admission controllers harden cluster security by defining and reinforcing a security baseline across the cluster. Pod admission controls are recommended to restrict privileged containers and to define secure access modes for the filesystem.
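
One built-in option is the Pod Security admission controller (enabled by default in recent Kubernetes releases), which enforces the Pod Security Standards per namespace through labels. A sketch, with an illustrative namespace name:

apiVersion: v1
kind: Namespace
metadata:
  name: production              # illustrative namespace
  labels:
    pod-security.kubernetes.io/enforce: restricted   # reject pods that violate the restricted profile
    pod-security.kubernetes.io/warn: restricted      # also surface warnings for violating manifests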

Use a third-party tool to secure the cluster

Manual assessment and administration of security controls over a distributed Kubernetes ecosystem is typically impossible. To reduce operational overheads, cluster administrators should adopt third-party tools that help with automated vulnerability scanning of cluster services. These tools help with a number of different security measures, including:

  • Implementing robust authentication controls
  • Adhering to security frameworks
  • Process whitelisting/blacklisting
  • Identifying exposed services
  • Automating restriction of malicious requests

Encrypt Persistent Volumes

Binding pods to storage through Kubernetes PersistentVolumeClaims (PVCs) rather than referencing PersistentVolumes (PVs) directly decouples workloads from the underlying storage implementation and reduces the risk of data loss from volume binding failures. Storage volumes should also be encrypted and access-controlled to prevent attackers from exploiting sensitive data at rest. It's also recommended to implement network segmentation policies that override default pod-to-pod communication, minimizing the blast radius of a successful attack and limiting data exfiltration from interconnected pods.
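
A sketch of requesting an encrypted volume through a StorageClass and a PVC. The provisioner and the encrypted parameter shown here are specific to the AWS EBS CSI driver and will differ on other platforms; the names and size are illustrative:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: encrypted-gp3           # illustrative name
provisioner: ebs.csi.aws.com    # provider-specific CSI driver
parameters:
  type: gp3
  encrypted: "true"             # provider-specific parameter requesting encryption at rest
reclaimPolicy: Delete
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data                    # illustrative claim name
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: encrypted-gp3
  resources:
    requests:
      storage: 20Gi             # illustrative size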

Conclusion

Kubernetes continues to be the default platform for deploying and orchestrating containerized applications due to its flexibility and scalability. While the platform offers built-in services for administering security, hardening cloud-based clusters is a complicated undertaking.

Securing a Kubernetes ecosystem typically starts with a diligent analysis of architectural flaws, platform dependencies, build pipeline integrity, and the exposure of core services. Beyond the initial assessment, robust runtime protection remains dependent on the collective adoption of best practices, tools, and security controls.

NetApp BlueXP Cloud Volumes ONTAP is a cloud-based, data management platform that provides users with the ability to manage persistent storage while addressing data security and protection complexities at the hypervisor level. The platform offers numerous features including the ability to create and manage volumes, snapshots, and clones, as well as support for data encryption, deduplication, and compression.

Learn more about Data Protection for Persistent Storage in Kubernetes Workloads with Cloud Volumes ONTAP and read here about our Cloud Volumes ONTAP with Kubernetes: Success Stories.

Sudip Sengupta, Technical Consultant