
Google Cloud Storage Encryption: Key Management in Google Cloud

Google Cloud Storage encryption is not one-size-fits-all: different encryption options will benefit some users more than others.

Client-side encryption is when a client encrypts data locally using their own encryption keys. When that data is stored in Google Cloud Storage, it is encrypted a second time using one of Google’s Server-Side Encryption (SSE) mechanisms. Client-side encryption gives the client the most control over encryption, but puts the burden of managing the keys, audit trails, and rotations on the client. One way to avoid this in Google Cloud Storage deployments is to use one of the platform’s SSE options.

In this article we’ll show you how to encrypt data using Google Cloud Storage and customer-supplied encryption keys. This can be advantageous for deployments in NetApp Cloud Volumes ONTAP for Google Cloud.

Google Cloud Storage SSE Methods

SSE Google Cloud Storage encryption options include:

  • Google-Managed Encryption Keys (GMEK): This is the default mode where everything is transparent to the client and only GCP IAM permissions are needed to access the bucket and its objects.
  • Customer-Supplied Encryption Keys (CSEK): The customer creates the encryption keys by their own means and these keys are supplied to the Google server to encrypt data before standard Google Storage Encryption is applied. Google only holds the key in memory for the duration of the operation. If the client loses the key, the data cannot be decrypted.
  • Customer-Managed Encryption Keys (CMEK): The customer creates the encryption keys using Google Key Management Service (KMS) and these keys are used by the Google server to encrypt data before standard Google Storage Encryption is applied.

GMEK is the simplest SSE and sufficient for many storage requirements. However, for sensitive data, CSEK and CMEK can provide additional security and thwart certain attack vectors. With CMEK, a client needs both IAM permissions to access bucket contents and KMS permissions to get a key. It is a form of two-factor authentication where both factors are “in-band” with respect to GCP.

The advantage to CMEK is that auditing, rotation, and storage of the keys are all done by Google. The disadvantage is that if an attacker gains enough GCP permissions, they can read the data in the CMEK encrypted bucket. With CSEK, the second factor is “out-of-band” with respect to GCP. GCP project owner permissions alone cannot compromise the data. For a small number of highly sensitive storage buckets one could store the CSEK in an air-gapped laptop with strong physical security measures. For a scalable solution for the enterprise, an encryption key management solution is likely required to manage all the CSEKs.

SSE on GCP involves a Data Encryption Key (DEK) which actually encrypts the data, and a Key Encryption Key (KEK) which protects the DEK. In CMEK, Google KMS supplies the KEK as directed by the customer.
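The DEK/KEK split can be sketched locally with openssl. This is only an illustration of the envelope-encryption idea, not what GCP runs internally (GCP generates and wraps the DEKs server-side), and all file names here are illustrative assumptions:

```shell
# Illustrative envelope encryption: a random DEK encrypts the data,
# and a KEK wraps (encrypts) the DEK. All file names are hypothetical.
echo "secret payload" > data.txt

# Generate a 256-bit DEK and a 256-bit KEK
openssl rand -hex 32 > dek.hex
openssl rand -hex 32 > kek.hex

# Encrypt the data with the DEK
openssl enc -aes-256-cbc -pbkdf2 -in data.txt -out data.enc -pass file:dek.hex

# Wrap the DEK with the KEK; only the wrapped DEK is stored with the data
openssl enc -aes-256-cbc -pbkdf2 -in dek.hex -out dek.enc -pass file:kek.hex

# To decrypt: unwrap the DEK with the KEK, then decrypt the data with the DEK
openssl enc -d -aes-256-cbc -pbkdf2 -in dek.enc -out dek.dec -pass file:kek.hex
openssl enc -d -aes-256-cbc -pbkdf2 -in data.enc -out data.dec -pass file:dek.dec
```

Whoever holds the KEK controls access to everything the DEKs protect, which is why the CSEK and CMEK options differ only in who supplies and stores that one key.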

CSEK requires the user to supply an AES-256 key which acts as the KEK used to protect the DEKs encrypting the data. The user generates the KEK using a standard library like openssl and then passes the key to the server to encrypt the DEKs which are generated server-side.
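For example, a base64-encoded 256-bit key of the kind CSEK expects can be generated with openssl:

```shell
# Generate a random 256-bit key, base64-encoded as CSEK requires
KEK=$(openssl rand -base64 32)
echo "$KEK"
```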

We will now walk through the steps to perform CSEK on GCP storage. First, we’ll need to define some environment variables. Since there are several ways that an attacker can brute-force project IDs, roles, and buckets, we will follow the best practice of adding a random suffix to the friendly names of resources.
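A minimal sketch of such variable definitions follows; every name below is an illustrative assumption, so substitute your own values:

```shell
# All names below are illustrative; substitute your own.
SUFFIX=$RANDOM                      # random suffix makes resource names hard to guess
PROJECT_ID=csek-demo-$SUFFIX
EXTERNAL_PROJECT_ID=csek-ext-$SUFFIX
BUCKET_NAME=csek-bucket-$SUFFIX
SERVICE_NAME=csek-sa-$SUFFIX
SERVICE_ACCOUNT=$SERVICE_NAME@$PROJECT_ID.iam.gserviceaccount.com
STORAGE_ROLE=storage.objectCreator
echo "$BUCKET_NAME $SERVICE_ACCOUNT"
```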

How to Encrypt Google Cloud Storage with CSEK

We will assume you have a default project where you are a project owner and billing is enabled called PROJECT_ID. We will demonstrate another best practice of separation of concerns (SoC) by setting up everything using the user jen-admin in PROJECT_ID, but delegating the encryption/decryption operations to a service account operating in a separate EXTERNAL_PROJECT_ID. Typically, the encryption service account would be created in PROJECT_ID and attached to a VM in EXTERNAL_PROJECT_ID and created as follows:

gcloud compute instances create INSTANCE_NAME --service-account=$SERVICE_ACCOUNT

For this demo, we will simply use the Google Cloud Shell in the EXTERNAL_PROJECT_ID. If you are following along, you can change PROJECT_ID to an existing project where you are the owner, or you can create a new project and enable billing on it. We will be switching between the terminal for PROJECT_ID and a GCP Cloud Shell for EXTERNAL_PROJECT_ID. To create a project, your owner account will also need to have `projectCreator` permission. If not, then you can set EXTERNAL_PROJECT_ID to be PROJECT_ID.

1. To start, paste the following into the terminal.
gcloud config set project $PROJECT_ID
gcloud config set compute/zone us-central1-f
gsutil mb gs://${BUCKET_NAME}
gcloud iam service-accounts create $SERVICE_NAME
2. Now we create the EXTERNAL_PROJECT_ID (still using jen-admin in the terminal).
gcloud projects create $EXTERNAL_PROJECT_ID
3. We now have a service account, but it presently has no permissions. We could add them using gcloud, but until you have all the roles and permissions memorized, it is easier to perform this step in the UI. Notice that the project does not appear in the “Recent” list until you start typing the name of the new project. Similarly, the service account we just created does not show up under IAM until we click the “ADD” button, since it has no privileges on the project yet.




If you prefer to stick with the command-line, we could search for pre-defined roles related to storage as follows:
$ gcloud iam roles list | grep -i storage

4. About 20 predefined roles are returned. We will choose between storage.objectAdmin and storage.objectCreator. The permissions for each role can be viewed with the describe command.

Here are the examples of the outputs:

$ gcloud iam roles describe roles/storage.objectAdmin
description: Full control of GCS objects.
etag: AA==
includedPermissions:
- resourcemanager.projects.get
- resourcemanager.projects.list
- storage.objects.create
- storage.objects.delete
- storage.objects.get
- storage.objects.getIamPolicy
- storage.objects.list
- storage.objects.setIamPolicy
- storage.objects.update
name: roles/storage.objectAdmin
stage: GA
title: Storage Object Admin

$ gcloud iam roles describe roles/storage.objectCreator
description: Access to create objects in GCS.
etag: AA==
includedPermissions:
- resourcemanager.projects.get
- resourcemanager.projects.list
- storage.objects.create
name: roles/storage.objectCreator
stage: GA
title: Storage Object Creator


5. Choose objectAdmin if you need to overwrite items. If the service account is for an audit log, there is no need to delete storage objects. To create such an account, choose objectCreator.

Note that the IAM role applies to the bucket as a whole. This is recommended, but if you need different permissions at an object level, you would use bucket ACLs instead of bucket IAM policies. We bind the IAM role to the service account as follows:
gcloud projects add-iam-policy-binding $PROJECT_ID \
  --member serviceAccount:$SERVICE_ACCOUNT \
  --role roles/${STORAGE_ROLE}

If you were instead using CMEK, you would want to create a custom role that combines the storage role and a KMS role, probably the predefined cloudkms.cryptoKeyEncrypterDecrypter role. The latter allows use of encryption keys, which would be supplied as the KEK instead of the CSEK KEK we will create below. With CSEK, the client is in full control of the keys, so no KMS permissions are required.

6. For some GCP resources like VMs, the permissions granted to the principal are sufficient. For resources with resource-based IAM policies, it is necessary to grant access on the resource as well. Only the bucket creator has default access. We tell our bucket to allow the service account to use it as follows:
gsutil iam ch serviceAccount:${SERVICE_ACCOUNT}:objectCreator gs://$BUCKET_NAME

7. We will now simulate attaching the service account to a VM in an external project by using the Cloud Shell for the EXTERNAL_PROJECT_ID. Initially, we are logged in as jen-admin (or whoever your owner user is).

First, we will paste in all our resource variable definitions. Be careful not to paste any values with ${RANDOM} in them, as you will get new values; use echo $PROJECT_ID in the original terminal to get the resolved value, for example. Then, jen-admin will request a service account key, activate the service account, and configure gsutil, which creates a ~/.boto file. The ~/.boto file is where the CSEK encryption_key can be specified.

gcloud iam service-accounts keys create key.json --iam-account $SERVICE_ACCOUNT
gcloud auth activate-service-account --key-file key.json
gcloud init

8. When prompted, select [1] Re-initialize. Then select the SERVICE_ACCOUNT and PROJECT_ID. The init creates a ~/.boto file. The prompt does not change, however; you can check who the current user is with gcloud auth list and note the * next to the active account. Boto configuration is not supported for user accounts, only service accounts. There does not appear to be a way of using CSEK apart from service accounts. CMEK, on the other hand, offers the gsutil kms encryption -k $KEY_PATH_ID gs://$BUCKET_NAME option in addition to ~/.boto for service accounts. Now we will generate the CSEK KEK.
python3 -c 'import base64; import os; print(base64.encodebytes(os.urandom(32)))'
sample output > b'39So8jZi8tSi/vgr9F3bBsCJOV3I//UoqbtWGbWVvN0=\n'
9. Find the commented out # encryption_key = line under the [GSUtil] section in the ~/.boto file and replace it with your version of the following:
encryption_key = 39So8jZi8tSi/vgr9F3bBsCJOV3I//UoqbtWGbWVvN0=
Note that the leading b'' wrapper and the trailing \n are stripped out.
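Alternatively, a variant of the key-generation one-liner can print the key with the wrapper and newline already stripped, so it can be pasted into ~/.boto as-is:

```shell
# Prints the 256-bit key as plain base64, with no b'' wrapper or trailing \n
python3 -c 'import base64, os; print(base64.b64encode(os.urandom(32)).decode())'
```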

10. At this point, we are finally ready to encrypt our data. First, we will create some test data and copy it to our bucket.
echo "test data" > test.txt
gsutil cp test.txt gs://$BUCKET_NAME

We can confirm that this object was indeed encrypted with our CSEK. The encryption_key in the .boto file is also used for decryption unless a decryption_key1 is defined.
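To our understanding, gsutil reports an "Encryption key SHA256" field for CSEK-encrypted objects (e.g. via gsutil stat), which you can compare against a hash computed locally from the key in ~/.boto. A sketch, using the illustrative key from this article rather than a real secret:

```shell
# Compute the base64 SHA-256 hash of a CSEK locally; the key value here is
# the illustrative one from this article, not a real secret.
KEY="39So8jZi8tSi/vgr9F3bBsCJOV3I//UoqbtWGbWVvN0="
HASH=$(printf %s "$KEY" | base64 -d | openssl dgst -sha256 -binary | base64)
echo "$HASH"
```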


With CSEK encryption in place, we now have a two-factor protection of sorts for our data. The location of the ~/.boto file with the encryption key is entirely up to the customer and out of band to GCP privileges. Obviously, this key should be better protected, perhaps with an enterprise key management system. If it were moved to an air-gapped system, then even complete GCP project takeover would not allow an attacker to decrypt the data in Google Cloud Storage.

Get More for Google Cloud Storage with Cloud Volumes ONTAP

While security is one of the most important aspects of enterprise storage deployment, there’s more than Google Cloud Storage encryption to consider when deploying on Google Cloud: you also want to make sure that your storage footprint is cost- and space-efficient, protected from data loss, and easy to replicate for dev/test purposes. That’s all possible with Cloud Volumes ONTAP for Google Cloud.

With Cloud Volumes ONTAP, Google Cloud users now have access to NetApp’s signature storage efficiency technologies, instant NetApp Snapshot™ copies, FlexClone® data clone capabilities, and full hybrid integration.

Yifat Perry, Technical Content Manager
