Secrets present a challenging dilemma for infrastructure-as-code. Today's solutions mostly converge on storing secrets in some external trusted system (Kubernetes Secrets, Docker Secrets, build-system secrets, Vault) outside of the code.

Using SOPS, we can check in encrypted secrets (e.g. connection passwords) along with the code. The only thing kept out of sight is the encryption key wrapping those passwords.

The added value is that SOPS understands structured files (JSON, YAML) and encrypts only the values, leaving the keys intact for easy inspection.

Example – this file could be checked into GitHub along with the code:

{
  "postgres_password": "ENC[AES256_GCM,data:PsyN..,type:str]",
  "aws_access_key": "ENC[AES256_GCM,data:PsyN..,type:str]"
}



macOS

$ brew install sops


Windows

  1. Go to the latest release page:
  2. Download sops-v3.5.0.exe (or whatever the latest version is)
  3. Rename the file from sops-v3.5.0.exe to just sops.exe
  4. Copy sops.exe to C:\Windows\System32
  5. Alternatively, put the file in any directory and add that directory to the Path environment variable

Encryption Key Configuration

The most important configuration for SOPS is which encryption key to use. In many cases this is the only configuration needed.

Create a .sops.yaml at the root directory of the project with one of the configurations outlined below.

The key can be any PGP key. Alternatively, the key can be provided by a Key Management Service (KMS) in AWS or Google Cloud, a good choice especially if the code will eventually be deployed to one of these cloud providers.


AWS KMS

A master key can be created in hardware security modules via AWS Key Management Service (KMS). We need the ARN of the key, with a corresponding AWS profile that has permission to use the key.

$ aws --profile myprofile configure
AWS Access Key ID [None]: KEY_ID
AWS Secret Access Key [None]: SECRET_ACCESS_KEY

# Create .sops.yaml at the project root
$ vi .sops.yaml
creation_rules:
  # If assuming roles for another account use "arn+role_arn".
  # See Advanced usage
  - kms: "arn:aws:kms:..."
    aws_profile: myprofile
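Multiple creation rules can be combined; SOPS uses the first rule whose path_regex matches the file being encrypted. A sketch (key ARNs elided, paths illustrative):

```yaml
creation_rules:
  # Files under prod/ are encrypted with the production KMS key
  - path_regex: prod/.*
    kms: "arn:aws:kms:..."
    aws_profile: myprofile
  # Everything else falls back to a PGP key
  - pgp: 0701C740FB8D24E9
```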

The permissions needed for key operations are:

"Action": [
  "kms:Encrypt",
  "kms:Decrypt"
]

For slightly more advanced use cases, we can also access a master key in another account using the AWS AssumeRole mechanism. This is particularly useful when we have one master key in the production account but want to also access it from our staging AWS account. See Advanced usage.
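With AssumeRole, the role ARN is appended to the key ARN with a + separator, as noted in the .sops.yaml comment above (sketch, ARNs elided):

```yaml
creation_rules:
  # The key lives in the production account; the role grants
  # the staging account permission to use it.
  - kms: "arn:aws:kms:...+arn:aws:iam::...:role/..."
```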

PGP (optionally via Keybase)

For personal use cases, a PGP key probably suffices. If you are a Keybase user, you already have a PGP key. We need to do the following:

  1. Install GPG
  2. Export private keys from Keybase
  3. Import the keys to local machine
  4. (optional) Remove passphrase from the key
  5. Create .sops.yaml at the project root
$ brew install gpg
$ gpg --import private_key.asc

gpg: key 0701C740FB8D24E9: secret key imported
... (take note of the key ID: 0701C740FB8D24E9)

# This is important for the passphrase screen to show up in console
$ export GPG_TTY=$(tty)
$ gpg --edit-key 0701C740FB8D24E9
gpg> passwd
# 1. Type current passphrase
# 2. Type "" (Blank)
# 3. Type "" (Confirm Blank)

$ vi .sops.yaml
creation_rules:
  - pgp: 0701C740FB8D24E9
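To let several people decrypt the same files, multiple PGP fingerprints can be listed comma-separated (the second key ID below is a placeholder):

```yaml
creation_rules:
  # A teammate's key ID would replace the placeholder
  - pgp: "0701C740FB8D24E9,TEAMMATE_KEY_ID"
```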

Encrypt / Decrypt files


Creating a new file via sops will launch an editor:

$ sops secret.enc.json
{
  "example_key": "example_value"
}

The resulting JSON retains the same keys, with the values encrypted. Metadata is added under an extra "sops" key in the JSON:

$ cat secret.enc.json
{
	"example_key": "ENC[AES256_GCM,data:PsyNr6jRJLIPN3P0tA==,iv:Ne63tk8f6uD9GLiHQoyrS/BrK4WL2I6+9Ul8nO6PkDw=,tag:rF8Hm4gm0+xlA3BqKivY7w==,type:str]",
	"sops": {
        ... (metadata here)
	}
}


Editing an existing file will also launch an editor and re-encrypt the file on save:

$ sops secret.enc.json

Encrypt / Decrypt Existing Files

$ sops -e secret.json > secret.sops.json

$ sops -d secret.sops.json > secret.json
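If your sops version supports the -i/--in-place flag (present in recent 3.x releases), a file can also be encrypted or decrypted in place, without a shell redirect:

```shell
$ sops -e -i secret.json   # encrypt secret.json in place
$ sops -d -i secret.json   # decrypt it back in place
```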

⚠️ Important – If the PGP key has a passphrase, make sure the GPG_TTY environment variable is set (as shown above) or you will run into problems:

Cannot decrypt with GPG 2.2.5 and SOPS 3.0.0 · Issue #304 · mozilla/sops

Integration Recipes

Many tools provide a plugin to directly read and write SOPS-encrypted files.


Terraform

Download the terraform-provider-sops plugin (carlpett/terraform-provider-sops) from GitHub:
$ mkdir -p ~/.terraform.d/plugins
$ curl -L -o ~/.terraform.d/plugins/ \

$ unzip ~/.terraform.d/plugins/ \
-d ~/.terraform.d/plugins

$ terraform init

Then we can define a data resource that automatically decrypts the SOPS JSON:

provider "sops" {}

data "sops_file" "secrets" {
  source_file = "secrets.enc.json"
}

# Using the decrypted values
provider "aws" {
  region     = "us-west-2"
  access_key = data.sops_file.secrets.data["aws_access_key"]
  secret_key = data.sops_file.secrets.data["aws_secret_key"]
}

Encrypted Private Keys

SOPS works with unstructured files as well. The data is encoded under a "data" key in the resulting JSON automatically:

$ sops -e private_key > private_key.sops
$ cat private_key.sops
{
  "data": "ENC[AES256_GCM,data:bVr.....",
  "sops": { ... }
}
$ rm private_key

Using the key, we can pipe the decrypted result straight to ssh-add without writing it to a file first:

ssh-add - <<< $(sops -d private_key.sops)

Python Script

SOPS was originally written in Python but has been reimplemented in Go. A pip package exists, but many features are missing; it is probably better to call the binary via subprocess:

import subprocess
b = subprocess.check_output(["sops", "-d", "private_key.sops"])
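Building on that, a small helper (hypothetical names, assuming the sops binary is on PATH) can decrypt a JSON secrets file straight into a dict:

```python
import json
import subprocess

def parse_secrets(plaintext: bytes) -> dict:
    # sops -d strips its own "sops" metadata key, so the
    # plaintext is just the original JSON document.
    return json.loads(plaintext)

def load_sops_secrets(path: str) -> dict:
    # Shells out to the sops binary; raises CalledProcessError
    # if decryption fails (e.g. no matching key available).
    return parse_secrets(subprocess.check_output(["sops", "-d", path]))
```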

Kubernetes Secrets

Create a new YAML file, but indicate to SOPS that only data and stringData are the keys to encrypt:

$ sops --encrypted-regex '^(data|stringData)$' secrets-mysecret.yaml

apiVersion: v1
kind: Secret
metadata:
  name: mysecret
type: Opaque
stringData:
  mySecret: hello123
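After saving, the file keeps its structure: only the values under the matched keys are encrypted, and a sops metadata section is appended (illustrative, values abbreviated):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: mysecret
type: Opaque
stringData:
  mySecret: ENC[AES256_GCM,data:...,type:str]
sops:
  encrypted_regex: ^(data|stringData)$
  # ... key and MAC metadata
```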

Apply by piping the decrypted output to kubectl:

sops -d secrets-mysecret.yaml | kubectl -n workflow apply -f -