Kubernetes Engine and its underlying container model provide increased scalability and manageability for applications hosted in the Cloud. It's easier than ever to launch flexible software applications according to the runtime needs of your system.

This flexibility, however, can come with new challenges. In such environments, it can be difficult to ensure that every component is built, tested, and released according to your best practices and standards, and that only authorized software is deployed to your production environment.

Binary Authorization (BinAuthz) is a service that aims to reduce some of these concerns by adding deploy-time policy enforcement to your Kubernetes Engine cluster. Policies can be written to require one or more trusted parties (called "attestors") to approve of an image before it can be deployed. For a multi-stage deployment pipeline where images progress from development to testing to production clusters, attestors can be used to ensure that all required processes have completed before software moves to the next stage.

The identity of attestors is established and verified using PGP public keys, and attestations are digitally signed with the corresponding PGP private key. This ensures that only trusted parties can authorize the deployment of software in your environment.

At deployment time, Binary Authorization enforces the policy you defined by checking that the container image has satisfied all required constraints, including verification by every required attestor that the image is ready for deployment. If the image passes, the service allows it to be deployed. Otherwise, deployment is blocked until the image is compliant.

What You'll Build

This codelab describes how to secure a GKE cluster using Binary Authorization. To do this, we will create a policy that all deployments must conform to, and apply it to the cluster. As part of the policy creation, we will create an attestor that can verify container images, and use it to sign and run a custom image.

The purpose of this codelab is to give a brief overview of how container signing works with Binary Authorization. With this knowledge, you should feel comfortable building a secure CI/CD pipeline, secured by trusted attestors.

What You'll Learn

What You'll Need

Because Binary Authorization concerns the security of your infrastructure, multiple people with different responsibilities will typically interact with it. In this codelab, you will act as all of them. Before getting started, it's important to explain the different roles you'll be taking on:

Deployer:

Policy Creator:

Attestor:

Each of these roles can represent an individual person, or a team of people in your organization. In a production environment, these roles would likely be managed by separate Google Cloud Platform (GCP) projects, and access to resources would be shared between them in a limited fashion using Cloud IAM.

As a Deployer:

Setting up the Environment

This codelab can be completed through your web browser using Google Cloud Shell. Click the following link to open a new session:

Open Google Cloud Shell

Enroll in the Alpha Whitelist

Because Binary Authorization has not yet been publicly released, you must sign up to have your account whitelisted before trying it out. Approval into the whitelist should be automatic.

Join Alpha Whitelist

Setting Your Project

Our first step is to set the GCP project you want to run the codelab under. You can find a list of the projects under your account with the following command:

gcloud projects list

When you know which project you want to use, set it in an environment variable so we can use it for the rest of the codelab:

PROJECT_ID=<YOUR_CHOSEN_PROJECT_ID>
gcloud config set project $PROJECT_ID

Creating a Working Directory

Through the course of this codelab, we will be creating a few configuration files. You may want to create a new directory to work out of:

mkdir binauthz-codelab ; cd binauthz-codelab

Enabling the APIs

Before using Binary Authorization, you must enable the relevant APIs on your GCP project:

#enable GKE to create and manage your cluster
gcloud services enable container.googleapis.com
#enable BinAuthz to manage a policy on the cluster
gcloud services enable binaryauthorization.googleapis.com

Alternatively, you can enable the APIs for your project through the Google Cloud Platform API Library.

Setting up a Cluster

Next, set up a Kubernetes cluster for our project through Kubernetes Engine. The following command will create a new cluster named "binauthz-codelab", located in the zone us-central1-a:

gcloud container clusters create binauthz-codelab --zone us-central1-a

Now, we can point our local environment at the cluster so we can interact with it using kubectl:

gcloud container clusters get-credentials binauthz-codelab --zone us-central1-a

Running a Pod

Now, let's add a container to the new cluster. The following command will create a simple Dockerfile we can use:

cat << EOF > Dockerfile
FROM alpine
CMD tail -f /dev/null
EOF

This container will do nothing but run the "tail -f /dev/null" command, which will cause it to wait forever. It's not a particularly useful container, but it will allow us to test the security of our cluster.

Now, let's build the container and push it to Google Container Registry (GCR):

#set the GCR path we will use to host the container image
CONTAINER_PATH=us.gcr.io/$PROJECT_ID/hello-world

#build container
docker build -t $CONTAINER_PATH ./

#push to GCR
gcloud auth configure-docker --quiet
docker push $CONTAINER_PATH

You should now be able to see the newly created container in the Container Registry web interface.

Now, let's run it on our cluster:

kubectl run hello-world --image $CONTAINER_PATH

If everything worked well, our container should be silently running.

You can verify this by listing the running pods:

kubectl get pod

As a Policy Creator:

Adding a Policy

We now have a cluster set up and running our code. Now, let's secure the cluster with a policy.

The first step is to enable Binary Authorization on the cluster:

gcloud beta container clusters update binauthz-codelab \
    --enable-binauthz --zone us-central1-a

Create a policy file:

cat > ./policy.yaml << EOM
    admissionWhitelistPatterns:
    - namePattern: gcr.io/google_containers/*
    - namePattern: gcr.io/google-containers/*
    - namePattern: k8s.gcr.io/*
    defaultAdmissionRule:
      evaluationMode: ALWAYS_DENY
      enforcementMode: ENFORCED_BLOCK_AND_AUDIT_LOG
EOM

This policy is relatively simple. The admissionWhitelistPatterns section defines a whitelist: in this case, it allows containers hosted in official repositories to always run on the cluster. These repositories store images that Kubernetes Engine itself requires, so whitelisting them is necessary for normal cluster operation. Additionally, the policy declares a defaultAdmissionRule stating that all other pods will be rejected.
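Policies can also go beyond a single default rule. As a hypothetical fragment (not needed for this codelab), a clusterAdmissionRules entry can override the default rule for one specific cluster, keyed by its zone and name:

```yaml
# Hypothetical override: apply a rule only to our codelab cluster.
# The key format is "<zone>.<cluster name>".
clusterAdmissionRules:
  us-central1-a.binauthz-codelab:
    evaluationMode: ALWAYS_DENY
    enforcementMode: ENFORCED_BLOCK_AND_AUDIT_LOG
```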

Now, we can apply the policy to our cluster:

gcloud beta container binauthz policy import policy.yaml

As a Deployer:

Testing the Policy

Now, our policy should prevent any custom container images from being deployed on the cluster. We can verify this by deleting our pod and attempting to run it again:

kubectl delete deployment --all
kubectl delete event --all
kubectl run hello-world --image $CONTAINER_PATH

If you check the cluster for pods, you should notice that no pods are running this time:

kubectl get pods

You may need to run the command a second time to see the pods disappear. The Binary Authorization admission controller checked the pod against the policy, found that it didn't conform to the rules, and rejected it.

You can see the rejection listed as a kubectl event:

kubectl get event --template \
 '{{range.items}}{{"\033[0;36m"}}{{.reason}}:{{"\033[0m"}}{{.message}}{{"\n"}}{{end}}'

Attestors in Binary Authorization are implemented on top of the Cloud Container Analysis API, so it is important to describe how that works before going forward. The Container Analysis API was designed to allow you to associate metadata with specific container images. It uses two resource types: a Note represents a general piece of metadata (such as a particular vulnerability), and an Occurrence records an instance of that Note applying to one specific container image.

As an example, a Note might be created to track the Heartbleed vulnerability. Security vendors would then create scanners to test container images for the vulnerability, and create an Occurrence associated with each compromised container.

Along with tracking vulnerabilities, Container Analysis was designed to be a generic metadata API. Binary Authorization utilizes Container Analysis to associate signatures with the container images they are verifying. A Container Analysis Note is used to represent a single attestor, and Occurrences are created and associated with each container that attestor has approved.

The Binary Authorization API uses the concepts of "attestors" and "attestations", but these are implemented using corresponding Notes and Occurrences in the Container Analysis API.

Currently, our cluster performs a catch-all rejection of all images that don't reside in an official repository. Our next step is to create an attestor, so we can selectively allow trusted containers.

As an Attestor:

Creating a Container Analysis Note

Start by creating a JSON file containing the data for your Note:

NOTE_ID=my-attestor-note

cat > ./create_note_request.json << EOM
{
  "name": "projects/${PROJECT_ID}/notes/${NOTE_ID}",
  "attestation_authority": {
    "hint": {
      "human_readable_name": "This note represents an attestation authority"
    }
  }
}
EOM


Now, submit the Note to our project using the Container Analysis API:

curl -vvv -X POST \
    -H "Content-Type: application/json"  \
    -H "Authorization: Bearer $(gcloud auth print-access-token)"  \
    --data-binary @./create_note_request.json  \
    "https://containeranalysis.googleapis.com/v1alpha1/projects/${PROJECT_ID}/notes/?noteId=${NOTE_ID}"

We can verify the Note was saved by fetching it back:

curl -vvv  \
    -H "Authorization: Bearer $(gcloud auth print-access-token)" \
    "https://containeranalysis.googleapis.com/v1alpha1/projects/${PROJECT_ID}/notes/${NOTE_ID}"

Creating an Attestor in Binary Authorization

Now, our Note is saved within the Container Analysis API. To make use of it as an attestor, we must also register the Note with Binary Authorization:

ATTESTOR_ID=my-binauthz-attestor

gcloud beta container binauthz attestors create $ATTESTOR_ID \
    --attestation-authority-note=$NOTE_ID \
    --attestation-authority-note-project=$PROJECT_ID

To verify everything works as expected, you can print out the list of registered authorities:

gcloud beta container binauthz attestors list

Adding IAM Role

Before we can use this attestor, we must grant Binary Authorization the appropriate permissions to view the Container Analysis Note we created. This will allow Binary Authorization to query the Container Analysis API to ensure that each pod has been signed and approved to run.

Permissions in Binary Authorization are handled through an automatically generated service account.

First, we need to find the service account's email address:

PROJECT_NUMBER=$(gcloud projects describe "${PROJECT_ID}"  --format="value(projectNumber)")
BINAUTHZ_SA_EMAIL="service-${PROJECT_NUMBER}@gcp-sa-binaryauthorization.iam.gserviceaccount.com"

Now, we can use it to create a Container Analysis IAM JSON request:

cat > ./iam_request.json << EOM
{
  "resource": "projects/$PROJECT_ID/notes/$NOTE_ID",
  "policy": {
    "bindings": [
      {
        "role": "roles/containeranalysis.notes.occurrences.viewer",
        "members": [
          "serviceAccount:$BINAUTHZ_SA_EMAIL"
        ]
      }
    ]
  }
}
EOM

We can make a curl request to grant the necessary IAM role:

curl -X POST  \
    -H "Content-Type: application/json" \
    -H "Authorization: Bearer $(gcloud auth print-access-token)" \
    --data-binary @./iam_request.json \
    "https://containeranalysis.googleapis.com/v1alpha1/projects/$PROJECT_ID/notes/$NOTE_ID:setIamPolicy"

Adding a PGP Key

Finally, our authority needs to create a cryptographic key pair that can be used to sign container images. We can do this through gpg, which is available through the Cloud Shell.

For gpg to work properly, it requires a source of entropy to feed its algorithm with random bytes. This can be hard to come by when running in an isolated VM environment like Cloud Shell. To fix this, we can run a random number generator as a background process:

sudo apt-get install rng-tools -y

sudo rngd -r /dev/urandom

Generate the cryptographic keys:

gpg --batch --gen-key <(
    cat <<- EOF
      Key-Type: RSA
      Key-Length: 2048
      Name-Real: Demo Signing Role
      Name-Email: attestor@example.com
      %commit
EOF
)

After you have created your keys, you can safely stop the entropy generation process:

sudo kill -9 $(pidof rngd)

Now, pull the public key out of gpg and save it to our working directory:

gpg --armor --export attestor@example.com > ./public.pgp

Associate the key with our authority through the gcloud binauthz command:

gcloud beta container binauthz attestors public-keys add \
    --attestor=$ATTESTOR_ID  --public-key-file=./public.pgp

If you print the list of authorities again, you should now see a key registered:

gcloud beta container binauthz attestors list

Note that multiple keys can be registered for each authority. This can be useful if the authority represents a team of people. For example, anyone in the QA team could act as the QA Attestor, and sign with their own individual private key.

As an Attestor:

Now that we have our authority set up and ready to go, we can use it to sign the container image we built previously.

Creating the Signature

An attestation must include a cryptographic signature to state that a particular container image has been verified by the attestor and is safe to run on your cluster. To specify which container image to attest, we need to determine its digest. You can find the digest for a particular container tag hosted in Container Registry using gcloud:

DIGEST=$(gcloud container images describe ${CONTAINER_PATH}:latest \
    --format='get(image_summary.digest)')

Now, we can create a payload using the binauthz command:

gcloud beta container binauthz create-signature-payload \
    --artifact-url="${CONTAINER_PATH}@${DIGEST}"  > ./payload.json

The payload is simply a JSON file used to represent the specific container image we want to verify. If you open up the generated file, you'll see it contains the container path, the digest of the image, and some other metadata.

cat payload.json
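For reference, the generated payload is shaped roughly like the following (values elided; the exact fields may vary by version):

```json
{
  "critical": {
    "identity": {
      "docker-reference": "us.gcr.io/PROJECT_ID/hello-world"
    },
    "image": {
      "docker-manifest-digest": "sha256:..."
    },
    "type": "Google cloud binauthz container signature"
  }
}
```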

Now, we can sign the payload, representing our approval of the associated container image:

gpg \
    --local-user  attestor@example.com \
    --armor \
    --output ./signature.pgp \
    --sign ./payload.json

Along with the signature, verification requires the fingerprint (a unique ID) of our public key. Find it and store it in an environment variable:

KEY_FINGERPRINT=$(gpg --list-keys attestor@example.com | sed -n '2p' | tr -d '[:space:]')
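If the sed-based extraction above feels opaque, here is the same parsing run on a canned sample of gpg's output (the layout is an assumption about recent gpg versions, which print the bare fingerprint, indented, on the second line):

```shell
# A sample of what `gpg --list-keys attestor@example.com` might print
# (assumed layout; line 2 holds the bare fingerprint, indented):
LIST_OUTPUT='pub   rsa2048 2018-06-01 [SC]
      0123456789ABCDEF0123456789ABCDEF01234567
uid           [ultimate] Demo Signing Role <attestor@example.com>'

# Take the second line and strip all whitespace, leaving just the fingerprint:
KEY_FINGERPRINT=$(echo "$LIST_OUTPUT" | sed -n '2p' | tr -d '[:space:]')
echo "$KEY_FINGERPRINT"
```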

Creating an Attestation in the Cloud

Now, we can push up our attestation to the cloud:

gcloud beta container binauthz attestations create \
   --artifact-url="${CONTAINER_PATH}@${DIGEST}" \
   --attestor=$ATTESTOR_ID \
   --attestor-project=$PROJECT_ID \
   --signature-file=./signature.pgp  \
   --pgp-key-fingerprint="$KEY_FINGERPRINT"

This will create a new Occurrence and attach it to our attestor's Note. To verify everything worked as expected, we can list our attestations:

gcloud beta container binauthz attestations list \
   --attestor=$ATTESTOR_ID --attestor-project=$PROJECT_ID
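Because attestations are just Occurrences attached to the attestor's Note, the same data is reachable through the raw Container Analysis API. The sketch below only builds the request URL, using placeholder IDs matching this codelab; the occurrences path is an assumption based on the v1alpha1 layout used earlier:

```shell
# Placeholder IDs; in the codelab these variables are already set
PROJECT_ID=my-project
NOTE_ID=my-attestor-note

# Occurrences (attestations) live under the Note's resource path:
LIST_URL="https://containeranalysis.googleapis.com/v1alpha1/projects/${PROJECT_ID}/notes/${NOTE_ID}/occurrences"
echo "$LIST_URL"

# To fetch them for real, add an auth header:
# curl -H "Authorization: Bearer $(gcloud auth print-access-token)" "$LIST_URL"
```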

Now that we have our image securely verified by an attestor, let's get it running on the cluster.

As a Policy Creator:

Updating the Policy

Currently, our cluster is running a policy with one rule: allow containers from official repositories, and reject all others.

Change it to allow any images verified by the attestor:

cat << EOF > updated_policy.yaml
    admissionWhitelistPatterns:
    - namePattern: gcr.io/google_containers/*
    - namePattern: k8s.gcr.io/*
    defaultAdmissionRule:
      evaluationMode: REQUIRE_ATTESTATION
      enforcementMode: ENFORCED_BLOCK_AND_AUDIT_LOG
      requireAttestationsBy:
      - projects/$PROJECT_ID/attestors/$ATTESTOR_ID
EOF

You should now have a new file on disk called updated_policy.yaml. Instead of rejecting all non-whitelisted images outright, the default rule now requires an attestation from our attestor before an image can run.

Upload the new policy to Binary Authorization:

gcloud beta container binauthz policy import updated_policy.yaml

As a Deployer:

Running the Verified Image

Now, let's attempt to run our verified image:

#run signed image
kubectl run hello-world-signed --image "${CONTAINER_PATH}@${DIGEST}"

#verify pod is running
kubectl get pods

You should see your pod has passed the policy and is running on the cluster.

Congratulations! You can now make specific security guarantees for your cluster by adding more complex rules to the policy.

Cleaning Up

Delete the cluster:

gcloud container clusters delete binauthz-codelab --zone us-central1-a

Delete the container image:

gcloud container images delete $CONTAINER_PATH@$DIGEST --force-delete-tags

Delete the Attestor:

gcloud beta container binauthz attestors delete $ATTESTOR_ID

Delete the Container Analysis resources:

curl -vvv -X DELETE  \
    -H "Authorization: Bearer $(gcloud auth print-access-token)" \
    "https://containeranalysis.googleapis.com/v1alpha1/projects/${PROJECT_ID}/notes/${NOTE_ID}"