Secure Build & Deploy with Cloud Build, Artifact Registry and GKE

1. Introduction

Container Analysis provides vulnerability scanning and metadata storage for containers. The scanning service performs vulnerability scans on images in Artifact Registry and Container Registry, then stores the resulting metadata and makes it available for consumption through an API. Metadata storage lets you store information from different sources, including vulnerability scanning, Google Cloud services, and third-party providers.

Vulnerability scanning can occur automatically or on-demand:

  • When automatic scanning is enabled, scanning triggers automatically every time you push a new image to Artifact Registry or Container Registry. Vulnerability information is continuously updated when new vulnerabilities are discovered.
  • When On-Demand Scanning is enabled, you must run a command to scan a local image or an image in Artifact Registry or Container Registry. On-Demand Scanning gives you flexibility around when you scan containers. For example, you can scan a locally-built image and remediate vulnerabilities before storing it in a registry. Scanning results are available for up to 48 hours after the scan is completed, and vulnerability information is not updated after the scan.

With Container Analysis integrated into your CI/CD pipeline, you can make decisions based on that metadata. For example, you can use Binary Authorization to create deployment policies that only allow deployments for compliant images from trusted registries.

What you'll learn

  • How to enable automatic scanning
  • How to perform On-Demand Scanning
  • How to integrate scanning in a build pipeline
  • How to sign approved images
  • How to use GKE Admission controllers to block images
  • How to configure GKE to allow only signed approved images

2. Setup and Requirements

Self-paced environment setup

  1. Sign in to the Google Cloud Console and create a new project or reuse an existing one. If you don't already have a Gmail or Google Workspace account, you must create one.


  • The Project name is the display name for this project's participants. It is a character string not used by Google APIs. You can update it at any time.
  • The Project ID is unique across all Google Cloud projects and is immutable (cannot be changed after it has been set). The Cloud Console auto-generates a unique string; usually you don't care what it is. In most codelabs, you'll need to reference the Project ID (it is typically identified as PROJECT_ID). If you don't like the generated ID, you may generate another random one. Alternatively, you can try your own and see if it's available. It cannot be changed after this step and will remain for the duration of the project.
  • For your information, there is a third value, a Project Number which some APIs use. Learn more about all three of these values in the documentation.
  2. Next, you'll need to enable billing in the Cloud Console to use Cloud resources/APIs. Running through this codelab shouldn't cost much, if anything at all. To shut down resources so you don't incur billing beyond this tutorial, you can delete the resources you created or delete the whole project. New users of Google Cloud are eligible for the $300 USD Free Trial program.

Start Cloudshell Editor

This lab was designed and tested for use with Google Cloud Shell Editor. To access the editor,

  1. Go to your Google Cloud project at https://console.cloud.google.com.
  2. In the top right corner, click the Cloud Shell Editor icon.


  3. A new pane will open at the bottom of your window.

Environment Setup

In Cloud Shell, set the project ID and project number for your project. Save them as the PROJECT_ID and PROJECT_NUMBER variables.

export PROJECT_ID=$(gcloud config get-value project)
export PROJECT_NUMBER=$(gcloud projects describe $PROJECT_ID \
    --format='value(projectNumber)')
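Commands in later sections assume both variables are non-empty. A small guard like the sketch below (not part of the original lab steps; the example value is hypothetical) can catch a missing configuration early:

```shell
# check_env VAR: succeed only when VAR is set and non-empty.
check_env() {
  eval "val=\${$1:-}"
  if [ -z "$val" ]; then
    echo "ERROR: $1 is empty - set it before continuing" >&2
    return 1
  fi
  echo "$1=$val"
}

# Example with a hypothetical value:
PROJECT_ID="example-project" check_env PROJECT_ID
```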

Enable services

Enable all necessary services:

gcloud services enable \
  cloudkms.googleapis.com \
  cloudbuild.googleapis.com \
  container.googleapis.com \
  containerregistry.googleapis.com \
  artifactregistry.googleapis.com \
  containerscanning.googleapis.com \
  ondemandscanning.googleapis.com \
  binaryauthorization.googleapis.com 

Create Artifact Registry Repository

In this lab you will be using Artifact Registry to store and scan your images. Create the repository with the following command.

gcloud artifacts repositories create artifact-scanning-repo \
  --repository-format=docker \
  --location=us-central1 \
  --description="Docker repository"

Configure docker to utilize your gcloud credentials when accessing Artifact Registry.

gcloud auth configure-docker us-central1-docker.pkg.dev
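This command registers gcloud as a Docker credential helper for the given registry host. If you inspect ~/.docker/config.json afterwards, you should find an entry similar to the following (other fields may also be present):

```json
{
  "credHelpers": {
    "us-central1-docker.pkg.dev": "gcloud"
  }
}
```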

3. Automated Scanning

Artifact scanning triggers automatically every time you push a new image to Artifact Registry or Container Registry. Vulnerability information is continuously updated when new vulnerabilities are discovered. In this section you'll push an image to the Artifact Registry and explore the results.

Create and change into a work directory

mkdir vuln-scan && cd vuln-scan

Define a sample image

Create a file called Dockerfile with the following contents.

cat > ./Dockerfile << EOF
FROM gcr.io/google-appengine/debian9@sha256:ebffcf0df9aa33f342c4e1d4c8428b784fc571cdf6fbab0b31330347ca8af97a

# System
RUN apt update && apt install python3-pip -y

# App
WORKDIR /app
COPY . ./

RUN pip3 install Flask==1.1.4
RUN pip3 install gunicorn==20.1.0

CMD exec gunicorn --bind :\$PORT --workers 1 --threads 8 --timeout 0 main:app

EOF

Create a file called main.py with the following contents

cat > ./main.py << EOF
import os
from flask import Flask

app = Flask(__name__)

@app.route("/")
def hello_world():
    name = os.environ.get("NAME", "World")
    return "Hello {}!".format(name)

if __name__ == "__main__":
    app.run(debug=True, host="0.0.0.0", port=int(os.environ.get("PORT", 8080)))
EOF

Build and Push the image to AR

Use Cloud Build to build and automatically push your container to Artifact Registry. Note the tag bad on the image. This will help you identify it for later steps.

gcloud builds submit . -t us-central1-docker.pkg.dev/${PROJECT_ID}/artifact-scanning-repo/sample-image:bad

Review Image Details

Once the build process has completed, review the image and vulnerability results in the Artifact Registry dashboard.

  1. Open Artifact Registry in the Cloud Console
  2. Click on the artifact-scanning-repo to view the contents
  3. Click into the image details
  4. Click into the latest digest of your image
  5. Once the scan has finished click on the vulnerabilities tab for the image

From the vulnerabilities tab you will see the results of the automatic scanning for the image you just built.


Automatic scanning is enabled by default. Explore the Artifact Registry settings to see how you can turn auto scanning on or off.

4. On-Demand Scanning

There are various scenarios where you may need to run a scan before pushing the image to a repository. For example, a container developer may scan an image and fix the issues before pushing code to source control. In the example below you will build and analyze the image locally before acting on the results.

Build an Image

In this step you will use local docker to build the image to your local cache.

docker build -t us-central1-docker.pkg.dev/${PROJECT_ID}/artifact-scanning-repo/sample-image .

Scan the image

Once the image has been built, request a scan of the image. The results of the scan are stored in a metadata server. When the job completes, it returns the location of the results in the metadata server.

gcloud artifacts docker images scan \
    us-central1-docker.pkg.dev/${PROJECT_ID}/artifact-scanning-repo/sample-image \
    --format="value(response.scan)" > scan_id.txt

Review Output File

Take a moment to review the output of the previous step which was stored in the scan_id.txt file. Notice the report location of the scan results in the metadata server.

cat scan_id.txt
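The file holds a single scan resource name. The exact ID is generated per scan, but the name follows a predictable shape; a sanity check like the sketch below (using a hypothetical scan name) can guard the next step:

```shell
# Hypothetical scan resource name, shaped like On-Demand Scanning's output.
scan_name="projects/example-project/locations/us/scans/0f3e0b2c-0000-4fff-aaaa-123456789abc"

# Validate the shape before passing it to list-vulnerabilities.
case "$scan_name" in
  projects/*/locations/*/scans/*) echo "scan name looks valid" ;;
  *) echo "unexpected scan name format" >&2; exit 1 ;;
esac
```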

Review detailed scan results

To view the actual results of the scan use the list-vulnerabilities command on the report location noted in the output file.

gcloud artifacts docker images list-vulnerabilities $(cat scan_id.txt) 

The output contains a significant amount of data about all the vulnerabilities in the image.

Flag Critical issues

Humans rarely use the data stored in the report directly. Typically, the results are consumed by an automated process. Use the commands below to read the report details and log a message if any CRITICAL vulnerabilities were found.

export SEVERITY=CRITICAL

gcloud artifacts docker images list-vulnerabilities $(cat scan_id.txt) --format="value(vulnerability.effectiveSeverity)" | if grep -Fxq ${SEVERITY}; then echo "Failed vulnerability check for ${SEVERITY} level"; else echo "No ${SEVERITY} Vulnerabilities found"; fi

The output from this command will be

Failed vulnerability check for CRITICAL level
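The gate logic itself is plain shell and easy to test offline. The sketch below runs the same grep -Fxq check against a hard-coded severity list standing in for live scan results:

```shell
# Sample severities, standing in for the list-vulnerabilities output.
severities="HIGH
MEDIUM
CRITICAL
LOW"

SEVERITY=CRITICAL
# grep -Fxq: fixed-string, whole-line, quiet - matches only exact lines.
if echo "$severities" | grep -Fxq "$SEVERITY"; then
  echo "Failed vulnerability check for ${SEVERITY} level"
else
  echo "No ${SEVERITY} Vulnerabilities found"
fi
# prints: Failed vulnerability check for CRITICAL level
```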

5. Build Pipeline Scanning

In this section you will create an automated build pipeline that will build your container image, scan it then evaluate the results. If no CRITICAL vulnerabilities are found it will push the image to the repository. If CRITICAL vulnerabilities are found the build will fail and exit.

Provide access for Cloud Build Service Account

Cloud Build will need rights to access the On-Demand Scanning API. Provide access with the following commands.

gcloud projects add-iam-policy-binding ${PROJECT_ID} \
        --member="serviceAccount:${PROJECT_NUMBER}@cloudbuild.gserviceaccount.com" \
        --role="roles/iam.serviceAccountUser"
        
gcloud projects add-iam-policy-binding ${PROJECT_ID} \
        --member="serviceAccount:${PROJECT_NUMBER}@cloudbuild.gserviceaccount.com" \
        --role="roles/ondemandscanning.admin"

Create the Cloud Build pipeline

The following command will create a cloudbuild.yaml file in your directory that will be used for the automated process. For this example the steps are limited to the container build process. In practice, however, you would include application-specific instructions and tests in addition to the container steps.

Create the file with the following command.

cat > ./cloudbuild.yaml << EOF
steps:

# build
- id: "build"
  name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'us-central1-docker.pkg.dev/${PROJECT_ID}/artifact-scanning-repo/sample-image', '.']
  waitFor: ['-']

#Run a vulnerability scan
- id: scan
  name: 'gcr.io/cloud-builders/gcloud'
  entrypoint: 'bash'
  args:
  - '-c'
  - |
    (gcloud artifacts docker images scan \
    us-central1-docker.pkg.dev/${PROJECT_ID}/artifact-scanning-repo/sample-image \
    --location us \
    --format="value(response.scan)") > /workspace/scan_id.txt

#Analyze the result of the scan
- id: severity check
  name: 'gcr.io/cloud-builders/gcloud'
  entrypoint: 'bash'
  args:
  - '-c'
  - |
      gcloud artifacts docker images list-vulnerabilities \$(cat /workspace/scan_id.txt) \
      --format="value(vulnerability.effectiveSeverity)" | if grep -Fxq CRITICAL; \
      then echo "Failed vulnerability check for CRITICAL level" && exit 1; else echo "No CRITICAL vulnerability found, congrats !" && exit 0; fi

#Retag
- id: "retag"
  name: 'gcr.io/cloud-builders/docker'
  args: ['tag',  'us-central1-docker.pkg.dev/${PROJECT_ID}/artifact-scanning-repo/sample-image', 'us-central1-docker.pkg.dev/${PROJECT_ID}/artifact-scanning-repo/sample-image:good']


#pushing to artifact registry
- id: "push"
  name: 'gcr.io/cloud-builders/docker'
  args: ['push',  'us-central1-docker.pkg.dev/${PROJECT_ID}/artifact-scanning-repo/sample-image:good']

images:
  - us-central1-docker.pkg.dev/${PROJECT_ID}/artifact-scanning-repo/sample-image
EOF

Run the CI pipeline

Submit the build for processing to verify the build breaks when a CRITICAL severity vulnerability is found.

gcloud builds submit

Review Build Failure

The build you just submitted will fail because the image contains CRITICAL vulnerabilities.

Review the build failure in the Cloud Build History page

Fix the Vulnerability

Update the Dockerfile to use a base image that does not contain CRITICAL vulnerabilities.

Overwrite the Dockerfile to use the python:3.8-slim base image with the following command

cat > ./Dockerfile << EOF
FROM python:3.8-slim

# App
WORKDIR /app
COPY . ./

RUN pip3 install Flask==2.1.0
RUN pip3 install gunicorn==20.1.0

CMD exec gunicorn --bind :\$PORT --workers 1 --threads 8 main:app

EOF

Run the CI process with the good image

Submit the build for processing to verify that build will succeed when no CRITICAL severity vulnerabilities are found.

gcloud builds submit

Review Build Success

The build you just submitted will succeed because the updated image contains no CRITICAL vulnerabilities.

Review the build success in the Cloud Build History page

Review Scan results

Review the good image in Artifact Registry

  1. Open Artifact Registry in the Cloud Console
  2. Click on the artifact-scanning-repo to view the contents
  3. Click into the image details
  4. Click into the latest digest of your image
  5. Click on the vulnerabilities tab for the image

6. Signing Images

Create an Attestor Note

An Attestor Note is simply a small bit of data that acts as a label for the type of signature being applied. For example, one note might indicate a vulnerability scan, while another might be used for QA sign-off. The note is referred to during the signing process.

Create a note

cat > ./vulnz_note.json << EOM
{
  "attestation": {
    "hint": {
      "human_readable_name": "Container Vulnerabilities attestation authority"
    }
  }
}
EOM

Store the note

NOTE_ID=vulnz_note

curl -vvv -X POST \
    -H "Content-Type: application/json"  \
    -H "Authorization: Bearer $(gcloud auth print-access-token)"  \
    --data-binary @./vulnz_note.json  \
    "https://containeranalysis.googleapis.com/v1/projects/${PROJECT_ID}/notes/?noteId=${NOTE_ID}"

Verify the note

curl -vvv  \
    -H "Authorization: Bearer $(gcloud auth print-access-token)" \
    "https://containeranalysis.googleapis.com/v1/projects/${PROJECT_ID}/notes/${NOTE_ID}"

Creating an Attestor

Attestors are used to perform the actual image signing process and will attach an occurrence of the note to the image for later verification. Create the attestor for later use.

Create Attestor

ATTESTOR_ID=vulnz-attestor

gcloud container binauthz attestors create $ATTESTOR_ID \
    --attestation-authority-note=$NOTE_ID \
    --attestation-authority-note-project=${PROJECT_ID}

Verify Attestor

gcloud container binauthz attestors list

Note that the last line indicates NUM_PUBLIC_KEYS: 0; you will provide keys in a later step.

Also note that Cloud Build automatically creates the built-by-cloud-build attestor in your project when you run a build that generates images. So the above command returns two attestors, vulnz-attestor and built-by-cloud-build. After images are successfully built, Cloud Build automatically signs and creates attestations for them.

Adding IAM Role

The Binary Authorization service account will need rights to view the attestation notes. Provide the access with the following API call.

PROJECT_NUMBER=$(gcloud projects describe "${PROJECT_ID}"  --format="value(projectNumber)")

BINAUTHZ_SA_EMAIL="service-${PROJECT_NUMBER}@gcp-sa-binaryauthorization.iam.gserviceaccount.com"


cat > ./iam_request.json << EOM
{
  "resource": "projects/${PROJECT_ID}/notes/${NOTE_ID}",
  "policy": {
    "bindings": [
      {
        "role": "roles/containeranalysis.notes.occurrences.viewer",
        "members": [
          "serviceAccount:${BINAUTHZ_SA_EMAIL}"
        ]
      }
    ]
  }
}
EOM

Use the file to create the IAM Policy

curl -X POST  \
    -H "Content-Type: application/json" \
    -H "Authorization: Bearer $(gcloud auth print-access-token)" \
    --data-binary @./iam_request.json \
    "https://containeranalysis.googleapis.com/v1/projects/${PROJECT_ID}/notes/${NOTE_ID}:setIamPolicy"

Adding a KMS Key

The Attestor needs cryptographic keys to attach the note and provide verifiable signatures. In this step you will create and store keys in KMS for Cloud Build to access later.

First add some environment variables to describe the new key

KEY_LOCATION=global
KEYRING=binauthz-keys
KEY_NAME=codelab-key
KEY_VERSION=1

Create a keyring to hold a set of keys

gcloud kms keyrings create "${KEYRING}" --location="${KEY_LOCATION}"

Create a new asymmetric signing key pair for the attestor

gcloud kms keys create "${KEY_NAME}" \
    --keyring="${KEYRING}" --location="${KEY_LOCATION}" \
    --purpose asymmetric-signing   \
    --default-algorithm="ec-sign-p256-sha256"

You should see your key appear on the KMS page of the Google Cloud Console.

Now, associate the key with your attestor through the gcloud binauthz command:

gcloud beta container binauthz attestors public-keys add  \
    --attestor="${ATTESTOR_ID}"  \
    --keyversion-project="${PROJECT_ID}"  \
    --keyversion-location="${KEY_LOCATION}" \
    --keyversion-keyring="${KEYRING}" \
    --keyversion-key="${KEY_NAME}" \
    --keyversion="${KEY_VERSION}"

If you print the list of authorities again, you should now see a key registered:

gcloud container binauthz attestors list

Creating a Signed Attestation

At this point you have the features configured that enable you to sign images. Use the Attestor you created previously to sign the container image you've been working with.

CONTAINER_PATH=us-central1-docker.pkg.dev/${PROJECT_ID}/artifact-scanning-repo/sample-image

DIGEST=$(gcloud container images describe ${CONTAINER_PATH}:latest \
    --format='get(image_summary.digest)')

Now, you can use gcloud to create your attestation. The command takes in the details of the key you want to use for signing and the specific container image you want to approve.

gcloud beta container binauthz attestations sign-and-create  \
    --artifact-url="${CONTAINER_PATH}@${DIGEST}" \
    --attestor="${ATTESTOR_ID}" \
    --attestor-project="${PROJECT_ID}" \
    --keyversion-project="${PROJECT_ID}" \
    --keyversion-location="${KEY_LOCATION}" \
    --keyversion-keyring="${KEYRING}" \
    --keyversion-key="${KEY_NAME}" \
    --keyversion="${KEY_VERSION}"

In Container Analysis terms, this will create a new occurrence, and attach it to your attestor's note. To ensure everything worked as expected, you can list your attestations

gcloud container binauthz attestations list \
   --attestor=$ATTESTOR_ID --attestor-project=${PROJECT_ID}

7. Signing with Cloud Build

You've enabled Image signing and manually used the Attestor to sign your sample image. In practice you will want to apply Attestations during automated processes such as CI/CD pipelines.

In this section you will configure Cloud Build to attest images automatically.

Roles

Add Binary Authorization Attestor Viewer role to Cloud Build Service Account:

gcloud projects add-iam-policy-binding ${PROJECT_ID} \
  --member serviceAccount:${PROJECT_NUMBER}@cloudbuild.gserviceaccount.com \
  --role roles/binaryauthorization.attestorsViewer

Add Cloud KMS CryptoKey Signer/Verifier role to Cloud Build Service Account (KMS-based Signing):

gcloud projects add-iam-policy-binding ${PROJECT_ID} \
  --member serviceAccount:${PROJECT_NUMBER}@cloudbuild.gserviceaccount.com \
  --role roles/cloudkms.signerVerifier

Add Container Analysis Notes Attacher role to Cloud Build Service Account:

gcloud projects add-iam-policy-binding ${PROJECT_ID} \
  --member serviceAccount:${PROJECT_NUMBER}@cloudbuild.gserviceaccount.com \
  --role roles/containeranalysis.notes.attacher

Prepare the Custom Build Cloud Build Step

You'll be using a Custom Build step in Cloud Build to simplify the attestation process. Google provides this Custom Build step, which contains helper functions to streamline the process. Before use, the code for the custom build step must be built into a container and pushed to your project's registry. To do this, run the following commands:

git clone https://github.com/GoogleCloudPlatform/cloud-builders-community.git
cd cloud-builders-community/binauthz-attestation
gcloud builds submit . --config cloudbuild.yaml
cd ../..
rm -rf cloud-builders-community

Add a signing step to your cloudbuild.yaml

In this step you will add the attestation step into your Cloud Build pipeline you built earlier.

  1. Review the new step you will be adding.

Review only. Do Not Copy

#Sign the image only if the previous severity check passes
- id: 'create-attestation'
  name: 'gcr.io/${PROJECT_ID}/binauthz-attestation:latest'
  args:
    - '--artifact-url'
    - 'us-central1-docker.pkg.dev/${PROJECT_ID}/artifact-scanning-repo/sample-image'
    - '--attestor'
    - 'projects/${PROJECT_ID}/attestors/$ATTESTOR_ID'
    - '--keyversion'
    - 'projects/${PROJECT_ID}/locations/$KEY_LOCATION/keyRings/$KEYRING/cryptoKeys/$KEY_NAME/cryptoKeyVersions/$KEY_VERSION'
  2. Overwrite your cloudbuild.yaml file with the updated complete pipeline.
cat > ./cloudbuild.yaml << EOF
steps:

# build
- id: "build"
  name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'us-central1-docker.pkg.dev/${PROJECT_ID}/artifact-scanning-repo/sample-image', '.']
  waitFor: ['-']

#Run a vulnerability scan
- id: scan
  name: 'gcr.io/cloud-builders/gcloud'
  entrypoint: 'bash'
  args:
  - '-c'
  - |
    (gcloud artifacts docker images scan \
    us-central1-docker.pkg.dev/${PROJECT_ID}/artifact-scanning-repo/sample-image \
    --location us \
    --format="value(response.scan)") > /workspace/scan_id.txt

#Analyze the result of the scan
- id: severity check
  name: 'gcr.io/cloud-builders/gcloud'
  entrypoint: 'bash'
  args:
  - '-c'
  - |
      gcloud artifacts docker images list-vulnerabilities \$(cat /workspace/scan_id.txt) \
      --format="value(vulnerability.effectiveSeverity)" | if grep -Fxq CRITICAL; \
      then echo "Failed vulnerability check for CRITICAL level" && exit 1; else echo "No CRITICAL vulnerability found, congrats !" && exit 0; fi

#Retag
- id: "retag"
  name: 'gcr.io/cloud-builders/docker'
  args: ['tag',  'us-central1-docker.pkg.dev/${PROJECT_ID}/artifact-scanning-repo/sample-image', 'us-central1-docker.pkg.dev/${PROJECT_ID}/artifact-scanning-repo/sample-image:good']


#pushing to artifact registry
- id: "push"
  name: 'gcr.io/cloud-builders/docker'
  args: ['push',  'us-central1-docker.pkg.dev/${PROJECT_ID}/artifact-scanning-repo/sample-image:good']


#Sign the image only if the previous severity check passes
- id: 'create-attestation'
  name: 'gcr.io/${PROJECT_ID}/binauthz-attestation:latest'
  args:
    - '--artifact-url'
    - 'us-central1-docker.pkg.dev/${PROJECT_ID}/artifact-scanning-repo/sample-image:good'
    - '--attestor'
    - 'projects/${PROJECT_ID}/attestors/$ATTESTOR_ID'
    - '--keyversion'
    - 'projects/${PROJECT_ID}/locations/$KEY_LOCATION/keyRings/$KEYRING/cryptoKeys/$KEY_NAME/cryptoKeyVersions/$KEY_VERSION'



images:
  - us-central1-docker.pkg.dev/${PROJECT_ID}/artifact-scanning-repo/sample-image:good
EOF

Run the Build

gcloud builds submit

Review the build in Cloud Build History

Open the Cloud Console to the Cloud Build History page and review the latest build and the successful execution of the build steps.

8. Admission Control Policies

Binary Authorization is a feature of GKE and Cloud Run that validates rules before a container image is allowed to run. The validation executes on any request to run an image, whether it comes from a trusted CI/CD pipeline or from a user manually deploying an image. This capability lets you secure your runtime environments more effectively than CI/CD pipeline checks alone.

To understand this capability you will modify the default GKE policy to enforce a strict authorization rule.
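Conceptually, the default admission rule is a simple switch on the evaluation mode. The shell sketch below is a toy model of that decision (a simplification, not the real evaluation logic):

```shell
# Toy model of a Binary Authorization admission decision.
admit() {
  mode=$1   # ALWAYS_ALLOW | ALWAYS_DENY | REQUIRE_ATTESTATION
  signed=$2 # yes | no
  case "$mode" in
    ALWAYS_ALLOW) echo allow ;;
    ALWAYS_DENY) echo deny ;;
    REQUIRE_ATTESTATION) [ "$signed" = yes ] && echo allow || echo deny ;;
    *) echo "unknown mode" >&2; return 1 ;;
  esac
}

admit ALWAYS_ALLOW no         # allow
admit REQUIRE_ATTESTATION no  # deny
admit REQUIRE_ATTESTATION yes # allow
```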

Create the GKE Cluster

Create the GKE cluster:

gcloud beta container clusters create binauthz \
    --zone us-central1-a  \
    --binauthz-evaluation-mode=PROJECT_SINGLETON_POLICY_ENFORCE

Allow Cloud Build to deploy to this cluster:

gcloud projects add-iam-policy-binding ${PROJECT_ID} \
        --member="serviceAccount:${PROJECT_NUMBER}@cloudbuild.gserviceaccount.com" \
        --role="roles/container.developer"

Allow All Policy

First verify the default policy state and your ability to deploy any image

  1. Review existing policy
gcloud container binauthz policy export
  2. Notice that the enforcement policy is set to ALWAYS_ALLOW

evaluationMode: ALWAYS_ALLOW

  3. Deploy the sample to verify you can deploy anything
kubectl run hello-server --image gcr.io/google-samples/hello-app:1.0 --port 8080
  4. Verify the deployment worked
kubectl get pods

You should see the hello-server pod listed in the output.

  5. Delete the deployment
kubectl delete pod hello-server

Deny All Policy

Now update the policy to disallow all images.

  1. Export the current policy to an editable file
gcloud container binauthz policy export  > policy.yaml
  2. Change the policy

In a text editor, change the evaluationMode from ALWAYS_ALLOW to ALWAYS_DENY.

edit policy.yaml

The policy YAML file should appear as follows:

globalPolicyEvaluationMode: ENABLE
defaultAdmissionRule:
  evaluationMode: ALWAYS_DENY
  enforcementMode: ENFORCED_BLOCK_AND_AUDIT_LOG
name: projects/PROJECT_ID/policy
  3. Apply the new policy and wait a few seconds for the change to propagate
gcloud container binauthz policy import policy.yaml
  4. Attempt to deploy the sample workload
kubectl run hello-server --image gcr.io/google-samples/hello-app:1.0 --port 8080
  5. The deployment fails with the following message
Error from server (VIOLATES_POLICY): admission webhook "imagepolicywebhook.image-policy.k8s.io" denied the request: Image gcr.io/google-samples/hello-app:1.0 denied by Binary Authorization default admission rule. Denied by always_deny admission rule

Revert the policy to allow all

Before moving on to the next section be sure to revert the policy changes

  1. Change the policy

In a text editor, change the evaluationMode from ALWAYS_DENY to ALWAYS_ALLOW.

edit policy.yaml

The policy YAML file should appear as follows:

globalPolicyEvaluationMode: ENABLE
defaultAdmissionRule:
  evaluationMode: ALWAYS_ALLOW
  enforcementMode: ENFORCED_BLOCK_AND_AUDIT_LOG
name: projects/PROJECT_ID/policy
  2. Apply the reverted policy
gcloud container binauthz policy import policy.yaml

9. Block Vulnerabilities in GKE

In this section you will combine what you've learned so far by implementing a CI/CD pipeline with Cloud Build that scans the images and checks for vulnerabilities before signing the image and attempting to deploy. GKE will use Binary Authorization to validate that the image has a signature from the vulnerability-scanning attestor before allowing it to run.


Update GKE Policy to Require Attestation

Require that images are signed by your Attestor by adding clusterAdmissionRules to your GKE Binary Authorization policy.

Overwrite the policy with the updated config using the command below.

COMPUTE_ZONE=us-central1-a

cat > binauth_policy.yaml << EOM
defaultAdmissionRule:
  enforcementMode: ENFORCED_BLOCK_AND_AUDIT_LOG
  evaluationMode: ALWAYS_DENY
globalPolicyEvaluationMode: ENABLE
clusterAdmissionRules:
  ${COMPUTE_ZONE}.binauthz:
    evaluationMode: REQUIRE_ATTESTATION
    enforcementMode: ENFORCED_BLOCK_AND_AUDIT_LOG
    requireAttestationsBy:
    - projects/${PROJECT_ID}/attestors/vulnz-attestor
EOM

Apply the policy

gcloud beta container binauthz policy import binauth_policy.yaml

Attempt to deploy the unsigned image

Create a deployment descriptor for the application you built earlier using the following command. The image used here is the image you built earlier that contains critical vulnerabilities and does NOT contain the signed attestation.

GKE admission controllers need to know the exact image to be deployed in order to consistently validate the signature. To accomplish this you'll need to use the image digest rather than a simple tag.
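A tag can be re-pointed at a different image at any time, while a sha256 digest names exactly one image. The sketch below, using hypothetical values, shows the two reference forms side by side:

```shell
# Hypothetical values; in the lab these come from gcloud container images describe.
CONTAINER_PATH=us-central1-docker.pkg.dev/example-project/artifact-scanning-repo/sample-image
DIGEST=sha256:1111111111111111111111111111111111111111111111111111111111111111

# Tag-based (mutable) vs digest-based (immutable) references:
echo "by tag:    ${CONTAINER_PATH}:bad"
echo "by digest: ${CONTAINER_PATH}@${DIGEST}"

# The deployment below uses the digest-pinned form.
case "${CONTAINER_PATH}@${DIGEST}" in
  *@sha256:*) echo "reference is digest-pinned" ;;
  *) echo "reference is not pinned" >&2 ;;
esac
```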

Get the image digest for the bad image

CONTAINER_PATH=us-central1-docker.pkg.dev/${PROJECT_ID}/artifact-scanning-repo/sample-image


DIGEST=$(gcloud container images describe ${CONTAINER_PATH}:bad \
    --format='get(image_summary.digest)')

Use the digest in the Kubernetes configuration

cat > deploy.yaml << EOM
apiVersion: v1
kind: Service
metadata:
  name: deb-httpd
spec:
  selector:
    app: deb-httpd
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deb-httpd
spec:
  replicas: 1
  selector:
    matchLabels:
      app: deb-httpd
  template:
    metadata:
      labels:
        app: deb-httpd
    spec:
      containers:
      - name: deb-httpd
        image: ${CONTAINER_PATH}@${DIGEST}
        ports:
        - containerPort: 8080
        env:
          - name: PORT
            value: "8080"

EOM

Attempt to deploy the app to GKE

kubectl apply -f deploy.yaml

Review the workload in the console and note the error stating the deployment was denied:

No attestations found that were valid and signed by a key trusted by the attestor

Deploy a signed image

Get the image digest for the good image

CONTAINER_PATH=us-central1-docker.pkg.dev/${PROJECT_ID}/artifact-scanning-repo/sample-image


DIGEST=$(gcloud container images describe ${CONTAINER_PATH}:good \
    --format='get(image_summary.digest)')

Use the digest in the Kubernetes configuration

cat > deploy.yaml << EOM
apiVersion: v1
kind: Service
metadata:
  name: deb-httpd
spec:
  selector:
    app: deb-httpd
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deb-httpd
spec:
  replicas: 1
  selector:
    matchLabels:
      app: deb-httpd
  template:
    metadata:
      labels:
        app: deb-httpd
    spec:
      containers:
      - name: deb-httpd
        image: ${CONTAINER_PATH}@${DIGEST}
        ports:
        - containerPort: 8080
        env:
          - name: PORT
            value: "8080"

EOM

Deploy the app to GKE

kubectl apply -f deploy.yaml

Review the workload in the console and note the successful deployment of the image.

10. Congratulations!

Congratulations, you finished the codelab!

What we've covered:

  • How to enable automatic scanning
  • How to perform On-Demand Scanning
  • How to integrate scanning in a build pipeline
  • How to sign approved images
  • How to use GKE Admission controllers to block images
  • How to configure GKE to allow only signed approved images

Clean up

To avoid incurring charges to your Google Cloud account for the resources used in this tutorial, either delete the project that contains the resources, or keep the project and delete the individual resources.

Deleting the project

The easiest way to eliminate billing is to delete the project that you created for the tutorial.