Cloud Run allows you to run stateless containers in a fully managed environment. It is built from the open-source Knative project, letting you choose to run your containers either fully managed with Cloud Run or in your own Google Kubernetes Engine cluster with Cloud Run for Anthos.

Events for Cloud Run makes it easy to connect Cloud Run services with events from a variety of sources. It allows you to build event-driven architectures in which microservices are loosely coupled and distributed. It also takes care of event ingestion, delivery, security, authorization, and error handling for you, which improves developer agility and application resilience.

In this codelab, you will learn about Events for Cloud Run. More specifically, you will listen to events from Cloud Pub/Sub, Audit Logs, Cloud Storage, and Cloud Scheduler, and learn how to produce and consume custom events.

What you'll learn

  1. How Events for Cloud Run for Anthos works and which event sources it supports
  2. How to set up a GKE cluster with Cloud Run Events and a Broker
  3. How to listen to events from Cloud Pub/Sub, Audit Logs, Cloud Storage, and Cloud Scheduler
  4. How to produce and consume custom events

As we adopt serverless architecture, events become an integral part of how de-coupled microservices communicate. Events for Cloud Run for Anthos makes events a first-class citizen of the Cloud Run for Anthos offering, so that it is easy to build event-driven serverless applications.

Events for Cloud Run for Anthos enables reliable, secure and scalable asynchronous event delivery from packaged or app-created event sources to on-cluster and off-cluster consumers.

Google Cloud sources

Event sources that are Google Cloud-owned products.

Google sources

Event sources that are Google-owned products, such as Gmail, Hangouts, Android Management, and more.

Custom sources

Event sources that are not Google-owned products and are created by end users themselves. These could be user-developed Knative Sources or any other app running on the cluster that can produce a CloudEvent.

3rd party sources

Event sources that are neither Google-owned nor end-user owned. This includes popular event sources such as GitHub, SAP, Datadog, PagerDuty, etc., that are owned and maintained by third-party providers, partners, or OSS communities.

Events are normalized to CloudEvents v1.0 format for cross-service interoperability. CloudEvents is a vendor-neutral open spec describing event data in common formats, enabling interoperability across services, platforms and systems.

Events for Cloud Run is conformant with Knative Eventing and allows portability of containers to and from other Knative-based implementations. This provides a consistent, cloud-agnostic framework for declaratively wiring event producers with event consumers.

This preview is the first release, delivering an initial subset of the long-term functionality.

To enable users to build event-driven serverless applications, our initial focus is twofold:

  1. Provide a wide ecosystem of Google Cloud Sources that enables Cloud Run services on the Anthos cluster to react to events from Google Cloud services.
  2. Enable end-user applications and services to emit custom events by publishing to a namespace-scoped, cluster-local Broker URL.

The underlying delivery mechanism uses Cloud Pub/Sub topics and subscriptions that are visible in customers' projects. Hence the feature provides the same delivery guarantees as Cloud Pub/Sub.
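
For example, once you create triggers later in this codelab, you can inspect the underlying Pub/Sub resources directly in your project (a quick sanity check; the topic and subscription names are generated for you):

gcloud pubsub topics list
gcloud pubsub subscriptions list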

An event Trigger provides a way to subscribe to events so that events matching the Trigger's filter are delivered to the destination (or sink) that the Trigger points to.

All events are delivered in the CloudEvents v1.0 format for cross-service interoperability.

We will keep delivering more value in an iterative manner all the way to GA and beyond.

Self-paced environment setup

  1. Sign in to Cloud Console and create a new project or reuse an existing one. (If you don't already have a Gmail or G Suite account, you must create one.)

Remember the project ID, a unique name across all Google Cloud projects. It will be referred to later in this codelab as PROJECT_ID.

  2. Next, you'll need to enable billing in Cloud Console in order to use Google Cloud resources.

Running through this codelab shouldn't cost you more than a few dollars, but it could be more if you decide to use more resources or if you leave them running.

New users of Google Cloud are eligible for a $300 free trial.

Start Cloud Shell

While Google Cloud can be operated remotely from your laptop, in this codelab you will be using Google Cloud Shell, a command line environment running in the Cloud.

From the GCP Console, click the Cloud Shell icon in the top-right toolbar.

It should only take a few moments to provision and connect to the environment. When it is finished, you should land at a Cloud Shell command prompt.

This virtual machine is loaded with all the development tools you'll need. It offers a persistent 5GB home directory and runs on Google Cloud, greatly enhancing network performance and authentication. All of your work in this lab can be done with just a browser.

Set up project ID and install alpha components

Inside Cloud Shell, GOOGLE_CLOUD_PROJECT should already be set to your project ID. If not, set it and make sure gcloud is configured with that project ID:

export GOOGLE_CLOUD_PROJECT=your-project-id
gcloud config set project ${GOOGLE_CLOUD_PROJECT}

Make sure the gcloud alpha component is installed:

gcloud components install alpha

Enable APIs

Enable all necessary services:

gcloud services enable cloudapis.googleapis.com 
gcloud services enable container.googleapis.com 
gcloud services enable containerregistry.googleapis.com
gcloud services enable cloudbuild.googleapis.com
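
You can confirm the services were enabled with a quick check:

gcloud services list --enabled | grep -E 'container|cloudbuild'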

Set zone and platform

Before creating a GKE cluster with Cloud Run Events, set the cluster name, zone, and platform. As an example, here we set the name to events-cluster, the zone to europe-west1-b, and the platform to gke.

In Cloud Shell:

export CLUSTER_NAME=events-cluster
export CLUSTER_ZONE=europe-west1-b

gcloud config set run/cluster ${CLUSTER_NAME}
gcloud config set run/cluster_location ${CLUSTER_ZONE}
gcloud config set run/platform gke

You can check that the configuration is set:

gcloud config list

...
[run]
cluster = events-cluster
cluster_location = europe-west1-b
platform = gke

Configure gcloud to access the v1alpha1 API

Before attempting to create a GKE cluster, make sure gcloud can access the v1alpha1 API by following these steps.

Create a service account:

export SVC_ACCT=alpha-svc-acct
gcloud iam service-accounts create ${SVC_ACCT} --display-name=${SVC_ACCT}

Grant admin role:

gcloud projects add-iam-policy-binding ${GOOGLE_CLOUD_PROJECT} \
--member=serviceAccount:${SVC_ACCT}@${GOOGLE_CLOUD_PROJECT}.iam.gserviceaccount.com \
--role=roles/container.admin

Grant service account actor role:

gcloud projects add-iam-policy-binding ${GOOGLE_CLOUD_PROJECT} \
--member=serviceAccount:${SVC_ACCT}@${GOOGLE_CLOUD_PROJECT}.iam.gserviceaccount.com \
--role=roles/iam.serviceAccountActor

Create private key for service account:

gcloud iam service-accounts keys create ./${SVC_ACCT}_key.json \
--iam-account=${SVC_ACCT}@${GOOGLE_CLOUD_PROJECT}.iam.gserviceaccount.com

Activate SVC_ACCT service account (configure gcloud to send requests using service account's identity):

gcloud auth activate-service-account \
${SVC_ACCT}@${GOOGLE_CLOUD_PROJECT}.iam.gserviceaccount.com \
--key-file=./${SVC_ACCT}_key.json

Create GKE cluster

Create a GKE cluster running Kubernetes >= 1.15.9-gke.26, with the following addons enabled: CloudRun, HttpLoadBalancing, HorizontalPodAutoscaling:

gcloud beta container clusters create ${CLUSTER_NAME} \
--addons=HttpLoadBalancing,HorizontalPodAutoscaling,CloudRun \
--machine-type=n1-standard-4 \
--enable-autoscaling --min-nodes=3 --max-nodes=10 \
--no-issue-client-certificate --num-nodes=3 --image-type=cos \
--enable-stackdriver-kubernetes \
--scopes=cloud-platform,logging-write,monitoring-write,pubsub \
--zone ${CLUSTER_ZONE} \
--cluster-version=latest \
--enable-cloud-run-alpha
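
Cluster creation takes a few minutes. gcloud automatically configures kubectl credentials for the new cluster, so once it finishes you can verify the nodes are up with a quick check:

kubectl get nodes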

After the cluster is created, switch gcloud back to your user or service account by calling gcloud auth login or gcloud auth activate-service-account.
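
For example, to see which accounts gcloud knows about and switch back to your user account (a sketch; substitute your own account):

gcloud auth list
gcloud config set account your-user@example.com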

Cloud Run Events has a control plane and a data plane that need to be set up separately. To set up the control plane:

In Cloud Shell:

gcloud alpha events init 

At this point, the control plane should be properly initialized. You should see four pods with a Running status: two (controller-xxx-xxx and webhook-xxx-xxx) in the cloud-run-events namespace and two (eventing-controller-xxx-xxx and eventing-webhook-xxx-xxx) in the knative-eventing namespace. You can check by executing the following commands:

kubectl get pods -n cloud-run-events -l app=cloud-run-events

NAME                         READY   STATUS    RESTARTS   AGE
controller-9cc679b67-2952n   1/1     Running   0          22s
webhook-8576c4cfcb-dhz82     1/1     Running   0          16m

kubectl get pods -n knative-eventing

NAME                                   READY   STATUS    RESTARTS   AGE
eventing-controller-77f46f6cf8-kj9ck   1/1     Running   0          17m
eventing-webhook-5bc787965f-hcmwg      1/1     Running   0          17m

Next, set up the data plane in the user namespaces. This creates a Broker with the appropriate permissions to read from and write to Pub/Sub.

Inside Cloud Shell, set a NAMESPACE environment variable for the namespace you want to use for your objects. You can set it to default if you want to use the default namespace as shown below:

export NAMESPACE=default

Note that if the namespace specified does not exist, you need to create it:

kubectl create namespace ${NAMESPACE}

You can use the default service account for the data plane. You will be prompted to create a new key for the service account, which is needed for this command; confirm with Yes.

gcloud alpha events brokers create default  \
--namespace $NAMESPACE  

Note: the default service account used for the data plane when running the gcloud command above will be "cloud-run-events@$PROJECT.iam.gserviceaccount.com".

You should see a default Broker with URL http://default-broker.default.svc.cluster.local up and running (Ready=True) in the namespace you specified. Note that it may take a few seconds for the Broker to become ready:

kubectl get brokers -n ${NAMESPACE}

NAME      READY   REASON   URL                                               AGE
default   True             http://default-broker.default.svc.cluster.local   24s

You can discover what the registered sources are, the types of events they can emit, and how to configure triggers in order to consume them.

To see the list of different types of events:

gcloud alpha events types list

TYPE                                            SOURCE                DESCRIPTION
com.google.cloud.auditlog.event                 CloudAuditLogsSource  Common audit log event type for all Google Cloud Platform API operations.
com.google.cloud.pubsub.topic.publish           CloudPubSubSource     This event is sent when a message is published to a Cloud Pub/Sub topic.
com.google.cloud.scheduler.job.execute          CloudSchedulerSource  This event is sent when a job is executed in Cloud Scheduler.
com.google.cloud.storage.object.archive         CloudStorageSource    Only sent when a bucket has enabled object versioning. This event indicates that the live version of an object has become an archived version, either because it was archived or because it was overwritten by the
                                                                      upload of an object of the same name.
com.google.cloud.storage.object.delete          CloudStorageSource    Sent when an object has been permanently deleted. This includes objects that are overwritten or are deleted as part of the bucket's lifecycle configuration. For buckets with object versioning enabled, this is not
                                                                      sent when an object is archived.
com.google.cloud.storage.object.finalize        CloudStorageSource    Sent when a new object (or a new generation of an existing object) is successfully created in the bucket. This includes copying or rewriting an existing object. A failed upload does not trigger this event.
com.google.cloud.storage.object.metadataUpdate  CloudStorageSource    Sent when the metadata of an existing object changes.

To get more information about each event type:

gcloud alpha events types describe com.google.cloud.pubsub.topic.publish

As an event sink, deploy a Cloud Run service that logs the contents of the CloudEvent it receives.

Clone code and build container image

Clone a repository with samples in different languages (Node.js, Go, Java, Python, C#):

git clone https://github.com/gcpevents/eventsforcloudrun.git

You can build and deploy any of the samples you prefer. As an example, take a look at the Node.js sample in the eventsforcloudrun/node folder.

Build your container image using Cloud Build. Run the following from the directory containing the Dockerfile:

gcloud builds submit --tag gcr.io/$(gcloud config get-value project)/helloworld

Cloud Build builds the container image and pushes it to Container Registry, all in one command. Once pushed to the registry, you can check that the image is there by listing all the container images associated with your project:

gcloud container images list

Deploy to Cloud Run

Deploy your containerized application to Cloud Run:

export SERVICE_NAME=helloworld-events
gcloud run deploy ${SERVICE_NAME} \
  --namespace=${NAMESPACE} \
  --image gcr.io/$(gcloud config get-value project)/helloworld

On success, the command line displays the service URL, for instance:

Service [helloworld-events] revision [helloworld-events-00001] has been deployed and is serving traffic at [SERVICE URL]

You can now visit your deployed container by opening the service URL in any browser window.
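
If you need the service URL again later, you can also read it from the service's status (a sketch that relies on the run/platform and run/cluster settings configured earlier):

gcloud run services describe ${SERVICE_NAME} \
  --namespace=${NAMESPACE} \
  --format='value(status.url)'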

One way of receiving events is through Cloud Pub/Sub: custom applications can publish messages to Cloud Pub/Sub, and these messages can be delivered to Cloud Run sinks via Events for Cloud Run.

Create a topic

First, create a Cloud Pub/Sub topic. You can replace TOPIC_ID with a unique topic name you prefer:

export TOPIC_ID=cr-gke-topic
gcloud pubsub topics create ${TOPIC_ID}

Create a trigger

Before creating the trigger, get more details on the parameters you'll need to construct a trigger for events from Cloud Pub/Sub:

gcloud alpha events types describe com.google.cloud.pubsub.topic.publish

Create a trigger that filters events published to the Cloud Pub/Sub topic and delivers them to the deployed Cloud Run service:

gcloud alpha events triggers create trigger-pubsub \
  --namespace ${NAMESPACE} \
  --source CloudPubSubSource \
  --target-service ${SERVICE_NAME} \
  --type com.google.cloud.pubsub.topic.publish \
  --parameters topic=${TOPIC_ID}

Test the trigger

You can check that the trigger is created by listing all triggers:

gcloud alpha events triggers list

You might need to wait for up to 10 minutes for the trigger creation to be propagated and for it to begin filtering events.
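
Because triggers are backed by Knative Eventing resources on the cluster, you can also watch their readiness directly with kubectl (a quick check; the trigger is operational once READY shows True):

kubectl get triggers -n ${NAMESPACE}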

In order to simulate a custom application sending a message, you can use gcloud to fire an event:

gcloud pubsub topics publish ${TOPIC_ID} --message="Hello there"

The Cloud Run sink we created logs the body of the incoming message. You can view it in the Logs section of your Cloud Run service.

Note that "Hello there" will be base64 encoded as it was sent by Pub/Sub and you will have to decode it if you want to see the original message sent.

Delete the trigger

Optionally, you can delete the trigger once done testing.

gcloud alpha events triggers delete trigger-pubsub --namespace ${NAMESPACE}

You will set up a trigger to listen for events from Audit Logs. More specifically, you will look for Pub/Sub topic creation events in Audit Logs.

Enable Audit Logs

In order to receive events from a service, you need to enable Audit Logs. From the Cloud Console, select IAM & Admin > Audit Logs from the upper left-hand menu. In the list of services, check Google Cloud Pub/Sub.

On the right-hand side, make sure Admin, Read, and Write are selected, then click Save.

Test Audit Logs

To identify the parameters you'll need when setting up a trigger, perform an actual operation first.

For example, create a Pub/Sub topic:

gcloud pubsub topics create cre-gke-topic1

Now, let's see what kind of audit log this update generated. From the Cloud Console, select Logging > Logs Viewer from the upper left-hand menu.

Under Query Builder, choose Cloud Pub/Sub Topic and click Add.

Once you run the query, you'll see logs for Pub/Sub topics, and one of them should be google.pubsub.v1.Publisher.CreateTopic.

Note the serviceName, methodName, and resourceName; we'll use these when creating the trigger.
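
If you prefer the command line over the Logs Viewer, you can run an equivalent query with gcloud (a sketch; the filter mirrors what the Query Builder generates):

gcloud logging read \
  'protoPayload.methodName="google.pubsub.v1.Publisher.CreateTopic"' \
  --limit=5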

Create a trigger

You are now ready to create an event trigger for Audit Logs.

You can get more details on the parameters you'll need to construct a trigger for events from Google Cloud sources by running the following command:

gcloud alpha events types describe com.google.cloud.auditlog.event

Create the trigger with the right filters:

gcloud alpha events triggers create trigger-auditlog \
--namespace ${NAMESPACE} \
--target-service ${SERVICE_NAME} \
--type=com.google.cloud.auditlog.event \
--parameters serviceName=pubsub.googleapis.com \
--parameters methodName=google.pubsub.v1.Publisher.CreateTopic

Test the trigger

List all triggers to confirm that trigger was successfully created:

gcloud alpha events triggers list

Wait for up to 10 minutes for the trigger creation to be propagated and for it to begin filtering events. Once ready, it will filter topic creation events and send them to the service. You're now ready to fire an event.

Create another Pub/Sub topic, as you did earlier:

gcloud pubsub topics create cre-gke-topic2

If you check the logs of the Cloud Run service in Cloud Console, you should see the received event.

Delete the trigger and topics

Optionally, you can delete the trigger once done testing:

gcloud alpha events triggers delete trigger-auditlog

Also delete the topics:

gcloud pubsub topics delete cre-gke-topic1 cre-gke-topic2

You will set up a trigger to listen for events from Cloud Storage.

Create a bucket

First, create a Cloud Storage bucket in the same region as the deployed Cloud Run service. You can replace BUCKET_NAME with a unique name you prefer:

export BUCKET_NAME=[new bucket name]
export REGION=europe-west1

gsutil mb -p $(gcloud config get-value project) \
  -l $REGION \
  gs://$BUCKET_NAME/

Set up Cloud Storage permissions

Before creating a trigger, you need to give the default service account for Cloud Storage permission to publish to Pub/Sub.

First, you need to find the service account that Cloud Storage uses to publish to Pub/Sub. You can find it in the Cloud Console or via the JSON API. Assuming the service account you found was service-XYZ@gs-project-accounts.iam.gserviceaccount.com, set it in an environment variable:

export GCS_SERVICE_ACCOUNT=service-XYZ@gs-project-accounts.iam.gserviceaccount.com
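
Alternatively, recent versions of gsutil can print this service account for you (a sketch; verify the output before relying on it):

export GCS_SERVICE_ACCOUNT=$(gsutil kms serviceaccount -p $(gcloud config get-value project))
echo ${GCS_SERVICE_ACCOUNT}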

Then, grant rights to that Service Account to publish to Pub/Sub:

gcloud projects add-iam-policy-binding ${GOOGLE_CLOUD_PROJECT} \
--member=serviceAccount:${GCS_SERVICE_ACCOUNT} \
--role roles/pubsub.publisher

Create a trigger

You are now ready to create an event trigger for Cloud Storage events.

You can get more details on the parameters you'll need to construct the trigger:

gcloud alpha events types describe com.google.cloud.storage.object.finalize

Create the trigger with the right filters:

gcloud alpha events triggers create trigger-storage \
--namespace ${NAMESPACE} \
--target-service ${SERVICE_NAME} \
--type=com.google.cloud.storage.object.finalize \
--parameters bucket=${BUCKET_NAME}

Test the trigger

List all triggers to confirm that trigger was successfully created:

gcloud alpha events triggers list

Wait for up to 10 minutes for the trigger creation to be propagated and for it to begin filtering events. Once ready, it will filter object creation events and send them to the service.

You're now ready to fire an event.

Upload a random file to the Cloud Storage bucket:

echo "Hello World" > random.txt
gsutil cp random.txt gs://${BUCKET_NAME}/random.txt

If you check the logs of the Cloud Run service in Cloud Console, you should see the received event.

Delete the trigger

Optionally, you can delete the trigger once done testing:

gcloud alpha events triggers delete trigger-storage

You will set up a trigger to listen for events from Cloud Scheduler.

Create an App Engine application

Cloud Scheduler currently requires users to create an App Engine application. Pick an App Engine location and create the app:

export APP_ENGINE_LOCATION=europe-west
gcloud app create --region=${APP_ENGINE_LOCATION}

Create a trigger

You can get more details on the parameters you'll need to construct a trigger for events from Google Cloud sources by running the following command:

gcloud alpha events types describe com.google.cloud.scheduler.job.execute

Pick a Cloud Scheduler location to create the scheduler:

export SCHEDULER_LOCATION=europe-west1

Create a Trigger that will create a job to be executed every minute in Google Cloud Scheduler and call the target service:

gcloud alpha events triggers create trigger-scheduler \
--namespace ${NAMESPACE} \
--target-service=${SERVICE_NAME} \
--type=com.google.cloud.scheduler.job.execute \
--parameters location=${SCHEDULER_LOCATION} \
--parameters schedule="* * * * *" \
--parameters data="trigger-scheduler-data"
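
Because this trigger provisions an actual Cloud Scheduler job, you can also confirm the job exists (a quick check; the job name is generated for you):

gcloud scheduler jobs list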

Test the trigger

List all triggers to confirm that trigger was successfully created:

gcloud alpha events triggers list

Wait for up to 10 minutes for the trigger creation to be propagated and for it to begin filtering events. Once ready, it will filter job execution events and send them to the service.

If you check the logs of the Cloud Run service in Cloud Console, you should see the received event.

Delete the trigger

Optionally, you can delete the trigger once done testing:

gcloud alpha events triggers delete trigger-scheduler

In this part of the codelab, you will produce and consume custom events using the Broker.

Create a curl Pod to produce events

Events are usually created programmatically. However, in this step, you will use curl to manually send individual events and see how these events are received by the correct consumer.

To create a Pod that acts as an event producer, run the following command:

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: curl
  name: curl
  namespace: $NAMESPACE
spec:
  containers:
  - image: radial/busyboxplus:curl
    imagePullPolicy: IfNotPresent
    name: curl
    resources: {}
    stdin: true
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    tty: true
EOF

Verify that the curl Pod is working correctly. You should see a pod called curl with Status=Running:

kubectl get pod curl -n ${NAMESPACE}

Create a trigger

You will create a Trigger with a filter on the particular CloudEvents type (in this case, alpha-type) that you will emit. Note that exact-match filtering on any number of CloudEvents attributes, as well as extensions, is supported. If your filter sets multiple attributes, an event must have all of those attributes for the Trigger to deliver it. Conversely, if you don't specify a filter, all events will be delivered to your Service.

Create the trigger:

gcloud alpha events triggers create trigger-custom \
--namespace ${NAMESPACE} \
--target-service ${SERVICE_NAME} \
--type=alpha-type \
--custom-type
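
Under the hood, this creates a Knative Trigger on the cluster. You can inspect its filter to confirm the type attribute was set (a sketch, assuming the Knative Trigger is created with the same name as the gcloud trigger):

kubectl describe trigger trigger-custom -n ${NAMESPACE}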

Test the trigger

List all triggers to confirm that trigger was successfully created:

gcloud alpha events triggers list

Create an event by sending an HTTP request to the Broker. Remind yourself of the Broker URL by running the following:

kubectl get brokers -n ${NAMESPACE}

NAME      READY   REASON   URL
default   True             http://default-broker.<NAMESPACE>.svc.cluster.local

SSH into the curl pod you created earlier:

kubectl --namespace ${NAMESPACE} attach curl -it

You have SSHed into the pod and can now make an HTTP request. A prompt similar to the one below will appear:

Defaulting container name to curl.
Use 'kubectl describe pod/curl -n default' to see all of the containers in this pod.
If you don't see a command prompt, try pressing enter.
[ root@curl:/ ]$

Create an event:

curl -v "http://default-broker.<NAMESPACE>.svc.cluster.local" \
-X POST \
-H "Ce-Id: my-id" \
-H "Ce-Specversion: 1.0" \
-H "Ce-Type: alpha-type" \
-H "Ce-Source: my-source" \
-H "Content-Type: application/json" \
-d '{"msg":"send-cloudevents-to-broker"}'

If the event has been received, you will get an HTTP 202 Accepted response. Exit the SSH session with Ctrl+D.

Verify that the published event was delivered by looking at the logs of the Cloud Run service:

kubectl logs --selector serving.knative.dev/service=$SERVICE_NAME \
 -c user-container -n $NAMESPACE --tail=100

You should see log lines similar to:

Event received!
HEADERS:
{"host":"helloworld-events.default.svc.cluster.local","user-agent":"Go-http-client/1.1","content-length":"36","accept-encoding":"gzip","ce-id":"my-id","ce-knativearrivaltime":"2020-05-04T15:02:19.513025661Z","ce-source":"my-source","ce-specversion":"1.0","ce-time":"2020-05-04T15:02:19.513105285Z","ce-traceparent":"00-5ac0b8691135662893f51600ed926980-8ef7d9d8e4f52ab5-00","ce-type":"alpha-type","content-type":"application/json","forwarded":"for=10.40.1.6;proto=http, for=10.40.0.9","k-proxy-request":"activator","x-b3-parentspanid":"60574e7a9c018c97","x-b3-sampled":"0","x-b3-spanid":"ad5d0cd62b127b31","x-b3-traceid":"5ac0b8691135662893f51600ed926980","x-envoy-decorator-operation":"helloworld-events-00001-pow.default.svc.cluster.local:80/*","x-envoy-expected-rq-timeout-ms":"900000","x-envoy-internal":"true","x-forwarded-for":"10.40.1.6, 10.40.0.9, 10.40.2.4","x-forwarded-proto":"http","x-request-id":"78e808fb-4ce3-90f9-a606-fe008f39bb50"}
BODY:
{"msg":"send-cloudevents-to-broker"}

Delete the trigger

Optionally, you can delete the trigger once done testing:

gcloud alpha events triggers delete trigger-custom

Congratulations on completing the codelab!

What we've covered

  1. Events for Cloud Run for Anthos and its event sources
  2. Creating a GKE cluster with Cloud Run Events and setting up the control and data planes
  3. Triggers for Cloud Pub/Sub, Audit Logs, Cloud Storage, and Cloud Scheduler events
  4. Producing and consuming custom events through the Broker