1. Introduction
Cloud Run allows you to run stateless containers in a fully managed environment. It is built from open-source Knative, letting you choose to run your containers either fully managed with Cloud Run, or in your Google Kubernetes Engine cluster with Cloud Run for Anthos.
Events for Cloud Run for Anthos makes it easy to connect Cloud Run services with events from a variety of sources. It allows you to build event-driven architectures in which microservices are loosely coupled and distributed. It also takes care of event ingestion, delivery, security, authorization, and error handling for you, which improves developer agility and application resilience.
In this codelab, you will learn about Events for Cloud Run for Anthos. More specifically, you will listen to events from Cloud Pub/Sub, Audit Logs, Cloud Storage, and Cloud Scheduler, and learn how to produce and consume custom events.
What you'll learn
- Long term vision of Events for Cloud Run for Anthos
- Current state of Events for Cloud Run for Anthos
- Create a Cloud Run sink
- Create an Event trigger for Cloud Pub/Sub
- Create an Event trigger for Audit Logs
- Create an Event trigger for Cloud Storage
- Create an Event trigger for Cloud Scheduler
- Produce and consume custom events
2. Long Term Vision
As we adopt serverless architecture, events become an integral part of how de-coupled microservices communicate. Events for Cloud Run for Anthos makes events a first-class citizen of the Cloud Run for Anthos offering, so that it is easy to build event-driven serverless applications.
Events for Cloud Run for Anthos enables reliable, secure and scalable asynchronous event delivery from packaged or app-created event sources to on-cluster and off-cluster consumers.
Google Cloud sources | Event sources that are Google Cloud-owned products.
Google sources | Event sources that are Google-owned products such as Gmail, Hangouts, Android Management, and more.
Custom sources | Event sources that are not Google-owned products and are created by end users themselves. These could be user-developed Knative Sources or any other app running on the cluster that can produce a CloudEvent.
3rd-party sources | Event sources that are neither Google-owned nor end-user-owned. This includes popular event sources such as GitHub, SAP, Datadog, PagerDuty, etc. that are owned and maintained by third-party providers, partners, or OSS communities.
Events are normalized to CloudEvents v1.0 format for cross-service interoperability. CloudEvents is a vendor-neutral open spec describing event data in common formats, enabling interoperability across services, platforms and systems.
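For illustration, here is a minimal sketch of a CloudEvents v1.0 event in JSON structured mode (the attribute values below are hypothetical):

{
  "specversion": "1.0",
  "type": "google.cloud.pubsub.topic.v1.messagePublished",
  "source": "//pubsub.googleapis.com/projects/my-project/topics/my-topic",
  "id": "1234567890",
  "time": "2020-10-05T12:00:00Z",
  "datacontenttype": "application/json",
  "data": {"message": {"data": "SGVsbG8gdGhlcmU="}}
}

The four required attributes are specversion, type, source, and id; everything else is optional or extension data.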
Events for Cloud Run is conformant with Knative Eventing and allows portability of containers to and from other Knative-based implementations. This provides a consistent, cloud-agnostic framework for declaratively wiring event producers with event consumers.
3. Current state
This preview is the first release, delivering an initial subset of the long-term functionality.
To enable users to build event-driven serverless applications, our initial focus is twofold:
- Provide a wide ecosystem of Google Cloud Sources that enables Cloud Run services on the Anthos cluster to react to events from Google Cloud services.
- At the outset, events from Google Cloud Sources are delivered by way of Cloud Audit Logs (CAL), enabling a breadth of event sources. The latency and availability of event delivery from these sources are tied to those of Cloud Audit Logs. Whenever an event from a Google Cloud source is published, a corresponding Cloud Audit Log entry is created.
- Along with Cloud Audit Logs, there is first-class support for consuming events from Cloud Storage, Cloud Pub/Sub, and Cloud Scheduler. We will keep expanding this ecosystem of sources with more first-class sources as we learn more from user journeys and feedback.
- Enable end-user applications and services to emit custom events by publishing to a namespace-scoped, cluster-local Broker URL.
The underlying delivery mechanism uses Cloud Pub/Sub topics and subscriptions that are visible in customers' projects. Hence the feature provides the same delivery guarantees as Cloud Pub/Sub.
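For example, after you create triggers later in this codelab, you can list the transport topics and subscriptions the feature manages in your project (their exact generated names are an implementation detail):

gcloud pubsub topics list
gcloud pubsub subscriptions list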
An Event Trigger provides a way to subscribe to events so that events matching the trigger's filter are delivered to the destination (or sink) that the Trigger points to.
All events are delivered in the CloudEvents v1.0 format for cross-service interoperability.
We will keep delivering more value in an iterative manner all the way to GA and beyond.
4. Setup and Requirements
Self-paced environment setup
- Sign in to Cloud Console and create a new project or reuse an existing one. If you don't already have a Gmail or Google Workspace account, you must create one.
- The Project Name is your display name for this project. As long as you follow its naming conventions, you can use anything you want and can update it at any time.
- The Project ID must be unique across all Google Cloud projects and is immutable (cannot be changed once set). The Cloud Console auto-generates a unique string; usually you don't care what it is. In most codelabs, you'll need to reference the Project ID (typically identified as PROJECT_ID), so if you don't like it, generate another random one, or try your own and see if it's available. It is then "frozen" once the project is created.
- Next, you'll need to enable billing in Cloud Console in order to use Google Cloud resources.
Running through this codelab shouldn't cost much, if anything at all. Be sure to follow any instructions in the "Cleaning up" section, which advises you how to shut down resources so you don't incur billing beyond this tutorial. New users of Google Cloud are eligible for the $300 USD Free Trial program.
Start Cloud Shell
While Google Cloud can be operated remotely from your laptop, in this codelab you will be using Google Cloud Shell, a command line environment running in the Cloud.
From the GCP Console click the Cloud Shell icon on the top right toolbar:
It should only take a few moments to provision and connect to the environment. When it is finished, you should see something like this:
This virtual machine is loaded with all the development tools you'll need. It offers a persistent 5GB home directory, and runs on Google Cloud, greatly enhancing network performance and authentication. All of your work in this lab can be done with simply a browser.
5. Enable APIs, set zone and platform
Set up project ID and install alpha components
Inside Cloud Shell, GOOGLE_CLOUD_PROJECT should already be set to your project ID. If not, make sure it is set and that gcloud is configured with that project ID:
export GOOGLE_CLOUD_PROJECT=your-project-id
gcloud config set project ${GOOGLE_CLOUD_PROJECT}
Make sure the gcloud alpha component is installed:
gcloud components install alpha
Enable APIs
Enable all necessary services:
gcloud services enable cloudapis.googleapis.com
gcloud services enable container.googleapis.com
gcloud services enable containerregistry.googleapis.com
gcloud services enable cloudbuild.googleapis.com
Set zone and platform
Before creating a GKE cluster with Cloud Run Events, set the cluster name, zone, and platform. As an example, here we set the name to events-cluster, the zone to europe-west1-b, and the platform to gke.
In Cloud Shell:
export CLUSTER_NAME=events-cluster
export CLUSTER_ZONE=europe-west1-b
gcloud config set run/cluster ${CLUSTER_NAME}
gcloud config set run/cluster_location ${CLUSTER_ZONE}
gcloud config set run/platform gke
You can check that the configuration is set:
gcloud config list

...
[run]
cluster = events-cluster
cluster_location = europe-west1-b
platform = gke
6. Create a GKE cluster with Cloud Run Events
Create a GKE cluster running Kubernetes >= 1.15.9-gke.26, with the following addons enabled: CloudRun, HttpLoadBalancing, HorizontalPodAutoscaling:
gcloud beta container clusters create ${CLUSTER_NAME} \
  --addons=HttpLoadBalancing,HorizontalPodAutoscaling,CloudRun \
  --machine-type=n1-standard-4 \
  --enable-autoscaling --min-nodes=3 --max-nodes=10 \
  --no-issue-client-certificate --num-nodes=3 --image-type=cos \
  --enable-stackdriver-kubernetes \
  --scopes=cloud-platform,logging-write,monitoring-write,pubsub \
  --zone ${CLUSTER_ZONE} \
  --release-channel=rapid
7. Setup Cloud Run Events (Control Plane)
Cloud Run Events has a Control Plane and a Data Plane that need to be set up separately. To set up the Control Plane:
In Cloud Shell:
gcloud beta events init
This initializes eventing and also creates the service accounts needed. Make sure you select 'Yes' when prompted for service account creation.
At this point the control plane should be properly initialized. You should see four pods with a Running status: two (controller-xxx-xxx and webhook-xxx-xxx) in the cloud-run-events namespace, and two (eventing-controller-xxx-xxx and eventing-webhook-xxx-xxx) in the knative-eventing namespace. You can check by executing the following commands:
kubectl get pods -n cloud-run-events

NAME                         READY   STATUS    RESTARTS   AGE
controller-9cc679b67-2952n   1/1     Running   0          22s
webhook-8576c4cfcb-dhz82     1/1     Running   0          16m
kubectl get pods -n knative-eventing

NAME                                   READY   STATUS    RESTARTS   AGE
eventing-controller-77f46f6cf8-kj9ck   1/1     Running   0          17m
eventing-webhook-5bc787965f-hcmwg      1/1     Running   0          17m
8. Setup Cloud Run Events (Data Plane)
Next, set up the data plane in the user namespaces. This creates a Broker with the appropriate permissions to read from and write to Pub/Sub.
Inside Cloud Shell, set a NAMESPACE environment variable for the namespace you want to use for your objects. You can set it to default if you want to use the default namespace, as shown below:
export NAMESPACE=default
Note that if the specified namespace does not exist (i.e., it is not default), you need to create it:
kubectl create namespace ${NAMESPACE}
Initialize the namespace with the default secret:
gcloud beta events namespaces init ${NAMESPACE} --copy-default-secret
Create a default broker in the namespace:
gcloud beta events brokers create default --namespace ${NAMESPACE}
Check that the broker is created. Note that it may take a few seconds until the broker becomes ready:
kubectl get broker -n ${NAMESPACE}

NAME      READY   REASON   URL
default   True             http://default-brokercell-ingress.cloud-run-events.svc.cluster.local/default/default
9. Event Discovery
You can discover what the registered sources are, the types of events they can emit, and how to configure triggers in order to consume them.
To see the list of different types of events:
gcloud beta events types list
To get more information about each event type:
gcloud beta events types describe google.cloud.pubsub.topic.v1.messagePublished
10. Create a Cloud Run Sink
As an event sink, deploy a Cloud Run service that logs the contents of the CloudEvent it receives.
You can use Knative's event_display, which is already compiled and whose container image is built as part of the Knative release. You can see the container image details in event-display.yaml:
...
containers:
- image: gcr.io/knative-releases/knative.dev/eventing-contrib/cmd/event_display@sha256:8da2440b62a5c077d9882ed50397730e84d07037b1c8a3e40ff6b89c37332b27
Deploy to Cloud Run
Deploy your containerized application to Cloud Run:
export SERVICE_NAME=event-display
gcloud run deploy ${SERVICE_NAME} \
  --namespace=${NAMESPACE} \
  --image gcr.io/knative-releases/knative.dev/eventing-contrib/cmd/event_display@sha256:8da2440b62a5c077d9882ed50397730e84d07037b1c8a3e40ff6b89c37332b27
On success, the command line displays the service URL. You can now visit your deployed container by opening the service URL in any browser window.
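If you'd rather fetch the URL from the command line, a minimal sketch (this assumes the run/platform, run/cluster, and run/cluster_location values set earlier in this codelab):

gcloud run services describe ${SERVICE_NAME} \
  --namespace=${NAMESPACE} \
  --format='value(status.url)'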
11. Create an Event trigger for Cloud Pub/Sub
One way of receiving events is through Cloud Pub/Sub. Custom applications can publish messages to Cloud Pub/Sub, and these messages can be delivered to Cloud Run sinks via Events for Cloud Run for Anthos.
Create a topic
First, create a Cloud Pub/Sub topic. You can replace TOPIC_ID with a unique topic name you prefer:
export TOPIC_ID=cr-gke-topic
gcloud pubsub topics create ${TOPIC_ID}
Create a trigger
Before creating the trigger, get more details on the parameters you'll need to construct a trigger for events from Cloud Pub/Sub:
gcloud beta events types describe google.cloud.pubsub.topic.v1.messagePublished
Create a trigger to filter events published to the Cloud Pub/Sub topic to our deployed Cloud Run service:
gcloud beta events triggers create trigger-pubsub \
  --namespace ${NAMESPACE} \
  --target-service ${SERVICE_NAME} \
  --type google.cloud.pubsub.topic.v1.messagePublished \
  --parameters topic=${TOPIC_ID}
Test the trigger
You can check that the trigger is created by listing all triggers:
gcloud beta events triggers list
You might need to wait for up to 10 minutes for the trigger creation to be propagated and for it to begin filtering events.
To simulate a custom application sending a message, you can use gcloud to fire an event:
gcloud pubsub topics publish ${TOPIC_ID} --message="Hello there"
The Cloud Run sink we created logs the body of the incoming message. You can view this in the Logs section of your Cloud Run instance:
Note that "Hello there" will be base64 encoded as it was sent by Pub/Sub and you will have to decode it if you want to see the original message sent.
Delete the trigger
Optionally, you can delete the trigger once done testing.
gcloud beta events triggers delete trigger-pubsub --namespace ${NAMESPACE}
12. Create an Event trigger for Audit Logs
You will set up a trigger to listen for events from Audit Logs. More specifically, you will look for Pub/Sub topic creation events in Audit Logs.
Enable Audit Logs
In order to receive events from a service, you need to enable audit logs for it. From the Cloud Console, select IAM & Admin > Audit Logs from the upper left-hand menu. In the list of services, check Google Cloud Pub/Sub:
On the right-hand side, make sure Admin, Read, and Write are selected. Click Save:
Test Audit Logs
To identify the parameters you'll need when setting up an actual trigger, first perform a real operation.
For example, create a Pub/Sub topic:
gcloud pubsub topics create cre-gke-topic1
Now, let's see what kind of audit log this update generated. From the Cloud Console, select Logging > Logs Viewer from the upper left-hand menu.
Under Query Builder, choose Cloud Pub/Sub Topic and click Add:
Once you run the query, you'll see logs for Pub/Sub topics, and one of those should be google.pubsub.v1.Publisher.CreateTopic:
Note the serviceName, methodName, and resourceName. We'll use these in creating the trigger.
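If you prefer typing the query instead of using the Query Builder, a filter along these lines should surface the same entries (the pubsub_topic resource type is an assumption here):

resource.type="pubsub_topic"
protoPayload.methodName="google.pubsub.v1.Publisher.CreateTopic"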
Create a trigger
You are now ready to create an event trigger for Audit Logs.
You can get more details on the parameters you'll need to construct a trigger for events from Google Cloud sources by running the following command:
gcloud beta events types describe google.cloud.audit.log.v1.written
Create the trigger with the right filters:
gcloud beta events triggers create trigger-auditlog \
  --namespace ${NAMESPACE} \
  --target-service ${SERVICE_NAME} \
  --type=google.cloud.audit.log.v1.written \
  --parameters serviceName=pubsub.googleapis.com \
  --parameters methodName=google.pubsub.v1.Publisher.CreateTopic
Test the trigger
List all triggers to confirm that the trigger was successfully created:
gcloud beta events triggers list
Wait for up to 10 minutes for the trigger creation to be propagated and for it to begin filtering events. Once ready, it will filter topic creation events and send them to the service. You're now ready to fire an event.
Create another Pub/Sub topic, as you did earlier:
gcloud pubsub topics create cre-gke-topic2
If you check the logs of the Cloud Run service in Cloud Console, you should see the received event:
Delete the trigger and topics
Optionally, you can delete the trigger once done testing:
gcloud beta events triggers delete trigger-auditlog --namespace ${NAMESPACE}
Also delete the topics:
gcloud pubsub topics delete cre-gke-topic1 cre-gke-topic2
13. Create an Event trigger for Cloud Storage
You will set up a trigger to listen for events from Cloud Storage.
Create a bucket
First, create a Cloud Storage bucket in the same region as the deployed Cloud Run service. You can replace BUCKET_NAME with a unique name you prefer:
export BUCKET_NAME=[new bucket name]
export REGION=europe-west1
gsutil mb -p $(gcloud config get-value project) \
  -l $REGION \
  gs://$BUCKET_NAME/
Setup Cloud Storage permissions
Before creating a trigger, you need to give the default service account for Cloud Storage permission to publish to Pub/Sub.
First, you need to find the Service Account that Cloud Storage uses to publish to Pub/Sub. You can use the steps outlined in Getting the Cloud Storage service account to get the service account or use the following command:
curl -X GET -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  "https://storage.googleapis.com/storage/v1/projects/$(gcloud config get-value project)/serviceAccount"
The service account should be listed under email_address.
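As an alternative sketch, recent versions of gsutil can print the same service agent directly:

gsutil kms serviceaccount -p $(gcloud config get-value project)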
Assuming the service account you found above was service-XYZ@gs-project-accounts.iam.gserviceaccount.com, set it in an environment variable:
export GCS_SERVICE_ACCOUNT=service-XYZ@gs-project-accounts.iam.gserviceaccount.com
Then, grant rights to that Service Account to publish to Pub/Sub:
gcloud projects add-iam-policy-binding ${GOOGLE_CLOUD_PROJECT} \
  --member=serviceAccount:${GCS_SERVICE_ACCOUNT} \
  --role roles/pubsub.publisher
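To double-check the binding, you can query the project's IAM policy (a sketch using standard gcloud filtering):

gcloud projects get-iam-policy ${GOOGLE_CLOUD_PROJECT} \
  --flatten="bindings[].members" \
  --filter="bindings.role:roles/pubsub.publisher" \
  --format="value(bindings.members)"

The Cloud Storage service account should appear in the output.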
Create a trigger
You are now ready to create an event trigger for Cloud Storage events.
You can get more details on the parameters you'll need to construct the trigger:
gcloud beta events types describe google.cloud.storage.object.v1.finalized
Create the trigger with the right filters:
gcloud beta events triggers create trigger-storage \
  --namespace ${NAMESPACE} \
  --target-service ${SERVICE_NAME} \
  --type=google.cloud.storage.object.v1.finalized \
  --parameters bucket=${BUCKET_NAME}
Test the trigger
List all triggers to confirm that the trigger was successfully created:
gcloud beta events triggers list
Wait for up to 10 minutes for the trigger creation to be propagated and for it to begin filtering events. Once ready, it will filter object creation (finalize) events and send them to the service.
You're now ready to fire an event.
Upload a random file to the Cloud Storage bucket:
echo "Hello World" > random.txt gsutil cp random.txt gs://${BUCKET_NAME}/random.txt
If you check the logs of the Cloud Run service in Cloud Console, you should see the received event:
Delete the trigger
Optionally, you can delete the trigger once done testing:
gcloud beta events triggers delete trigger-storage --namespace ${NAMESPACE}
14. Create an Event trigger for Cloud Scheduler
You will set up a trigger to listen for events from Cloud Scheduler.
Create an App Engine application
Cloud Scheduler currently requires users to create an App Engine application. Pick an App Engine location and create the app:
export APP_ENGINE_LOCATION=europe-west
gcloud app create --region=${APP_ENGINE_LOCATION}
Create a trigger
You can get more details on the parameters you'll need to construct a trigger for events from Google Cloud sources by running the following command:
gcloud beta events types describe google.cloud.scheduler.job.v1.executed
Pick a Cloud Scheduler location to create the scheduler:
export SCHEDULER_LOCATION=europe-west1
Create a Trigger that will create a job to be executed every minute in Google Cloud Scheduler and call the target service:
gcloud beta events triggers create trigger-scheduler \
  --namespace ${NAMESPACE} \
  --target-service=${SERVICE_NAME} \
  --type=google.cloud.scheduler.job.v1.executed \
  --parameters location=${SCHEDULER_LOCATION} \
  --parameters schedule="* * * * *" \
  --parameters data="trigger-scheduler-data"
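The schedule parameter uses the standard unix-cron format. A few illustrative values:

* * * * *      # every minute (used above)
*/5 * * * *    # every 5 minutes
0 9 * * 1      # at 09:00 every Monday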
Test the trigger
List all triggers to confirm that the trigger was successfully created:
gcloud beta events triggers list
Wait for up to 10 minutes for the trigger creation to be propagated and for it to begin filtering events. Once ready, it will filter job execution events and send them to the service.
If you check the logs of the Cloud Run service in Cloud Console, you should see the received event.
Delete the trigger
Optionally, you can delete the trigger once done testing:
gcloud beta events triggers delete trigger-scheduler --namespace ${NAMESPACE}
15. Custom Events (Broker Endpoint)
In this part of the codelab, you will produce and consume custom events using the Broker.
Create Curl Pod to Produce Events
Events are usually created programmatically. However, in this step, you will use curl to manually send individual events and see how these events are received by the correct consumer.
To create a Pod that acts as an event producer, run the following command:
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: curl
  name: curl
  namespace: $NAMESPACE
spec:
  containers:
  - image: radial/busyboxplus:curl
    imagePullPolicy: IfNotPresent
    name: curl
    resources: {}
    stdin: true
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    tty: true
EOF
Verify that the curl Pod is working correctly. You should see a pod called curl with Status=Running:
kubectl get pod curl -n ${NAMESPACE}
Create a trigger
You will create a Trigger with a filter on the particular CloudEvents type you will emit (in this case, alpha-type). Note that exact-match filtering on any number of CloudEvents attributes, as well as extensions, is supported. If your filter sets multiple attributes, an event must have all of those attributes for the Trigger to match it. Conversely, if you don't specify a filter, all events will be delivered to your Service.
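Since Events for Cloud Run is conformant with Knative Eventing, the gcloud command below corresponds roughly to a Knative Trigger object. A sketch of the equivalent YAML, assuming the eventing.knative.dev/v1beta1 API version is the one served by your cluster:

apiVersion: eventing.knative.dev/v1beta1
kind: Trigger
metadata:
  name: trigger-custom
  namespace: default
spec:
  broker: default
  filter:
    attributes:
      type: alpha-type    # exact match on the CloudEvents "type" attribute
  subscriber:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: event-display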
Create the trigger:
gcloud beta events triggers create trigger-custom \
  --namespace ${NAMESPACE} \
  --target-service ${SERVICE_NAME} \
  --type=alpha-type \
  --custom-type
Test the trigger
List all triggers to confirm that the trigger was successfully created:
gcloud beta events triggers list
Create an event by sending an HTTP request to the Broker. Remind yourself of the Broker URL by running the following:
kubectl get brokers -n ${NAMESPACE}

NAME      READY   REASON   URL
default   True             http://default-broker.<NAMESPACE>.svc.cluster.local
SSH into the curl pod you created earlier:
kubectl --namespace ${NAMESPACE} attach curl -it
You have SSHed into the pod and can now make an HTTP request. A prompt similar to the one below will appear:
Defaulting container name to curl.
Use 'kubectl describe pod/curl -n default' to see all of the containers in this pod.
If you don't see a command prompt, try pressing enter.
[ root@curl:/ ]$
Create an event:
curl -v "<BROKER-URL>" \
  -X POST \
  -H "Ce-Id: my-id" \
  -H "Ce-Specversion: 1.0" \
  -H "Ce-Type: alpha-type" \
  -H "Ce-Source: my-source" \
  -H "Content-Type: application/json" \
  -d '{"msg":"send-cloudevents-to-broker"}'
If the event has been received, you will get an HTTP 202 Accepted response. Exit the SSH session with Ctrl + D.
Verify that the published event was sent by looking at the logs of the Cloud Run Service:
kubectl logs --selector serving.knative.dev/service=$SERVICE_NAME \
  -c user-container -n $NAMESPACE --tail=100
Delete the trigger
Optionally, you can delete the trigger once done testing:
gcloud beta events triggers delete trigger-custom --namespace ${NAMESPACE}
16. Congratulations!
Congratulations on completing the codelab!
What we've covered
- Long term vision of Events for Cloud Run for Anthos
- Current state of Events for Cloud Run for Anthos
- Create a Cloud Run sink
- Create an Event trigger for Cloud Pub/Sub
- Create an Event trigger for Audit Logs
- Create an Event trigger for Cloud Storage
- Create an Event trigger for Cloud Scheduler
- Produce and consume custom events