Getting started with Cloud Functions (2nd gen)

1. Introduction

This codelab walks you through creating Cloud Functions (2nd gen).

Overview

Cloud Functions (2nd gen) is the next version of Google Cloud Functions, Google Cloud's Functions-as-a-Service offering. This new version comes with an advanced feature set and is now powered by Cloud Run and Eventarc, giving you finer control over performance and scalability, more control over the function runtime, and triggers from more than 90 event sources.

This codelab will walk you through creating Cloud Functions that respond to HTTP calls, and get triggered by Pub/Sub messages and Cloud Audit Logs.

What's New?

This new version of Cloud Functions provides an enhanced FaaS experience powered by Cloud Run, Cloud Build, Artifact Registry and Eventarc.

Enhanced Infrastructure

  • Longer request processing: Run your functions longer than the 5-minute default, making it easier to handle longer workloads such as processing large streams of data from Cloud Storage or BigQuery. For HTTP functions, the timeout can be up to 60 minutes. For event-driven functions, it is currently up to 10 minutes.
  • Larger instances: Take advantage of up to 16GB of RAM and 4 vCPUs, allowing larger in-memory, compute-intensive, and more parallel workloads.
  • Concurrency: Process up to 1000 concurrent requests with a single function instance, minimizing cold starts and improving latency when scaling.
  • Minimum instances: Provision pre-warmed instances to cut cold starts and make sure the bootstrap time of your application does not impact its performance.
  • Traffic splitting: Support multiple versions of your functions, split traffic between different versions, and roll your function back to a prior version.

Broader Event coverage and CloudEvents support

  • Eventarc integration: Cloud Functions now includes native support for Eventarc, which brings more than 90 event sources via Cloud Audit Logs (BigQuery, Cloud SQL, Cloud Storage, and more). Cloud Functions still supports events from custom sources published directly to Cloud Pub/Sub.
  • CloudEvent format: All event-driven functions adhere to the industry-standard CloudEvents specification ( cloudevents.io), regardless of the source, to ensure a consistent developer experience. Payloads are delivered as structured CloudEvents with the event data in cloudevent.data.
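To make the envelope concrete, here is a minimal Python sketch of the structured CloudEvent shape an event-driven function receives. All field values below are invented for illustration; a real event carries source-specific attributes and data.

```python
# Illustrative shape of a structured CloudEvent as received by an
# event-driven function. All field values here are made up.
cloud_event = {
    "specversion": "1.0",
    "id": "1234567890",  # unique event ID
    "source": "//pubsub.googleapis.com/projects/my-project/topics/my-topic",
    "type": "google.cloud.pubsub.topic.v1.messagePublished",
    "time": "2024-01-01T00:00:00Z",
    "data": {  # event-specific payload lives under "data"
        "message": {"data": "SGVsbG8gV29ybGQ="}
    },
}

# Handlers typically read the standard attributes plus the data payload:
print(cloud_event["type"])
print(cloud_event["data"])
```

Whatever the source, the standard attributes (id, source, type, time) are always present, which is what makes the developer experience consistent.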

What you'll learn

  • Overview of Cloud Functions (2nd gen).
  • How to write a function that responds to HTTP calls.
  • How to write a function that responds to Pub/Sub messages.
  • How to write a function that responds to Cloud Storage events.
  • How to write a function that responds to Cloud Audit Logs.
  • How to split traffic between two revisions.
  • How to get rid of cold starts with minimum instances.
  • How to set concurrency.

2. Setup and Requirements

Self-paced environment setup

  1. Sign in to the Google Cloud Console and create a new project or reuse an existing one. If you don't already have a Gmail or Google Workspace account, you must create one.


  • The Project name is the display name for this project's participants. It is a character string not used by Google APIs. You can update it at any time.
  • The Project ID must be unique across all Google Cloud projects and is immutable (it cannot be changed after it has been set). The Cloud Console auto-generates a unique string; usually you won't care what it is. In most codelabs, you'll need to reference the Project ID (typically identified as PROJECT_ID). If you don't like the generated ID, you can generate another random one, or try your own and see if it's available.
  • For your information, there is a third value, a Project Number which some APIs use. Learn more about all three of these values in the documentation.
  2. Next, you'll need to enable billing in the Cloud Console to use Cloud resources/APIs. Running through this codelab shouldn't cost much, if anything at all. To shut down resources so you don't incur billing beyond this tutorial, you can delete the resources you created or delete the whole project. New users of Google Cloud are eligible for the $300 USD Free Trial program.

Start Cloud Shell

While Google Cloud can be operated remotely from your laptop, in this codelab you will be using Google Cloud Shell, a command line environment running in the Cloud.

From the Google Cloud Console, click the Cloud Shell icon on the top right toolbar.

It should only take a few moments to provision and connect to the environment. When it is finished, a terminal session appears at the bottom of the Console.

This virtual machine is loaded with all the development tools you'll need. It offers a persistent 5GB home directory, and runs on Google Cloud, greatly enhancing network performance and authentication. All of your work in this codelab can be done within a browser. You do not need to install anything.

Set up gcloud

In Cloud Shell, make sure that your project ID is set and saved to a PROJECT_ID variable and REGION is set to us-west1:

gcloud config set project [YOUR-PROJECT-ID]
PROJECT_ID=$(gcloud config get-value project)
REGION=us-west1

Enable APIs

Enable all necessary services:

gcloud services enable \
  artifactregistry.googleapis.com \
  cloudfunctions.googleapis.com \
  cloudbuild.googleapis.com \
  eventarc.googleapis.com \
  run.googleapis.com \
  logging.googleapis.com \
  pubsub.googleapis.com

3. HTTP Function

For the first function, let's create an authenticated Node.js function that responds to HTTP requests. Let's also use a 10 minute timeout to showcase how a function can have more time to respond to HTTP requests.

Create

Create a folder for the app and navigate to the folder:

mkdir ~/hello-http && cd $_

Create an index.js file that simply responds to HTTP requests:

const functions = require('@google-cloud/functions-framework');

functions.http('helloWorld', (req, res) => {
  res.status(200).send('HTTP with Node.js in GCF 2nd gen!');
});

Create a package.json file to specify the dependencies:

{
  "name": "nodejs-functions-gen2-codelab",
  "version": "0.0.1",
  "main": "index.js",
  "dependencies": {
    "@google-cloud/functions-framework": "^2.0.0"
  }
}

Deploy

Deploy the function:

gcloud functions deploy nodejs-http-function \
  --gen2 \
  --runtime nodejs16 \
  --entry-point helloWorld \
  --source . \
  --region $REGION \
  --trigger-http \
  --timeout 600s

Although not strictly necessary for this step, note the timeout of 600 seconds. This allows the function to have a longer timeout to respond to HTTP requests.

Once the function is deployed, you can see it under the Cloud Functions section of the Cloud Console.

Test

Test the function with the following command:

gcloud functions call nodejs-http-function \
  --gen2 --region $REGION

You should see the message HTTP with Node.js in GCF 2nd gen! as a response.

4. Pub/Sub Function

For the second function, let's create a Python function triggered by a Pub/Sub message published to a specific topic.

Set up Pub/Sub auth tokens

If you enabled the Pub/Sub service account on or before April 8, 2021, grant the iam.serviceAccountTokenCreator role to the Pub/Sub service account:

PROJECT_NUMBER=$(gcloud projects list --filter="project_id:$PROJECT_ID" --format='value(project_number)')

gcloud projects add-iam-policy-binding $PROJECT_ID \
  --member  serviceAccount:service-$PROJECT_NUMBER@gcp-sa-pubsub.iam.gserviceaccount.com \
  --role roles/iam.serviceAccountTokenCreator

Create

Create a Pub/Sub topic to use for the sample:

TOPIC=cloud-functions-gen2-topic
gcloud pubsub topics create $TOPIC

Create a folder for the app and navigate to the folder:

mkdir ~/hello-pubsub && cd $_

Create a main.py file that simply logs a message containing the CloudEvent ID:

import functions_framework

@functions_framework.cloud_event
def hello_pubsub(cloud_event):
    print('Pub/Sub with Python in GCF 2nd gen! Id: ' + cloud_event['id'])

Create a requirements.txt file with the following contents to specify the dependencies:

functions-framework==3.*
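Note that for Pub/Sub-triggered functions, the message body arrives base64-encoded inside the CloudEvent data. A minimal sketch of decoding it, using a hand-built sample envelope rather than a real event:

```python
import base64

# Hand-built sample of the data payload a Pub/Sub-triggered function
# receives inside the CloudEvent; real events carry more fields
# (messageId, publishTime, attributes, ...).
data = {"message": {"data": base64.b64encode(b"Hello World").decode()}}

# Decode the message body the same way a handler would.
body = base64.b64decode(data["message"]["data"]).decode()
print(body)  # → Hello World
```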

Deploy

Deploy the function:

gcloud functions deploy python-pubsub-function \
  --gen2 \
  --runtime python39 \
  --entry-point hello_pubsub \
  --source . \
  --region $REGION \
  --trigger-topic $TOPIC

Once the function is deployed, you can see it under the Cloud Functions section of the Cloud Console.

Test

Test the function by sending a message to the topic:

gcloud pubsub topics publish $TOPIC --message="Hello World"

You should see the received CloudEvent in the logs:

gcloud functions logs read python-pubsub-function \
  --region $REGION --gen2 --format "value(log)"

5. Cloud Storage Function

For the next function, let's create a Node.js function that responds to events from a Cloud Storage bucket.

Set up

To use Cloud Storage functions, grant the pubsub.publisher IAM role to the Cloud Storage service account:

SERVICE_ACCOUNT=$(gsutil kms serviceaccount -p $PROJECT_NUMBER)

gcloud projects add-iam-policy-binding $PROJECT_ID \
  --member serviceAccount:$SERVICE_ACCOUNT \
  --role roles/pubsub.publisher

Create

Create a folder for the app and navigate to the folder:

mkdir ~/hello-storage && cd $_

Create an index.js file that simply responds to Cloud Storage events:

const functions = require('@google-cloud/functions-framework');

functions.cloudEvent('helloStorage', (cloudevent) => {
  console.log('Cloud Storage event with Node.js in GCF 2nd gen!');
  console.log(cloudevent);
});

Create a package.json file to specify the dependencies:

{
  "name": "nodejs-functions-gen2-codelab",
  "version": "0.0.1",
  "main": "index.js",
  "dependencies": {
    "@google-cloud/functions-framework": "^2.0.0"
  }
}

Deploy

First, create a Cloud Storage bucket (or use an existing bucket you already have):

export BUCKET="gs://gcf-gen2-storage-$PROJECT_ID"
gsutil mb -l $REGION $BUCKET

Deploy the function:

gcloud functions deploy nodejs-storage-function \
  --gen2 \
  --runtime nodejs16 \
  --entry-point helloStorage \
  --source . \
  --region $REGION \
  --trigger-bucket $BUCKET \
  --trigger-location $REGION

Once the function is deployed, you can see it under the Cloud Functions section of the Cloud Console.

Test

Test the function by uploading a file to the bucket:

echo "Hello World" > random.txt
gsutil cp random.txt $BUCKET/random.txt

You should see the received CloudEvent in the logs:

gcloud functions logs read nodejs-storage-function \
  --region $REGION --gen2 --limit=100 --format "value(log)"

6. Cloud Audit Logs Function

For the next function, let's create a Node.js function that receives a Cloud Audit Log event when a Compute Engine VM instance is created. In response, it adds a label to the newly created VM, specifying the creator of the VM.

Determine newly created Compute Engine VMs

Compute Engine emits two Audit Logs when a VM is created.

The first one is emitted at the beginning of the VM creation.

The second one is emitted after the VM creation completes.

Notice the operation field with first: true and last: true values. The second Audit Log contains all the information we need to label an instance, so we use the last: true flag to detect it in the function.
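A minimal sketch of that filter, applied to decoded audit-log payloads. The dictionaries here are hand-built stand-ins for the real protoPayload, which carries many more fields:

```python
# Hand-built stand-ins for the two decoded audit-log payloads Compute
# Engine emits; real payloads carry many more fields.
first_log = {"operation": {"id": "op-1", "first": True}}
last_log = {"operation": {"id": "op-1", "last": True}}

def is_final_log(payload):
    """Return True only for the second (completion) audit log."""
    return payload.get("operation", {}).get("last", False)

print(is_final_log(first_log))  # → False
print(is_final_log(last_log))   # → True
```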

Set up

To use Cloud Audit Log functions, you must enable Audit Logs for Eventarc. You also need to use a service account with the eventarc.eventReceiver role.

  1. Enable Cloud Audit Logs Admin Read, Data Read, and Data Write log types for the Compute Engine API in the IAM & Admin > Audit Logs section of the Cloud Console.

  2. Grant the default Compute Engine service account the eventarc.eventReceiver IAM role:

gcloud projects add-iam-policy-binding $PROJECT_ID \
  --member serviceAccount:$PROJECT_NUMBER-compute@developer.gserviceaccount.com \
  --role roles/eventarc.eventReceiver

Get the code

Clone the repo that contains the application:

git clone https://github.com/GoogleCloudPlatform/eventarc-samples.git

Navigate to the app directory:

cd eventarc-samples/gce-vm-labeler/gcf/nodejs

The index.js file contains the application code that receives the Audit Log wrapped into a CloudEvent. It then extracts the Compute Engine VM instance details and sets a label on the VM instance. Feel free to study index.js in more detail on your own.

Deploy

You can deploy the function with gcloud as before. Notice how the function is filtering on Audit Logs for Compute Engine insertions with the --trigger-event-filters flag:

gcloud functions deploy gce-vm-labeler \
  --gen2 \
  --runtime nodejs16 \
  --entry-point labelVmCreation \
  --source . \
  --region $REGION \
  --trigger-event-filters="type=google.cloud.audit.log.v1.written,serviceName=compute.googleapis.com,methodName=beta.compute.instances.insert" \
  --trigger-location us-central1

You can also deploy the function and add an Eventarc trigger from the Google Cloud Console.

First, go to the Cloud Functions section and create a function with the 2nd gen environment.

Click on the Add Eventarc Trigger button.

This opens up a side panel on the right where you can choose different event providers and events for the Eventarc trigger.

Choose the right event provider and event and then click on Save Trigger.

Finally, on the next page, you can update the index.js and package.json files with the contents of the index.js and package.json files from GitHub and click on the Deploy button.

Test

To test your Audit Log function, you need to create a Compute Engine VM in the Cloud Console (you can also create VMs with gcloud, but that does not appear to generate Audit Logs).

Go to the Compute Engine > VM instances section of Cloud Console and create a new VM. Once the VM creation completes, you should see the added creator label on the VM in the Cloud Console in the Basic information section or using the following command:

gcloud compute instances describe YOUR_VM_NAME

You should see the label in the output like the following example:

...
labelFingerprint: ULU6pAy2C7s=
labels:
  creator: atameldev
...

7. Traffic splitting

Cloud Functions (2nd gen) supports multiple revisions of your functions, splitting traffic between different revisions and rolling your function back to a prior version. This is possible because 2nd gen functions are Cloud Run services under the hood.

In this step, you will deploy 2 revisions of a function and then split the traffic between them 50-50.

Create

Create a folder for the app and navigate to the folder:

mkdir ~/traffic-splitting && cd $_

Create a main.py file with a Python function that reads a color environment variable and responds back with Hello World in that background color:

import os

color = os.environ.get('COLOR')

def hello_world(request):
    return f'<body style="background-color:{color}"><h1>Hello World!</h1></body>'
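Note that color is read at module load time, so each instance picks up COLOR once at cold start. You can sketch that behavior locally; since the function ignores its request argument, passing None works for a local call:

```python
import os

# Simulate a cold start: COLOR must be set before the module-level
# read, mirroring how a deployed instance picks it up at startup.
os.environ["COLOR"] = "orange"

color = os.environ.get("COLOR")  # module-level read, as in main.py

def hello_world(request):
    return f'<body style="background-color:{color}"><h1>Hello World!</h1></body>'

# The function never touches the request object, so None is fine here.
html = hello_world(None)
print("background-color:orange" in html)  # → True
```

This is also why changing the color requires a redeploy: the environment variable is baked in per instance at startup, not re-read per request.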

Deploy

Deploy the first revision of the function with an orange background:

COLOR=orange
gcloud functions deploy hello-world-colored \
  --gen2 \
  --runtime python39 \
  --entry-point hello_world \
  --source . \
  --region $REGION \
  --trigger-http \
  --allow-unauthenticated \
  --update-env-vars COLOR=$COLOR

At this point, if you test the function by viewing the HTTP trigger (the URI output of the deployment command above) in your browser, you should see Hello World with an orange background.

Deploy the second revision with a yellow background:

COLOR=yellow
gcloud functions deploy hello-world-colored \
  --gen2 \
  --runtime python39 \
  --entry-point hello_world \
  --source . \
  --region $REGION \
  --trigger-http \
  --allow-unauthenticated \
  --update-env-vars COLOR=$COLOR

Since this is the latest revision, if you test the function, you should see Hello World with a yellow background.

Split the traffic 50-50

To split the traffic between the orange and yellow revisions, you need the revision IDs of the underlying Cloud Run service. List them with the following command:

gcloud run revisions list --service hello-world-colored \
  --region $REGION --format 'value(REVISION)'

The output should be similar to the following:

hello-world-colored-00001-man
hello-world-colored-00002-wok

Now, split the traffic between these two revisions as follows (update the X-XXX according to your revision names):

gcloud run services update-traffic hello-world-colored \
  --region $REGION \
  --to-revisions hello-world-colored-0000X-XXX=50,hello-world-colored-0000X-XXX=50

Test

Test the function by visiting its public URL. Half of the time you should see the orange revision, and the other half the yellow revision.

See rollbacks, gradual rollouts, and traffic migration for more information.

8. Minimum instances

In Cloud Functions (2nd gen), you can specify a minimum number of function instances to be kept warm and ready to serve requests. This is useful for limiting the number of cold starts.

In this step, you will deploy a function with slow initialization. You'll observe the cold start problem. Then, you will deploy the function with the minimum instance value set to 1 to get rid of the cold start.

Create

Create a folder for the app and navigate to it:

mkdir ~/min-instances && cd $_

Create a main.go file. This Go service has an init function that sleeps for 10 seconds to simulate a long initialization. It also has a HelloWorld function that responds to HTTP calls:

package p

import (
        "fmt"
        "net/http"
        "time"
)

func init() {
        time.Sleep(10 * time.Second)
}

func HelloWorld(w http.ResponseWriter, r *http.Request) {
        fmt.Fprint(w, "Slow HTTP Go in GCF 2nd gen!")
}

Deploy

Deploy the first revision of the function with the default minimum instance value of zero:

gcloud functions deploy slow-function \
  --gen2 \
  --runtime go116 \
  --entry-point HelloWorld \
  --source . \
  --region $REGION \
  --trigger-http \
  --allow-unauthenticated

Test the function with this command:

gcloud functions call slow-function \
  --gen2 --region $REGION

You will observe a 10 second delay (cold start) on the first call and then see the message. Subsequent calls should return immediately.

Set minimum instances

To get rid of the cold start on the first request, redeploy the function with the --min-instances flag set to 1 as follows:

gcloud functions deploy slow-function \
  --gen2 \
  --runtime go116 \
  --entry-point HelloWorld \
  --source . \
  --region $REGION \
  --trigger-http \
  --allow-unauthenticated \
  --min-instances 1

Test

Test the function again:

gcloud functions call slow-function \
  --gen2 --region $REGION

You should no longer see the 10-second delay on the first request. The cold start for the first invocation after a long idle period is gone, thanks to minimum instances!

See using minimum instances for more information.

9. Concurrency

In Cloud Functions (2nd gen), a function instance handles one concurrent request by default, but you can specify the number of concurrent requests an instance can process simultaneously. This can also be useful for preventing cold starts, since a new function instance does not need to be created for every parallel request.

In this step, you will use the function with slow initialization from the previous step. You will send it 10 requests and observe the cold start problem again as new function instances need to be created to handle the requests.

To fix the cold-start problem, you will deploy another function with a concurrency value of 100. You will observe that the 10 requests now do not cause the cold start problem and a single function instance can handle all the requests.
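The relationship is simple back-of-the-envelope math: the number of instances needed is roughly the concurrent request count divided by the per-instance concurrency, rounded up.

```python
import math

def instances_needed(concurrent_requests, concurrency_per_instance):
    """Rough estimate of instances required to absorb a burst."""
    return math.ceil(concurrent_requests / concurrency_per_instance)

# Default concurrency of 1: every parallel request needs its own instance.
print(instances_needed(10, 1))    # → 10

# With concurrency raised to 100, one instance absorbs the whole burst.
print(instances_needed(10, 100))  # → 1
```

This ignores CPU contention within an instance, but it explains what you will observe with hey below: 10 cold instances at concurrency 1, versus a single warm instance at concurrency 100.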

Test without concurrency

Get the URL of the function:

SLOW_URL=$(gcloud functions describe slow-function --region $REGION --gen2 --format="value(serviceConfig.uri)")

Use an open source benchmarking tool called hey to send 10 concurrent requests to the slow function. hey is already installed in Cloud Shell:

hey -n 10 -c 10 $SLOW_URL

You should see in the output of hey that most requests took around 10 seconds:

Summary:
  Total:        10.9053 secs
  Slowest:      10.9048 secs
  Fastest:      0.4439 secs
  Average:      9.7930 secs
  Requests/sec: 0.9170

  Total data:   310 bytes
  Size/request: 31 bytes

Response time histogram:
  0.444 [1]     |■■■■
  1.490 [0]     |
  2.536 [0]     |
  3.582 [0]     |
  4.628 [0]     |
  5.674 [0]     |
  6.720 [0]     |
  7.767 [0]     |
  8.813 [0]     |
  9.859 [0]     |
  10.905 [9]    |■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■

This is because more function instances were created to handle the requests. If you check the active instance count for the function, you will see that more than one instance was created at some point; these are what caused the cold starts.

Deploy

Deploy a new function identical to the previous function. Once deployed, you will increase its concurrency:

gcloud functions deploy slow-concurrent-function \
  --gen2 \
  --runtime go116 \
  --entry-point HelloWorld \
  --source . \
  --region $REGION \
  --trigger-http \
  --allow-unauthenticated \
  --min-instances 1

Set concurrency

Set the concurrency of the underlying Cloud Run service for the function to 100 (the maximum is 1000). This ensures that up to 100 concurrent requests can be handled by a single function instance:

gcloud run services update slow-concurrent-function \
  --concurrency 100 \
  --cpu 1 \
  --region $REGION 

Test with concurrency

Get the URL of the function:

SLOW_CONCURRENT_URL=$(gcloud functions describe slow-concurrent-function --region $REGION --gen2 --format="value(serviceConfig.uri)")

Then, use hey to send 10 concurrent requests:

hey -n 10 -c 10 $SLOW_CONCURRENT_URL

You should see in the output of hey that all requests are processed quickly:

Summary:
  Total:        0.2164 secs
  Slowest:      0.2163 secs
  Fastest:      0.0921 secs
  Average:      0.2033 secs
  Requests/sec: 46.2028

  Total data:   310 bytes
  Size/request: 31 bytes

Response time histogram:
  0.092 [1]     |■■■■
  0.105 [0]     |
  0.117 [0]     |
  0.129 [0]     |
  0.142 [0]     |
  0.154 [0]     |
  0.167 [0]     |
  0.179 [0]     |
  0.191 [0]     |
  0.204 [0]     |
  0.216 [9]     |■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■

A single function instance was able to handle all the requests and the cold start problem is gone, thanks to the increased concurrency!

See concurrency for more information.

10. Congratulations!

Congratulations on completing the codelab!

What we've covered

  • Overview of Cloud Functions (2nd gen).
  • How to write a function that responds to HTTP calls.
  • How to write a function that responds to Pub/Sub messages.
  • How to write a function that responds to Cloud Storage events.
  • How to write a function that responds to Cloud Audit Logs.
  • How to split traffic between two revisions.
  • How to get rid of cold starts with minimum instances.
  • How to set concurrency.