Use Stackdriver Logging and Stackdriver Trace for Cloud Functions

1. Introduction

In this codelab, you'll learn to leverage the logging and monitoring tools available to all developers who work with Cloud Functions. The tools come with every Cloud Function that you deploy across all supported languages and should enable you to be more productive when writing and operating your serverless code.


You'll use an HTTP-triggered Cloud Function, but everything that you cover also applies to other languages and to Cloud Functions triggered by other events.

2. Setup and requirements

Self-paced environment setup

  1. Sign in to Cloud Console and create a new project or reuse an existing one. (If you don't already have a Gmail or G Suite account, you must create one.)




Remember the project ID, a unique name across all Google Cloud projects (any example ID you see has already been taken and will not work for you, sorry!). It will be referred to later in this codelab as PROJECT_ID.

  2. Next, you'll need to enable billing in Cloud Console in order to use Google Cloud resources.

Running through this codelab shouldn't cost much, if anything at all. Be sure to follow any instructions in the "Cleaning up" section, which advise you how to shut down resources so you don't incur billing beyond this tutorial. New users of Google Cloud are eligible for the $300 USD Free Trial program.

Cloud Shell

While Cloud Functions and its logging and monitoring capabilities can be used remotely from your laptop, you'll use Cloud Shell, a command-line environment running in Google Cloud.

This Debian-based virtual machine is loaded with all the development tools you'll need. It offers a persistent 5GB home directory and runs in Google Cloud, greatly enhancing network performance and authentication. This means that all you will need for this codelab is a browser (yes, it works on a Chromebook).

  1. To activate Cloud Shell from the Cloud Console, simply click Activate Cloud Shell (it should only take a few moments to provision and connect to the environment).



Once connected to Cloud Shell, you should see that you are already authenticated and that the project is already set to your PROJECT_ID.

gcloud auth list

Command output

Credentialed accounts:
 - <myaccount>@<mydomain>.com (active)

gcloud config list project

Command output

project = <PROJECT_ID>

If, for some reason, the project is not set, simply issue the following command:

gcloud config set project <PROJECT_ID>

Looking for your PROJECT_ID? Check out what ID you used in the setup steps or look it up in the Cloud Console dashboard:


Cloud Shell also sets some environment variables by default, which may be useful as you run future commands.



  3. Finally, set the default zone and project configuration.

gcloud config set compute/zone us-central1-f

You can choose a variety of different zones. For more information, see Regions & Zones.

3. Deploy a simple Cloud Function

In order to have something to monitor, create a "Hello, World" Cloud Function. In the left menu of the Google Cloud Console, click on Cloud Functions, then click Create Function.


Enter "hello-monitor" as the name of your new Cloud Function.


Keep all the defaults for the source code. (However, you can choose a different language/runtime if you'd like.)


Finally, click Create.


You should see your Cloud Function listed with a green checkmark next to it, which means that it's ready to be invoked.


4. Test the Cloud Function and send traffic using a load generator

Now that the Cloud Function has been successfully deployed, test it from the command line.

First, using Cloud Shell, run the following command:

$ gcloud functions describe hello-monitor

That should return a description of the Cloud Function, including a URL for httpsTrigger, which is the HTTP(S) endpoint to invoke the Cloud Function. It should look like the following: https://<region>-<project-id>

Triggering the Cloud Function should now be as simple as using the curl command on that URL.

$ curl https://<region>-<project-id>
Hello World!

Now use Vegeta, a simple HTTP load-testing tool. To install it, from Cloud Shell, simply type the following command:

$ go get -u

To send some traffic to your Cloud Function (five requests per second for a couple of minutes), use the following command:

$ echo "GET https://<region>-<project-id>" \
   | vegeta attack -rate=5 -duration=120s \
   > results.bin
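As a quick sanity check, the `-rate` and `-duration` flags together fix the total number of requests Vegeta will send. This small arithmetic sketch (not part of the codelab itself) spells it out:

```javascript
// Request budget implied by vegeta's flags: -rate=5 -duration=120s
const rate = 5;              // requests per second
const durationSeconds = 120; // total attack duration
console.log(rate * durationSeconds); // 600 total requests
```

So the attack above produces 600 invocations of your Cloud Function, plenty of data points for the logging and tracing views that follow.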

5. Navigate through the logs

From your Cloud Function details view, click View Logs.


That should lead you to the Stackdriver Logging section of your project, showing you only your Cloud Function logs.


All requests to your Cloud Function should return a 200 status code.

When viewing the logs, you can do the following:

  • Filter by log level (in your case, all logs are debug level).
  • Select a specific timeframe (relative or absolute).
  • Enable log streaming (with Play at the top of the screen).
  • Copy a link to the log entry (for sharing with team members).
  • Show a log entry in resource context.
  • Pin a log entry (as a visual cue).
  • Export logs to BigQuery, Cloud Storage, or Pub/Sub (or simply download them as JSON or CSV files).
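If you download logs as a JSON file, a few lines of Node.js are enough to slice the export locally. The entry shape below is a simplified sketch for illustration only, not the exact Stackdriver log-entry schema:

```javascript
// Filter a downloaded JSON log export by severity.
// Entry shape is simplified for illustration -- not the exact Stackdriver schema.
const entries = [
  { severity: 'DEBUG', textPayload: 'Function execution started' },
  { severity: 'DEBUG', textPayload: 'Function execution took 12 ms, finished with status code: 200' },
  { severity: 'ERROR', textPayload: 'Unhandled rejection' },
];

// Keep only the ERROR-level entries.
const errors = entries.filter(e => e.severity === 'ERROR');
console.log(errors.length); // 1
```

The same filter expression works on a real export once you load it with `JSON.parse`.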

6. Update the function

Using the Cloud Console, navigate to the Function details view and observe the spike that you created with your load tester in the number of invocations per second and their execution time.


Stackdriver Trace is a more detailed tool for observing latency and RPC calls, but before you can use it, you need to make a few changes to your Cloud Function. Do the following:

  1. Add the life-saving node-emoji package as a dependency.
  2. Update the function code to use the node-emoji module and introduce some latency.
  3. Add an environment variable to enable Stackdriver Trace for Cloud Functions.

From Function details, click Edit to modify the function.


Edit the package.json file to add a dependency for the node-emoji package.

{
  "name": "sample-http",
  "version": "0.0.1",
  "dependencies": {
    "node-emoji": "^1.8.1"
  }
}

Edit the actual function by changing the content of index.js to the following:

const emoji = require('node-emoji');

exports.helloWorld = (req, res) => {
  let message = req.query.message || req.body.message || 'Hello World!';

  // add some latency, functions are too quick!
  setTimeout(function() {
     message += emoji.random().emoji;
     res.status(200).send(message);
  }, (3 * 100));  // 300ms
};

That adds a random emoji to the message returned by the Cloud Function after it pauses for 300 milliseconds.
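If you want to sanity-check that behavior before deploying, the handler pattern can be exercised locally with mock request/response objects. This is a sketch only: node-emoji is stubbed here (with a fixed emoji instead of a random one) so it runs without any npm installs, and the mock `res` mimics just the two Express methods the function uses:

```javascript
// Local sketch of the handler above, runnable without any npm installs.
// node-emoji is stubbed for illustration; the real module picks a random emoji.
const emoji = { random: () => ({ emoji: '✨' }) };

const helloWorld = (req, res) => {
  let message = req.query.message || req.body.message || 'Hello World!';

  // add some latency, functions are too quick!
  setTimeout(function() {
     message += emoji.random().emoji;
     res.status(200).send(message);
  }, (3 * 100));  // 300ms
};

// Mock request/response objects to invoke the function locally.
const req = { query: {}, body: {} };
const res = {
  status(code) { this.code = code; return this; },
  send(body) { this.body = body; console.log(`${this.code}: ${body}`); },
};

helloWorld(req, res); // logs the status and message after ~300ms
```

Running it with `node index-local.js` (any file name works) should print the greeting plus an emoji roughly 300 milliseconds after invocation.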

Finally, add a Cloud Function environment variable called GOOGLE_CLOUD_TRACE_ENABLED and set it to true as follows:


Click Save.

Go back to Cloud Shell and recall the command to generate some load on the newly deployed Cloud Function:

$ echo "GET https://<region>-<project-id>" \
   | vegeta attack -rate=5 -duration=120s \
   > results.bin

Now you're ready to observe the list of traces produced with no other setup requirements and no specific tracing library in your code!

7. Trace the updated Cloud Function

Using the left-hand menu, navigate to Trace list (under Stackdriver Trace).


You should see something similar to the following screenshot:


That should make it quite obvious that the latency introduced in your Cloud Function is indeed measured at 300 milliseconds.

Each dot on the graph is a request for which you can view detailed information, such as timestamp, the HTTP method and status, its labels, a link to the corresponding log entry, and any subsequent RPC call that the Cloud Function makes.


If you'd like to zoom in, simply click and drag on the graph to select a custom time range.

To zoom out, click Undo Zoom at the top of the page.

Because you deployed a single Cloud Function, the graph shows only GET requests on the hello-monitor URI, but you can filter traces by HTTP method (GET, POST, DELETE), by HTTP status (2XX, 3XX), or by using the request filter.

Navigate to Overview in the left-hand menu:


From this overview page, you can find recent traces and other insights.


You can also create custom reports based on a combination of a URI request filter, an HTTP method, an HTTP status, and a time range. You can even compare the generated values to a time baseline.


If you manage to set up the correct time ranges with enough data points, you can produce a report that shows the important latency shift between the initial Cloud Function and the new one.



Such a custom report can be used to find when a performance problem was introduced and to track a service level indicator (SLI), such as end-user request latency.

8. Time to clean up the resources

That concludes the codelab!

While Cloud Functions is a serverless platform that doesn't incur costs when not in use, be a good cloud citizen and delete your Cloud Function. Simply select hello-monitor in Overview under Cloud Functions and click Delete.


9. What's next?

Here's some follow-up reading: