Actions on Google is a developer platform that lets you create software to extend the functionality of the Google Assistant, Google's virtual personal assistant, across more than 500 million devices, including smart speakers, phones, cars, TVs, headphones, and more. Users engage the Google Assistant in conversation to get things done, like buying groceries or booking a ride. (For a complete list of what's possible now, see the Actions directory.) As a developer, you can use Actions on Google to easily create and manage delightful, effective conversational experiences between users and your own third-party service.

This codelab module is part of a multi-module tutorial. Each module can be taken standalone or in a learning sequence with other modules. In each module, you'll be provided with end-to-end instructions on how to build an Action from given software requirements. We'll also teach the necessary concepts and best practices for implementing Actions that give users high-quality conversational experiences.

This codelab covers intermediate level concepts for developing with Actions on Google. We strongly recommend that you familiarize yourself with the topics covered in Build Actions for the Google Assistant (Level 1) before starting this codelab.

What you'll build

In this codelab, you'll build a sophisticated conversational Action with multiple features:

What you'll learn

What you'll need

The following tools must be in your environment:

Familiarity with JavaScript (ES6) is strongly recommended, although not required, to understand the webhook code used in this codelab.

Optional: Get the sample code

You can optionally get the full project code for this codelab from our GitHub repository.

The Firebase Command Line Interface (CLI) will allow you to deploy your Actions project to Cloud Functions.

To install or upgrade the CLI, run the following npm command:

npm install -g firebase-tools

To verify that the CLI has been installed correctly, open a terminal and run:

firebase --version

Make sure the version of the Firebase CLI is above 3.5.0 so that it has all the latest features required for Cloud Functions. If not, run npm install -g firebase-tools to upgrade as shown above.

Authorize the Firebase CLI by running:

firebase login

In Build Actions for the Google Assistant (Level 1), we used the Dialogflow inline editor to quickly get you started on your first Actions project.

For this codelab, you're going to start with the Dialogflow intents from the Level 1 codelab, but develop and deploy the webhook locally on your machine using Cloud Functions for Firebase.

Why develop your Actions locally?

Developing on your local machine, in contrast to using the Dialogflow inline editor, gives you more control over your programming and deployment environment. This provides several advantages:

Download your base files

To get the base files for this codelab, run the following command to clone the GitHub repository for the Level 1 codelab:

git clone https://github.com/actions-on-google/codelabs-nodejs

This repository contains the following important files:

For the sake of clarity, we strongly recommend you rename the /level1-complete directory to /level2. You can do so with the mv command in your terminal. For example:

$ cd codelabs-nodejs
$ mv ./level1-complete ./level2

Set up your project and agent

Next, you'll need to set up the Actions project and the Dialogflow agent for your codelab.

If you've already completed the Build Actions for the Google Assistant (Level 1) codelab, do the following:

  1. Go directly to the Dialogflow Console.
  2. Navigate to Fulfillment.
  3. Disable Inline Editor.

If you're starting from scratch, do the following:

  1. Open the Actions Console.
  2. Click on Add/import project.
  3. Type in a Project name, like "actions-codelab-2". This name is for your own internal reference; later on, you can set an external name for your project.
  4. Click Create Project.
  5. Rather than pick a category, click Skip in the upper-right corner.
  6. Click Build > Actions in the left nav.
  7. Click Add your first Action.
  8. Select at least one language for your Action, then click Update. For this codelab, we recommend selecting only English.
  9. On the Custom intent dialog, click Build to launch the Dialogflow Console.
  10. Click Create.
  11. Click the gear icon on the left navigation.
  12. Click Export and Import.
  13. Click Restore From Zip.

  14. Upload the codelab-level-one.zip file.
  15. Type "RESTORE" and click the Restore button.
  16. Click Done.

Deploy your fulfillment

Now that your Actions project and Dialogflow agent are ready, do the following to deploy your local index.js file using the Firebase Functions CLI:

  1. In a terminal, navigate to the /level2/functions directory of your base files clone.
  2. Using the Actions project ID you set, run the following command:
firebase use <PROJECT_ID>

  3. Run the following command in the terminal to install dependencies:
npm install
  4. Run the following command in the terminal to deploy your webhook to Firebase:
firebase deploy

After a few minutes, you should see "Deploy complete!" indicating that you've successfully deployed your webhook to Firebase.

Retrieve the deployment URL

You need to provide Dialogflow with the URL to the cloud function. To retrieve this URL, follow these steps:

  1. Open the Firebase Console.
  2. Select your Actions project from the list of options.
  3. Navigate to Develop > Functions on the left navigation bar.
  4. Under the Dashboard tab, you should see an entry for "dialogflowFirebaseFulfillment" with a URL under Event. Copy this URL.

Set the URL in Dialogflow

Now you need to update your Dialogflow agent to use your webhook for fulfillment. To do so, follow these steps:

  1. Open the Dialogflow Console (you can close the Firebase console if you'd like).
  2. Navigate to Fulfillment on the left navigation.
  3. Enable Webhook.
  4. Paste the URL you copied from the Firebase dashboard if it doesn't already appear.
  5. Click Save.

Verify your project is correctly set up

At this point, users can start a conversation by explicitly invoking your Action. Once users are mid-conversation, they can trigger the 'favorite color' custom intent by providing a color. Dialogflow parses the user's input to extract the information your fulfillment needs (namely, the color) and sends it to your fulfillment. Your fulfillment then generates a lucky number to send back to the user.
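
For reference, this is the 'favorite color' handler already in your base files' index.js (built in the Level 1 codelab); the lucky number is simply the length of the color string, so "blue" yields 4:

// The Level 1 'favorite color' handler: the lucky number is the length of
// the color string the user provided.
app.intent('favorite color', (conv, {color}) => {
    const luckyNumber = color.length;
    conv.close('Your lucky number is ' + luckyNumber);
});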

To test out your Action in the Actions console simulator:

  1. In the Dialogflow Console left navigation, click on Integrations > Google Assistant.
  2. Make sure Auto-preview changes is enabled and click Test to update your Actions project.
  3. The Actions Console simulator loads your Actions project. To test your Action, type "Talk to my test app" into the Input field and hit enter.
  4. You should see a response: "Welcome! What is your favorite color?"
  5. Type "blue".
  6. You should see a response: "Your lucky number is 4."

Your Actions project always has an invocation name, like "Google IO 18". When users say "Talk to Google IO 18", this triggers the Dialogflow welcome intent. Every Dialogflow agent has one welcome intent, which acts as the entry point for users to start conversations.

Most of the time, users would rather jump to the specific task they want to accomplish than start at the beginning of the conversation every time. You can provide explicit deep links and implicit invocations as shortcuts into the conversation to help users get things done more efficiently.

Adding deep links and implicit invocations to your Actions is a simple, single-step process using the Google Assistant integration page in the Dialogflow Console.

Add intent for deep linking and implicit invocation

In your Actions project, you should have defined a custom Dialogflow intent called 'favorite color' in an agent (this was covered in the Level 1 codelab). The agent parses your training phrases, like "I love yellow" and "Purple is my favorite," extracts the color parameter from each phrase, and makes it available to your fulfillment.

For this codelab, you're going to add the 'favorite color' intent as an implicit invocation, meaning that users can invoke that intent and skip the welcome intent. Doing this also enables users to explicitly invoke the 'favorite color' intent as a deep link (for example, "Hey Google, talk to my test app about blue"). The training phrases and parameters you defined for the 'favorite color' intent enable Dialogflow to extract the color parameter when users invoke this deep link.

To add your intent for deep linking and implicit invocation, do the following:

  1. In the Dialogflow Console left navigation, click on Integrations.
  2. In the Google Assistant card, click Integration Settings.
  3. Under Discovery > Implicit Invocations, click Add intent, then select favorite color.

The Assistant will now listen for users to provide a color in their invocation and extract the color parameter for your fulfillment.

Test your deep link

To test out your deep link in the Actions console simulator:

  1. Click Test to update your Actions project.
  2. Type "Talk to my test app about blue" into the Input field and hit enter.

Define a custom fallback intent

It's good practice to create a custom fallback intent to handle invocation phrases that don't provide the parameters you're looking for. For example, instead of saying a color, the user might say something unexpected like "Talk to my test app about bananas". The term "bananas" wouldn't match any of our Dialogflow intents, so we need to build a catch-all intent.

Since the Assistant now listens for any phrase that matches the 'favorite color' intent, you should provide a custom fallback intent specifically to catch everything else.

To set up your custom fallback intent, do the following:

  1. In the Dialogflow Console, click on Intents in the left navigation, then click Create Intent.
  2. Name your intent 'Unrecognized Deep Link' or something equivalent. This intent name won't be referenced in your webhook, so you can call it whatever you like.
  3. Under Contexts, click Add input context and type "google_assistant_welcome". By specifying the 'google_assistant_welcome' input context, you ensure this intent can only be invoked at the start of the conversation. After you've entered your input context, "google_assistant_welcome" will appear as an output context as well. Click the x to remove that output context.
  4. Under Training phrases, add "banana" (or any other noun) as a user expression.
  5. We'll use the @sys.any entity to tell Dialogflow to generalize the expression to any grammar (not just "banana"). Double-click on "banana" and filter for or select @sys.any.
  6. A warning message will pop up advising against using the @sys.any entity. You can safely ignore this for now; click OK. (Generally, it's not advisable to use the @sys.any entity, since it can overpower other intents' speech biasing, but this is a special case: we've ensured this intent can only be triggered at invocation time, when no other intent has matched.)
  7. Under Responses, add "Sorry, I am not sure about $any. What's your favorite color?" as a Text response.
  8. Click Save.

Test your custom fallback intent

To test out your custom fallback intent in the Actions console simulator, type "Talk to my test app about banana" into the Input field and hit enter.

You can make your Actions more engaging and interactive by using personalized information from the user. To request access to user information, your webhook can use helper intents to obtain values with which to personalize your responses.

Get user information using permission helper intent

You can use the actions_intent_PERMISSION helper intent to obtain the user's display name, with their permission. To use the permission helper intent:

  1. In the Dialogflow Console, navigate to Intents.
  2. Select the Default Welcome Intent.
  3. Under Fulfillment, turn on Enable webhook call for this intent. Note that the response from the webhook will override any response you typed into Text responses above.
  4. Click Save.
  5. Navigate back to Intents.
  6. Click Create Intent.
  7. Name your intent "actions_intent_PERMISSION".
  8. Under Events, click Add event and type "actions_intent_PERMISSION".
  9. Under Fulfillment, turn on Enable webhook call for this intent.
  10. Click Save.
  11. In the terminal, navigate to the /level2/functions folder and open the index.js file in any text editor on your local machine.
  12. Replace this code:
const {dialogflow} = require('actions-on-google');

with this:

index.js

// Import the Dialogflow module from the Actions on Google client library.
const {
  dialogflow,
  Permission,
} = require('actions-on-google');
  13. Append the following code to the end of the file:

index.js

// Handle the Dialogflow intent named 'Default Welcome Intent'.
app.intent('Default Welcome Intent', (conv) => {
  conv.ask(new Permission({
    context: 'Hi there, to get to know you better',
    permissions: 'NAME'
  }));
});
  14. Save your file.
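
Note that the permissions option shown above also accepts an array if you need to request more than one permission at once (for example, ['NAME', 'DEVICE_COARSE_LOCATION']); this codelab only needs the user's name.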

Customize responses with user information

Next, you'll need to update your webhook to handle the response. You'll use the user's information in your response if they granted permission, and gracefully move the conversation forward if they didn't.

To respond to the user:

  1. Add the following code to index.js:

index.js

// Handle the Dialogflow intent named 'actions_intent_PERMISSION'. If user
// agreed to PERMISSION prompt, then boolean value 'permissionGranted' is true.
app.intent('actions_intent_PERMISSION', (conv, params, permissionGranted) => {
  if (!permissionGranted) {
    conv.ask(`Ok, no worries. What's your favorite color?`);
  } else {
    conv.data.userName = conv.user.name.display;
    conv.ask(`Thanks, ${conv.data.userName}. What's your favorite color?`);
  }
});

You register a callback function to handle the actions_intent_PERMISSION intent you created earlier. In the callback, you first check whether the user granted permission to know their display name. The client library passes this argument to the callback function as the third parameter, here called permissionGranted.

The conv.user.name.display value represents the user's display name sent to our webhook as part of the HTTP request body. If the user grants permission, you store the value of conv.user.name.display in a property called userName of the conv.data object.

The conv.data object is a data structure provided by the client library for in-dialog storage. You can set and manipulate the properties on this object throughout the duration of the conversation for this user.
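
For example, here's a minimal sketch (not part of this codelab's webhook) that uses conv.data to count how many times a fallback intent fires and end the conversation after repeated misses:

// Minimal sketch (not part of this codelab's webhook). conv.data persists
// across turns of a single conversation, so you can accumulate state on it.
app.intent('Default Fallback Intent', (conv) => {
  conv.data.fallbackCount = (conv.data.fallbackCount || 0) + 1;
  if (conv.data.fallbackCount > 2) {
    conv.close(`Sorry, I'm having trouble. Let's try again later.`);
  } else {
    conv.ask(`Sorry, what was that?`);
  }
});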

  2. Replace this code:
app.intent('favorite color', (conv, {color}) => {
    const luckyNumber = color.length;
    conv.close('Your lucky number is ' + luckyNumber);
});

with this:

index.js

// Handle the Dialogflow intent named 'favorite color'.
// The intent collects a parameter named 'color'
app.intent('favorite color', (conv, {color}) => {
  const luckyNumber = color.length;
  if (conv.data.userName) {
    conv.close(`${conv.data.userName}, your lucky number is ${luckyNumber}.`);
  } else {
    conv.close(`Your lucky number is ${luckyNumber}.`);
  }
});

Here you modify the callback function for the 'favorite color' intent to use the userName property to address the user by name. If the conv.data object doesn't have a userName property (that is, the user denied permission to know their name, so the property was never set), your webhook still responds, but without the user's name.

  3. Save your file.
  4. In the terminal, run the following command to deploy your webhook to Firebase:
firebase deploy

Test your code

To test out your Action in the Actions console simulator:

  1. Type "Talk to my test app" into the Input field and hit enter.
  2. Type "yes".
  3. Type "blue".

You can embed SSML in your response strings to alter the sound of your spoken responses, or even embed sound effects or other audio clips.

The following shows an example of SSML markup:

<speak>
  Mandy, your lucky number is 5.
  <audio src="https://actions.google.com/sounds/v1/cartoon/clang_and_wobble.ogg"></audio>
</speak>
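
To send SSML from a webhook built on the Actions on Google client library, you pass the whole markup string, including the <speak> wrapper, as the response text inside an intent handler. A minimal sketch (the name and number here are illustrative):

// Minimal sketch, inside an intent handler: the <speak> wrapper tells the
// Assistant to parse the response as SSML instead of plain text.
conv.close(`<speak>Mandy, your lucky number is 5. ` +
  `<audio src="https://actions.google.com/sounds/v1/cartoon/clang_and_wobble.ogg">` +
  `</audio></speak>`);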

Use SSML to add sound effects

For this codelab, we'll use a sound clip from the Actions on Google sound library.

To add a sound effect to the 'favorite color' response, do the following:

  1. Open index.js in an editor.
  2. Replace this code:
// Handle the Dialogflow intent named 'favorite color'.
// The intent collects a parameter named 'color'
app.intent('favorite color', (conv, {color}) => {
  const luckyNumber = color.length;
  if (conv.data.userName) {
    conv.close(`${conv.data.userName}, your lucky number is ${luckyNumber}.`);
  } else {
    conv.close(`Your lucky number is ${luckyNumber}.`);
  }
});

with this:

index.js

// Handle the Dialogflow intent named 'favorite color'.
// The intent collects a parameter named 'color'
app.intent('favorite color', (conv, {color}) => {
  const luckyNumber = color.length;
  const audioSound = 'https://actions.google.com/sounds/v1/cartoon/clang_and_wobble.ogg';
  if (conv.data.userName) {
    // If we collected user name previously, address them by name and use SSML
    // to embed an audio snippet in the response.
    conv.close(`<speak>${conv.data.userName}, your lucky number is ` +
      `${luckyNumber}<audio src="${audioSound}"></audio>.</speak>`);
  } else {
    conv.close(`<speak>Your lucky number is ${luckyNumber}` +
      `<audio src="${audioSound}"></audio>.</speak>`);
  }
});

Here, you declare an audioSound variable containing the string URL for a statically hosted audio file on the web. You use the <speak> SSML tags around the strings for the user response, indicating to the Google Assistant that your response should be parsed as SSML.

The <audio> tag embedded in the string tells the Assistant to play an audio clip at that point in the response. The src attribute of that tag indicates where the audio is hosted.

  3. Save your file.
  4. In the terminal, run the following command to deploy your webhook to Firebase:
firebase deploy

Test your code

To test out your Action in the Actions console simulator:

  1. Type "Talk to my test app" into the Input field and hit enter.
  2. Type "yes".
  3. Type "blue". If everything works correctly, the user should hear this sound effect in the response.

To keep the conversation going, you can add follow-up intents that trigger based on the user's response to a particular intent. To add follow-up intents to 'favorite color', do the following:

  1. In the Dialogflow Console left navigation, click on Intents.
  2. Hover your cursor over favorite color, then click Add follow-up intent. Do this twice, once selecting yes and again selecting no.
  3. Under favorite color - no, do the following:
  4. Under favorite color - yes, do the following: (an optional webhook sketch covering both follow-ups appears below)
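
If you'd rather fulfill these follow-up intents from your webhook than with static Text responses, a minimal sketch might look like the following (the intent names assume Dialogflow's default follow-up naming; the prompts are illustrative, not the codelab's own). Remember to turn on Enable webhook call for this intent for any intent you handle this way.

// Illustrative handlers for the follow-up intents; by default, Dialogflow
// names follow-ups '<parent intent> - yes' and '<parent intent> - no'.
app.intent('favorite color - yes', (conv) => {
  conv.ask(`Great! Which fake color would you like to hear about?`);
});

app.intent('favorite color - no', (conv) => {
  conv.close(`OK, no problem. Have a nice day!`);
});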

As you expand your conversational app, you can use custom entities to further deepen and personalize the conversation. This section shows you how.

Add a custom entity

So far, you've only been using built-in entities (@sys.color, @sys.any) to match user input. You're going to create a custom entity (also called a developer entity) in Dialogflow so that, when a user provides one of a few fake colors, you can follow up with a custom response from your webhook.

To create a custom entity:

  1. In the Dialogflow Console left navigation, click on Entities.
  2. Click Create entity and call it "fakeColor".
  3. Add the following entries, then click Save: "indigo taco", "pink unicorn", and "blue grey coffee". (Dialogflow automatically includes each entry's reference value as a synonym.)
  4. In the left navigation, click on Intents.
  5. Click Create intent and call it "favorite fake color".
  6. Under Training phrases, add a phrase that includes one of your fake colors, for example "My favorite fake color is indigo taco".

You should see the "fakeColor" parameter show up under Actions and parameters now that Dialogflow recognizes your custom entity.

  7. Under Fulfillment, turn on Enable webhook call for this intent.
  8. Click Save.

When a user selects one of the fake colors you've defined, your webhook will respond with basic cards that show each color.

To configure your webhook:

  1. On your local machine, navigate to the /level2/functions folder and open index.js in an editor.
  2. Replace this code:
const {
  dialogflow,
  Permission,
} = require('actions-on-google');

with this:

index.js

// Import the Dialogflow module and response creation dependencies
// from the Actions on Google client library.
const {
  dialogflow,
  BasicCard,
  Permission,
} = require('actions-on-google');
  3. Replace this code:
// Handle the Dialogflow intent named 'favorite color'.
// The intent collects a parameter named 'color'
app.intent('favorite color', (conv, {color}) => {
  const luckyNumber = color.length;
  const audioSound = 'https://actions.google.com/sounds/v1/cartoon/clang_and_wobble.ogg';
  if (conv.data.userName) {
    // If we collected user name previously, address them by name and use SSML
    // to embed an audio snippet in the response.
    conv.close(`<speak>${conv.data.userName}, your lucky number is ` +
      `${luckyNumber}<audio src="${audioSound}"></audio>.</speak>`);
  } else {
    conv.close(`<speak>Your lucky number is ${luckyNumber}` +
      `<audio src="${audioSound}"></audio>.</speak>`);
  }
});

with this:

index.js

// Handle the Dialogflow intent named 'favorite color'.
// The intent collects a parameter named 'color'.
app.intent('favorite color', (conv, {color}) => {
  const luckyNumber = color.length;
  const audioSound = 'https://actions.google.com/sounds/v1/cartoon/clang_and_wobble.ogg';
  if (conv.data.userName) {
    // If we collected user name previously, address them by name and use SSML
    // to embed an audio snippet in the response.
    conv.ask(`<speak>${conv.data.userName}, your lucky number is ` +
      `${luckyNumber}.<audio src="${audioSound}"></audio>` +
      `Would you like to hear some fake colors?</speak>`);
  } else {
    conv.ask(`<speak>Your lucky number is ${luckyNumber}.` +
      `<audio src="${audioSound}"></audio>` +
      `Would you like to hear some fake colors?</speak>`);
  }
});
  4. Add the following code at the end of the file:

index.js

// Define a mapping of fake color strings to basic card objects.
const colorMap = {
  'indigo taco': new BasicCard({
    title: 'Indigo Taco',
    image: {
      url: 'https://storage.googleapis.com/material-design/publish/material_v_12/assets/0BxFyKV4eeNjDN1JRbF9ZMHZsa1k/style-color-uiapplication-palette1.png',
      accessibilityText: 'Indigo Taco Color',
    },
    display: 'WHITE',
  }),
  'pink unicorn': new BasicCard({
    title: 'Pink Unicorn',
    image: {
      url: 'https://storage.googleapis.com/material-design/publish/material_v_12/assets/0BxFyKV4eeNjDbFVfTXpoaEE5Vzg/style-color-uiapplication-palette2.png',
      accessibilityText: 'Pink Unicorn Color',
    },
    display: 'WHITE',
  }),
  'blue grey coffee': new BasicCard({
    title: 'Blue Grey Coffee',
    image: {
      url: 'https://storage.googleapis.com/material-design/publish/material_v_12/assets/0BxFyKV4eeNjDZUdpeURtaTUwLUk/style-color-colorsystem-gray-secondary-161116.png',
      accessibilityText: 'Blue Grey Coffee Color',
    },
    display: 'WHITE',
  }),
};

app.intent('favorite fake color', (conv, {fakeColor}) => {
  conv.close(`Here's the color`, colorMap[fakeColor]);
});

This new code performs two main tasks:

First, it sets up a mapping (colorMap) of fake color strings (e.g. "indigo taco", "pink unicorn", "blue grey coffee") to BasicCard objects. BasicCard is a client library class for constructing visual responses corresponding to the basic card type.

In the constructor calls, you pass configuration options relevant to each specific color, including the card's title, its image (a URL and accessibility text), and the display option.

Second, it registers a callback function for the 'favorite fake color' intent, which uses the fakeColor value the user provided to look up the card corresponding to that fake color and present it to the user.
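
Optionally, you could harden this handler against a fakeColor value that isn't a key in colorMap. A minimal, illustrative variant (not part of the codelab's sample code):

// Illustrative variant: respond gracefully if the parameter doesn't match
// one of the colorMap keys.
app.intent('favorite fake color', (conv, {fakeColor}) => {
  if (colorMap[fakeColor]) {
    conv.close(`Here's the color`, colorMap[fakeColor]);
  } else {
    conv.close(`Sorry, I don't know that fake color. Try indigo taco, ` +
      `pink unicorn, or blue grey coffee.`);
  }
});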

  5. In the terminal, run the following command to deploy your webhook to Firebase:
firebase deploy

Test your code

To test out your Action in the Actions console simulator:

  1. Type "Talk to my test app" into the Input field and hit enter.
  2. Type "yes".
  3. Type "blue".
  4. Type "sure".
  5. Type "tell me about the first one".

When you select a fake color, you should receive a response that includes a basic card showing that color.

Congratulations!

You've now covered the intermediate skills necessary to build conversational user interfaces with Actions on Google.

What we've covered

What's next

You can explore these resources for learning about Actions on Google:

Follow us on Twitter @ActionsOnGoogle to stay tuned to our latest announcements, and tweet to #AoGDevs to share what you have built!

Feedback survey

Before you go, please fill out this form to let us know how we're doing!