Actions on Google is a developer platform that lets you create software to extend the functionality of the Google Assistant, Google's virtual personal assistant, across more than 500 million devices, including smart speakers, phones, cars, TVs, headphones, and more. Users engage Google Assistant in conversation to get things done, like buying groceries or booking a ride (for a complete list of what's possible now, see the Actions directory). As a developer, you can use Actions on Google to easily create and manage delightful and effective conversational experiences between users and your own 3rd-party service.
This codelab module is part of a multi-module tutorial. Each module can be taken standalone or in a learning sequence with other modules. In each module, you'll be provided with end-to-end instructions on how to build an Action from given software requirements. We'll also teach the necessary concepts and best practices for implementing Actions that give users high-quality conversational experiences.
This codelab covers intermediate level concepts for developing with Actions on Google. We strongly recommend that you familiarize yourself with the topics covered in Build Actions for the Google Assistant (Level 1) before starting this codelab.
In this codelab, you'll build a sophisticated conversational Action with multiple features:
The following tools must be in your environment:
Familiarity with JavaScript (ES6) is strongly recommended, although not required, to understand the webhook code used in this codelab.
You can optionally get the full project code for this codelab from our GitHub repository.
The Firebase Command Line Interface (CLI) will allow you to deploy your Actions project to Cloud Functions.
To install or upgrade the CLI, run the following npm command:
npm -g install firebase-tools
To verify that the CLI has been installed correctly, open a terminal and run:
firebase --version
Make sure the version of the Firebase CLI is above 3.5.0 so that it has all the latest features required for Cloud Functions. If not, run npm install -g firebase-tools to upgrade as shown above.
Authorize the Firebase CLI by running:
firebase login
In Build Actions for the Google Assistant (Level 1), we used the Dialogflow inline editor to quickly get you started on your first Actions project.
For this codelab, you're going to start with the Dialogflow intents from the Level 1 codelab, but develop and deploy the webhook locally on your machine using Cloud Functions for Firebase.
In contrast to using the Dialogflow inline editor, developing on your local machine gives you more control over your programming and deployment environment. This provides several advantages.

To get the base files for this codelab (including the webhook's index.js), run the following command to clone the GitHub repository for the Level 1 codelab.
git clone https://github.com/actions-on-google/codelabs-nodejs
This repository contains the following important files:
level1-complete/functions/index.js: The JavaScript file that contains your webhook's fulfillment code. This is the main file that you'll be editing to add additional Actions and functionality.

level1-complete/functions/package.json: This file outlines dependencies and other metadata for this Node.js project. You can ignore this file for this codelab; you should only need to edit it if you want to use different versions of the Actions on Google client library or other Node.js modules.

level1-complete/codelab-level-one.zip: This is the Dialogflow agent file for the Level 1 codelab. If you've already completed the Level 1 codelab, you can safely ignore this file.

For the sake of clarity, we strongly recommend that you rename the /level1-complete directory to /level2. You can do so by using the mv command in your terminal. For example:
$ cd codelabs-nodejs
$ mv ./level1-complete ./level2
In order to test the Action that you'll build for this codelab, you need to enable the necessary permissions.
Next, you'll need to set up the Actions project and the Dialogflow agent for your codelab.
If you've already completed the Build Actions for the Google Assistant (Level 1) codelab, do the following:
If you're starting from scratch, do the following:
When setting up your Dialogflow agent, restore it from the codelab-level-one.zip file.

Now that your Actions project and Dialogflow agent are ready, do the following to deploy your local index.js file using the Firebase Functions CLI:
Navigate to the /level2/functions directory of your base files clone, then run the following commands (replacing <PROJECT_ID> with your Actions project ID):

firebase use <PROJECT_ID>
npm install
firebase deploy
After a few minutes, you should see "Deploy complete!" indicating that you've successfully deployed your webhook to Firebase.
You need to provide Dialogflow with the URL to the cloud function. To retrieve this URL, follow these steps:
Now you need to update your Dialogflow agent to use your webhook for fulfillment. To do so, follow these steps:
At this point, users can start a conversation by explicitly invoking your Action. Once users are mid-conversation, they can trigger the 'favorite color' custom intent by providing a color. Dialogflow will parse the user's input to extract the information your fulfillment needs (namely, the color) and send it to your fulfillment. Your fulfillment then auto-generates a lucky number to send back to the user.
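For reference, here is the handler in your webhook's index.js (carried over from the Level 1 codelab) that implements this flow; you'll extend it later in this codelab:

// Handle the Dialogflow intent named 'favorite color'.
// The intent collects a parameter named 'color'.
app.intent('favorite color', (conv, {color}) => {
  // Dialogflow passes the extracted 'color' parameter into the handler.
  const luckyNumber = color.length;
  // Respond with the user's lucky number and end the conversation.
  conv.close('Your lucky number is ' + luckyNumber);
});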
To test out your Action in the Actions simulator:
Your Actions project always has an invocation name, like "Google IO 18". When users say "Talk to Google IO 18", this triggers the Dialogflow welcome intent. Every Dialogflow agent has one welcome intent which acts as an entry point for users to start conversations.
Most of the time, users would rather jump to the specific task they want to accomplish than start at the beginning of the conversation every time. You can provide explicit deep links and implicit invocations as shortcuts into the conversation to help users get things done more efficiently.
Adding deep links and implicit invocations to your Actions is a simple, single-step process using the Google Assistant integration page in the Dialogflow console.
In your Actions project, you should have defined a custom Dialogflow intent called 'favorite color' in an agent (this was covered in the Level 1 codelab). The agent parses your training phrases, like "I love yellow" and "Purple is my favorite," extracts the color parameter from each phrase, and makes it available to your fulfillment.
For this codelab, you're going to add the 'favorite color' intent as an implicit invocation, meaning that users can invoke that intent and skip the welcome intent. Doing this also enables users to explicitly invoke the 'favorite color' intent as a deep link (for example, "Hey Google, talk to my test app about blue"). The training phrases and parameters you defined for the 'favorite color' intent enable Dialogflow to extract the color parameter when users invoke this deep link.
To add your intent for deep linking and implicit invocation, do the following:
The Assistant will now listen for users to provide a color in their invocation and extract the color parameter for your fulfillment.
To test out your deep link in the Actions simulator:
It's good practice to create a custom fallback intent to handle invocation phrases that don't provide the parameters you're looking for. For example, instead of saying a color, the user might say something unexpected like "Talk to my test app about bananas". The term "bananas" would not fit into any of our Dialogflow intents, so we'd need to build a catch-all intent.
Since the Assistant now listens for any phrase that matches the 'favorite color' intent, you should provide a custom fallback intent specifically for catching anything else.
To set up your custom fallback intent, do the following:
Use the @sys.any entity to tell Dialogflow to generalize the expression to any grammar (not just "banana"). Double-click on "banana" and filter for or select @sys.any.

Dialogflow may show a warning about using the @sys.any entity. You can safely ignore this for now and click OK. (Generally, it's not advisable to use the @sys.any entity, since it can overpower any other intent's speech biasing, but this is a special case where we ensure this will only be triggered at invocation time when other intents have not been matched.)

To test out your custom fallback intent in the Actions simulator, type "Talk to my test app about banana" into the Input field and hit enter.
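The fallback intent's reply can be defined as a static text response directly in Dialogflow. If you'd rather handle it in your webhook, a minimal sketch might look like the following; the intent name 'Deep Link Fallback' is only an assumed example, and webhook fulfillment would need to be enabled for the intent in Dialogflow:

// Hypothetical handler for the custom fallback intent. The name
// 'Deep Link Fallback' is an assumed example; use whatever name you gave your
// fallback intent and enable webhook fulfillment for it in Dialogflow.
app.intent('Deep Link Fallback', (conv) => {
  // Steer the user back toward something the Action can handle.
  conv.ask(`Sorry, I can only help with colors. What's your favorite color?`);
});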
You can make your Actions more engaging and interactive by using personalized information from the user. To request access to user information, your webhook can use helper intents to obtain values with which to personalize your responses.
You can use the actions_intent_PERMISSION helper intent to obtain the user's display name, with their permission. To use the permission helper intent:
Navigate to the /level2/functions folder and open the index.js file in any text editor on your local machine.

Replace this line:

const {dialogflow} = require('actions-on-google');
with this:
// Import the Dialogflow module and response creation dependencies from the
// Actions on Google client library.
const {
dialogflow,
Permission,
Suggestions,
} = require('actions-on-google');
Notice that you are importing the Suggestions dependency; this lets your webhook include suggestion chips in your responses.

Next, replace your 'Default Welcome Intent' handler with the following code, which asks the user for permission to access their name:
// Handle the Dialogflow intent named 'Default Welcome Intent'.
app.intent('Default Welcome Intent', (conv) => {
conv.ask(new Permission({
context: 'Hi there, to get to know you better',
permissions: 'NAME'
}));
});
Next, you'll need to update your webhook to handle the user's response to the permission prompt. If the user granted permission, you'll use their information in your response; if they didn't, you'll still gracefully move the conversation forward.
To respond to the user:
Add the following code to index.js:

// Handle the Dialogflow intent named 'actions_intent_PERMISSION'. If user
// agreed to PERMISSION prompt, then boolean value 'permissionGranted' is true.
app.intent('actions_intent_PERMISSION', (conv, params, permissionGranted) => {
if (!permissionGranted) {
conv.ask(`Ok, no worries. What's your favorite color?`);
conv.ask(new Suggestions('Blue', 'Red', 'Green'));
} else {
conv.data.userName = conv.user.name.display;
conv.ask(`Thanks, ${conv.data.userName}. What's your favorite color?`);
conv.ask(new Suggestions('Blue', 'Red', 'Green'));
}
});
You register a callback function to handle the actions_intent_PERMISSION intent you created earlier. In the callback, you first check whether the user granted permission to know their display name. The client library passes this argument to the callback function as the third parameter, here called permissionGranted.
The conv.user.name.display value represents the user's display name, sent to your webhook as part of the HTTP request body. If the user grants permission, you store the value of conv.user.name.display in a property called userName on the conv.data object.
To provide additional hints to the user on how to continue the conversation, you call the Suggestions() function to create suggestion chips that recommend some example colors. If the user is on a device with a screen, they can provide their input by tapping on a chip rather than by saying or typing their response.
The conv.data object is a data structure provided by the client library for in-dialog storage. You can set and manipulate the properties on this object throughout the duration of the conversation for this user.
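As an illustrative sketch only (the intent names and the favoriteSnack property below are hypothetical and not part of this codelab), any handler can write a value to conv.data, and a later handler in the same conversation can read it back; the data is discarded when the conversation ends:

// Illustrative sketch: 'example intent one', 'example intent two', and the
// 'favoriteSnack' property are hypothetical and not part of this codelab.
app.intent('example intent one', (conv) => {
  // Store a value that persists for the rest of this conversation.
  conv.data.favoriteSnack = 'popcorn';
  conv.ask('Noted. What else would you like to talk about?');
});

app.intent('example intent two', (conv) => {
  // Read the stored value back in a later turn of the same conversation.
  conv.ask(`Earlier you mentioned ${conv.data.favoriteSnack}. Anything else?`);
});

Next, update the 'favorite color' handler so it uses the stored userName. In index.js, replace this: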
// Handle the Dialogflow intent named 'favorite color'.
// The intent collects a parameter named 'color'.
app.intent('favorite color', (conv, {color}) => {
const luckyNumber = color.length;
// Respond with the user's lucky number and end the conversation.
conv.close('Your lucky number is ' + luckyNumber);
});
with this:
// Handle the Dialogflow intent named 'favorite color'.
// The intent collects a parameter named 'color'
app.intent('favorite color', (conv, {color}) => {
const luckyNumber = color.length;
if (conv.data.userName) {
conv.close(`${conv.data.userName}, your lucky number is ${luckyNumber}.`);
} else {
conv.close(`Your lucky number is ${luckyNumber}.`);
}
});
Here you modify the callback function for the 'favorite color' intent to use the userName property to address the user by name. If the conv.data object doesn't have a property called userName (that is, the user previously denied permission to know their name, so the property was never set), then your webhook still responds, but without the user's name.
Redeploy your updated webhook by running the following command:

firebase deploy
To test out your Action in the Actions simulator:
You can embed SSML in your response strings to alter the sound of your spoken responses, or even embed sound effects or other audio clips.
The following shows an example of SSML markup:
<speak>
Mandy, your lucky number is 5.
<audio src="https://actions.google.com/sounds/v1/cartoon/clang_and_wobble.ogg"></audio>
</speak>
For this codelab, we'll use a sound clip from the Actions on Google sound library.
To add a sound effect to the 'favorite color' response, do the following:

Open index.js in an editor and replace this:

// Handle the Dialogflow intent named 'favorite color'.
// The intent collects a parameter named 'color'
app.intent('favorite color', (conv, {color}) => {
const luckyNumber = color.length;
if (conv.data.userName) {
conv.close(`${conv.data.userName}, your lucky number is ${luckyNumber}.`);
} else {
conv.close(`Your lucky number is ${luckyNumber}.`);
}
});
with this:
// Handle the Dialogflow intent named 'favorite color'.
// The intent collects a parameter named 'color'
app.intent('favorite color', (conv, {color}) => {
const luckyNumber = color.length;
const audioSound = 'https://actions.google.com/sounds/v1/cartoon/clang_and_wobble.ogg';
if (conv.data.userName) {
// If we collected user name previously, address them by name and use SSML
// to embed an audio snippet in the response.
conv.close(`<speak>${conv.data.userName}, your lucky number is ` +
`${luckyNumber}.<audio src="${audioSound}"></audio></speak>`);
} else {
conv.close(`<speak>Your lucky number is ${luckyNumber}.` +
`<audio src="${audioSound}"></audio></speak>`);
}
});
Here, you declare an audioSound variable containing the string URL for a statically hosted audio file on the web. You use the <speak> SSML tags around the strings for the user response, indicating to the Google Assistant that your response should be parsed as SSML.
The <audio> tag embedded in the string indicates that you want the Assistant to play audio at that point in the response. The src attribute of that tag indicates where the audio is hosted.
Redeploy your webhook by running the following command:

firebase deploy
To test out your Action in the Actions simulator:
To keep the conversation going, you can add follow-up intents that will trigger based on the user's response after a particular intent. To add follow-up intents to 'favorite color', do the following:
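The follow-up intents themselves are created in the Dialogflow console. If you also enable webhook fulfillment for one of them, you can register a handler for it just like any other intent. The following is only an illustrative sketch: Dialogflow names follow-up intents after the parent by default (for example, 'favorite color - yes'), but your intent names and responses may differ:

// Hypothetical handler for a 'yes' follow-up intent. The name
// 'favorite color - yes' follows Dialogflow's default naming for follow-up
// intents and is assumed here; enable webhook fulfillment for the intent so
// that Dialogflow routes it to this handler.
app.intent('favorite color - yes', (conv) => {
  // Continue the conversation however your Action needs to.
  conv.ask(`Great, let's keep going. What would you like to do next?`);
});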
As you expand your conversational app, you can use custom entities to further deepen and personalize the conversation. We'll cover how to do this in this section.
So far, you've only been using built-in entities (@sys.color, @sys.any) to match user input. You're going to create a custom entity (also called a developer entity) in Dialogflow so that, when a user provides one of a few fake colors, you can follow up with a custom response from your webhook.
To create a custom entity:
You should see the "fakeColor" parameter show up under Actions and parameters now that Dialogflow recognizes your custom entity.
When a user selects one of the fake colors you've defined, your webhook will respond with a basic card that shows that color.
To configure your webhook:
Navigate to the /level2/functions folder and open index.js in an editor. Replace this:

// Import the Dialogflow module and response creation dependencies from the
// Actions on Google client library.
const {
dialogflow,
Permission,
Suggestions,
} = require('actions-on-google');
with this:
// Import the Dialogflow module and response creation dependencies from the
// Actions on Google client library.
const {
dialogflow,
Permission,
Suggestions,
BasicCard,
} = require('actions-on-google');
Next, replace this:

// Handle the Dialogflow intent named 'favorite color'.
// The intent collects a parameter named 'color'
app.intent('favorite color', (conv, {color}) => {
const luckyNumber = color.length;
const audioSound = 'https://actions.google.com/sounds/v1/cartoon/clang_and_wobble.ogg';
if (conv.data.userName) {
// If we collected user name previously, address them by name and use SSML
// to embed an audio snippet in the response.
conv.close(`<speak>${conv.data.userName}, your lucky number is ` +
`${luckyNumber}.<audio src="${audioSound}"></audio></speak>`);
} else {
conv.close(`<speak>Your lucky number is ${luckyNumber}.` +
`<audio src="${audioSound}"></audio></speak>`);
}
});
with this:
// Handle the Dialogflow intent named 'favorite color'.
// The intent collects a parameter named 'color'.
app.intent('favorite color', (conv, {color}) => {
const luckyNumber = color.length;
const audioSound = 'https://actions.google.com/sounds/v1/cartoon/clang_and_wobble.ogg';
if (conv.data.userName) {
// If we collected user name previously, address them by name and use SSML
// to embed an audio snippet in the response.
conv.ask(`<speak>${conv.data.userName}, your lucky number is ` +
`${luckyNumber}.<audio src="${audioSound}"></audio> ` +
`Would you like to hear some fake colors?</speak>`);
conv.ask(new Suggestions('Yes', 'No'));
} else {
conv.ask(`<speak>Your lucky number is ${luckyNumber}.` +
`<audio src="${audioSound}"></audio> ` +
`Would you like to hear some fake colors?</speak>`);
conv.ask(new Suggestions('Yes', 'No'));
}
});
// Define a mapping of fake color strings to basic card objects.
const colorMap = {
'indigo taco': {
title: 'Indigo Taco',
text: 'Indigo Taco is a subtle bluish tone.',
image: {
url: 'https://storage.googleapis.com/material-design/publish/material_v_12/assets/0BxFyKV4eeNjDN1JRbF9ZMHZsa1k/style-color-uiapplication-palette1.png',
accessibilityText: 'Indigo Taco Color',
},
display: 'WHITE',
},
'pink unicorn': {
title: 'Pink Unicorn',
text: 'Pink Unicorn is an imaginative reddish hue.',
image: {
url: 'https://storage.googleapis.com/material-design/publish/material_v_12/assets/0BxFyKV4eeNjDbFVfTXpoaEE5Vzg/style-color-uiapplication-palette2.png',
accessibilityText: 'Pink Unicorn Color',
},
display: 'WHITE',
},
'blue grey coffee': {
title: 'Blue Grey Coffee',
text: 'Calling out to rainy days, Blue Grey Coffee brings to mind your favorite coffee shop.',
image: {
url: 'https://storage.googleapis.com/material-design/publish/material_v_12/assets/0BxFyKV4eeNjDZUdpeURtaTUwLUk/style-color-colorsystem-gray-secondary-161116.png',
accessibilityText: 'Blue Grey Coffee Color',
},
display: 'WHITE',
},
};
// Handle the Dialogflow intent named 'favorite fake color'.
// The intent collects a parameter named 'fakeColor'.
app.intent('favorite fake color', (conv, {fakeColor}) => {
// Present user with the corresponding basic card and end the conversation.
conv.close(`Here's the color`, new BasicCard(colorMap[fakeColor]));
});
This new code performs two main tasks:
First, it sets up a mapping (colorMap) of color strings (e.g. "indigo taco", "pink unicorn", "blue grey coffee") to the content needed for BasicCard objects. BasicCard is a client library class for constructing visual responses corresponding to the basic card type.
In the constructor calls, you pass configuration options relevant to each specific color, including:
Finally, you set a callback function for the 'favorite fake color' intent, which uses the fakeColor parameter that the user selected to create a card corresponding to that fake color and present it to the user.
Redeploy your webhook by running the following command:

firebase deploy
To test out your Action in the Actions simulator:
When you select a fake color, you should receive a response that includes a basic card, similar to Figure 2.
Congratulations!
You've now covered the intermediate skills necessary to build conversational user interfaces with Actions on Google.
In the next codelab, you'll make further refinements to your Actions project. You'll learn more about conversational design, how to handle user silence, and how to present users with a visual selection response on devices with supported screens.
You can explore these resources for learning about Actions on Google:
Follow us on Twitter @ActionsOnGoogle to stay up to date on our latest announcements, and tweet to #AoGDevs to share what you have built!
Before you go, please fill out this form to let us know how we're doing!