Actions on Google is the platform for building conversational apps that can be invoked through the Google Assistant. It lets you build a dialog with your users, using tools to process natural language input and return rich audio and visual responses. Apps built for the Assistant work across Assistant surfaces, including Google Home, eligible Android and iOS devices, and more soon. These apps are built on the web, meaning they require no special hardware and can leverage web infrastructure you already use.

In this codelab, you'll learn to deploy the Facts about Google sample app for the Assistant using the Actions Console and personalize it with your own content. In the remaining time, you'll expand the sample to introduce advanced responses, conversational best practices, and internationalization (i18n) features.

The Actions on Google platform is not dependent on Android at all and runs across Google Assistant devices through a web architecture. This means there is no Android development required.

What you'll learn

What you'll need

This initial part of the session covers the basic concepts of Actions on Google and apps for Assistant.

The Google Assistant — A conversation between you and Google that helps you get things done in your world.

Google Home/Mobile device — The surface to interact with the Assistant.

Actions on Google — How developers can extend the Assistant (via Assistant apps).

The Google Assistant is Google's personal assistant that works across Google Home and mobile devices, including iOS and Android. Actions on Google allows developers to build conversations with users through the Google Assistant. Conversations are invoked by the user, either by name or by implicit phrasing, and controlled entirely by the app thereafter.

Conversations are delivered through the Conversation API, which works as a reverse HTTP API: the app is hosted on any publicly accessible HTTPS endpoint, which receives a POST request on invocation and for each subsequent user utterance throughout the conversation. The dialog ends either when the app closes the conversation (most likely after the user indicates they are done) or when the user cancels the conversation explicitly.
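As a rough sketch of this request/response cycle, a webhook boils down to mapping a POST body to a JSON reply. The field names below are simplified placeholders, not the exact Conversation API schema; real apps should use the official client library instead:

```javascript
// Minimal sketch of a conversation webhook handler. The request/response
// shapes here are illustrative stand-ins, NOT the real Conversation API.
function handleConversationRequest(body) {
  // On invocation, the platform POSTs the triggering intent.
  if (body.intent === 'WELCOME') {
    return { speech: 'Hi! Want to hear a fact?', expectUserResponse: true };
  }
  if (body.intent === 'QUIT') {
    // Not expecting a response ends the dialog.
    return { speech: 'Goodbye!', expectUserResponse: false };
  }
  // Subsequent POSTs carry the user's utterance.
  return { speech: `You said: ${body.utterance}`, expectUserResponse: true };
}

console.log(handleConversationRequest({ intent: 'WELCOME' }).speech);
```

Each POST gets exactly one JSON reply; setting the "keep listening" flag to false is what closes the conversation.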

Download the Facts about Google sample and deploy it through a new Actions Console project.

  1. Navigate to
  2. Click Clone or Download. Either clone the sample directly using Git, or download and unpack the zip.

The sample contains several important files.

Throughout this codelab, you will use two main consoles.

  1. Actions Console - Here you can manage your Actions on Google project. You can use this console to test the app and submit it for publishing. This project will also be used as your Firebase project.
  2. Dialogflow Console - This is the console to create and edit a Dialogflow agent to handle conversation design. We will use this console to upload and edit the agent contained in the sample.

To deploy this sample:

  1. Click Add/Import Project (and give it any name you choose) in the Actions Console at
  2. Click on the Add Actions button.
  3. Click Build in the Dialogflow box to indicate that you want to build your app using Dialogflow rather than handling user utterances directly.

Dialogflow offers tools for natural language understanding, including intent matching, entity extraction, and conversation state management via contexts.

  1. Click Create Actions on Dialogflow (you may need to sign in and grant permissions to access your account):

  1. A new agent will be created for you in the Dialogflow console and linked to the same project used for the Actions Console. Click SAVE to enter the console.
  2. To upload the Facts about Google sample into Dialogflow, click the settings gear on the left pane of the Dialogflow console. You may need to open the left pane with the hamburger menu in the top left of the screen.
  3. Click the Export and Import tab and click Restore From Zip.
  4. Choose the "" file from the unpacked sample and follow the instructions to upload the agent.
  1. This includes typing "RESTORE" to confirm the restoration of the agent, as well as pressing RESTORE and DONE to finish the upload.

  1. Click SAVE and click INTENTS on the left panel.
  2. Click the downward facing arrows next to the choose_cats and choose_fact intents to reveal the entire conversational intent structure.

These intents show the structure of the app, mapping user utterances to intents using the following conversational structure.

The sample can be invoked using a predetermined trigger phrase (like "talk to") followed by the invocation name ("Facts about Google"). This triggers the Default Welcome Intent in Dialogflow through the "WELCOME" event. Afterwards, the user can choose a fact type, either "History" or "Headquarters", to trigger the choose_fact intent, or say "cats" to trigger the choose_cats intent. The user can also shortcut directly to these intents at invocation by adding action phrases, e.g. "talk to Facts about Google about Google's history", "talk to Facts about Google about cats", or "talk to Facts about Google about Google's HQ".

From there, the Dialogflow agent enters an active context, during which the user can continue to say "yes" to hear more facts. The context maintains the state of the fact type and enables the tell_fact and tell_cat_fact intents to be triggered. Each of these four intents uses the web service to fetch a fact string through the Dialogflow Webhook API. Finally, when the user declares they're done, or declines to hear another fact, the quit_facts intent is triggered to end the conversation.
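The flow above can be sketched as a tiny state machine. This is purely illustrative: in the real app, Dialogflow's contexts do this gating, and the intent and state names below mirror the sample rather than any Dialogflow API:

```javascript
// Illustrative model of the sample's conversation flow. Dialogflow
// contexts perform this gating for real; this just shows the idea.
function nextIntent(state, utterance) {
  if (!state.factType) {
    // No active context yet: the user must first pick a fact type.
    if (utterance === 'history' || utterance === 'headquarters') {
      return { intent: 'choose_fact', state: { factType: utterance } };
    }
    if (utterance === 'cats') {
      return { intent: 'choose_cats', state: { factType: 'cats' } };
    }
    return { intent: 'fallback', state };
  }
  // Context active: "yes" keeps fetching facts of the remembered type.
  if (utterance === 'yes') {
    const intent = state.factType === 'cats' ? 'tell_cat_fact' : 'tell_fact';
    return { intent, state };
  }
  // Anything else (e.g. "no", "I'm done") ends the conversation.
  return { intent: 'quit_facts', state: {} };
}
```

The key point is that the remembered fact type, not the utterance alone, decides which tell intent fires.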

The Facts about Google sample covers a few key concepts in Actions on Google:

In this section, we'll use Cloud Functions for Firebase to deploy a webhook that provides custom responses for the Dialogflow agent.

The Facts about Google sample provides dynamic responses and custom behavior using a webhook, rather than static responses hardcoded into the Dialogflow agent through the console. The webhook is provided as part of the sample. You will deploy it using Cloud Functions for Firebase, but fulfillment can be hosted on any publicly accessible HTTPS endpoint.

  1. Make sure you have the Firebase CLI installed and logged in (see prerequisites).
  2. Make sure to log in, grant account access permission, and initialize Cloud Functions functionality. See instructions here.
  3. Run "firebase use <project_id>" with your Actions project ID. You can find this ID on the Project Settings page of the Actions Console; from the App Overview page, click the gear icon in the left pane to reach it.
  4. In a terminal or command line prompt, navigate to the functions/ directory of the sample.
  5. Run "npm install" to install needed dependencies.
  6. Run "firebase deploy --only functions". This step may take 60-90 seconds.
  7. Once the Cloud Function is deployed, its Function URL will be printed in the Firebase CLI output. This URL is separate from the Project Console URL.
  8. In the Dialogflow console, click Fulfillment in the left pane.
  9. Enable Webhook.
  10. Enter the Function URL from the Firebase CLI logs.
  11. Click SAVE.

Now that the app has been fully deployed, it is time to make it available for testing.

  1. In the Dialogflow Console, go to the Integrations page and click Actions on Google to open the Actions on Google integration settings. You may need to authorize access again; in the popup that appears, click AUTHORIZE in the bottom right.

  1. Click TEST

This will enable the app for testing on both the Web Simulator and on any devices with the Google Assistant logged into the same account associated with the Actions Console project.

  1. To find the Web Simulator, click VIEW to return to the Actions Console.

  1. Invoke the app using the suggested phrase in the Input text box in the lower left, "Talk to my test app".
  2. You may need to turn on Voice & Audio Activity, Web & App Activity (including Chrome browsing history), and Device Information permissions for your Google Account at

Try testing the various intents and supported dialogs in the app. Play around with it, try to change the context and fact type (ask for cat facts), and look at differences in behavior between surface types, like Google Home devices or mobile phones. You can change the testing surface using the icons above the request logs.

In this section, you'll edit the sample on your computer to include facts about yourself. Make sure to only use facts you are willing to publish publicly.

First, edit some of the facts to see how the content of the app can be changed. For instance, look in the strings.js file in the functions/ directory of the sample to find most of the dynamic content.

Try editing the Google-related facts, such as changing the numbers or locations noted in the strings:

const categories = [
  {
    "category": "headquarters",
    "suggestion": "Headquarters",
    "facts": [
      "Google's headquarters is in Mountain View, California.",
      "Google has over 30 cafeterias in its main campus.",
      "Google has over 10 fitness facilities in its main campus."
    ],
    "factPrefix": "Okay, here's a headquarters fact."
  },
  {
    "category": "history",
    "suggestion": "History",
    "facts": [
      "Google was founded in 1998.",
      "Google was founded by Larry Page and Sergey Brin.",
      "Google went public in 2004.",
      "Google has more than 70 offices in more than 40 countries."
    ],
    "factPrefix": "Sure, here's a history fact."
  }
];

Once the changes are made, run "firebase deploy --only functions" again to redeploy the new webhook. When testing the app in the Console simulator, you should see updated fact data in the app.

Now you're ready to change the facts in the sample files altogether.

  1. Decide at least two categories of facts about yourself which you'd like to share.
  2. Replace the "headquarters" and "history" strings in the categories array in strings.js with your new fact categories.
  3. Change the suggestion value, since these values are referenced when displaying suggestion chips for the user on screen devices.
  4. Replace the facts strings with any number of related facts about yourself under the categories you chose. Don't forget the commas!
  5. Replace the factPrefix strings with values that make sense for your facts, and reflect a bit of your personality.
  6. Replace the heardItAll string in the content object to change the speech used when a user has exhausted all facts of a specific category.
  7. In the Dialogflow console, click Entities in the left panel
  8. Click into the fact-category entity.

  1. Replace "history" and "headquarters" with the new categories you've chosen, and feel free to add any synonyms which make sense. Try to think about how the user might ask you about your categories. For instance, when asking facts about Google's HQ, the user might say "headquarters" or "Google's headquarters" or "the Googleplex".
  2. Click SAVE when done.

These entity values are referenced in the "choose_fact" intent, meaning that intent will still be triggered with your new entity values. Most of the User Says phrases will still make sense for any new fact categories, but feel free to add or remove any depending on how you expect the user to ask for facts about you.

  1. Finally, to make the app make sense, change the opening prompt of the Default Welcome Intent. Instead of listing options related to facts about Google, provide options for the categories you've chosen. This prompt should guide the user to speak a phrase similar to the ones listed in the User Says section of the choose_fact intent.

  1. Under the Actions on Google tab, change the Suggestion Chips text to match your new categories.
  2. Click SAVE when done.

  1. Redeploy the webhook using "firebase deploy --only functions". When testing the app, notice the new content.

You can take your customization to the next level by changing the accompanying rich media for the facts.

const content = {
  "images": [
    "Google app logo",
    "Google logo",
    "Stan the Dinosaur at Googleplex",
    "Biking at Googleplex"
  ],
  "link": ""
};

Replace the images with links to images of yourself or related interests. The images are chosen at random when displaying cards as part of responses on screen devices. Change the "link" text as well to link out to a personal website or other site of interest. This text is used in the "Learn more" button on cards shown on screen devices.
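The random choice described above can be as simple as indexing with Math.random(). This is a sketch of the idea, not the sample's exact helper, and the array contents are stand-ins:

```javascript
// Pick a random entry from the images array, as the sample does when
// building a card for a screen device.
function getRandomImage(images) {
  return images[Math.floor(Math.random() * images.length)];
}

// Stand-in data; in the sample each entry also carries an image URL.
const images = ['Google app logo', 'Google logo', 'Biking at Googleplex'];
console.log(getRandomImage(images));
```

Math.random() returns a value in [0, 1), so the computed index always stays within the array bounds.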

Now redeploy the webhook using "firebase deploy --only functions". When testing the app in the Actions Console simulator, notice the new content.

To fully update the app, you'll need to change the fallback responses as well. These are used in various situations where the Dialogflow agent is unable to match the user's intent. This needs to be changed in three places:

  1. In the Dialogflow console, change the response given in the In Dialog Fallback intent.

You'll also need to change the suggestion chips under the Actions on Google tab.

  1. Similarly, change the response given in the Unrecognized Deep Link Fallback intent. This intent in particular is triggered when the app is triggered with an unsupported action phrase, like "Talk to Facts about Google about the color blue." Since the webhook is enabled for this intent, the response in the console is only used when the webhook fails for any reason.
  2. Change the unhandled text in the strings.js file. You can use the "%s" syntax to pivot from the user utterance to whatever fact categories you support. This is the actual text that should be displayed when the Unrecognized Deep Link Fallback intent is triggered. Remember to deploy with "firebase deploy --only functions".

Actions on Google is working to expand the languages and regions where apps for Assistant can be used. Here you'll learn how to customize the responses for local languages. Keep in mind this is a preliminary solution, and any real internationalization work should involve use of related libraries (such as Moment.js and Numeral.js) for localizing particular features of conversation, like times, dates, currencies, genders, etc.

In this section we'll localize responses for United Kingdom English (en-GB) and United States English (en-US). The approach we use, however, can be used to further support any locales supported by Actions on Google.

In order to localize responses, we'll use translated strings.js files.

  1. On your local copy of the app, in the functions directory, move the strings.js file to a new file in the same directory called strings_en-US.js.
  2. Create another copy of the same file in the same directory, and name it strings_en-GB.js. The strings in the second file will represent content to be used for U.K. English.
  3. In the strings_en-GB.js file, replace any strings accordingly for an en-GB audience, including translations and edits
  4. Make the following two edits to the index.js file in the functions directory.

The first edit will be to replace line 20

const strings = require('./strings');

with this

const DEFAULT_LOCALE = 'en-US';
const localizedStrings = {
  'en-US': require('./strings_en-US.js'),
  'en-GB': require('./strings_en-GB.js')
};
Add the following at the start of each of the following functions in index.js

const strings = localizedStrings[app.getUserLocale()] || localizedStrings[DEFAULT_LOCALE];
  1. Redeploy the webhook using "firebase deploy --only functions".
  2. In the Simulator, use the Language dropdown to switch between English language locales and notice the difference in content.
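To see how that lookup behaves, here is a self-contained sketch with stubbed strings objects and a fake app object. app.getUserLocale() comes from the actions-on-google client library; everything else here (the greeting strings, the stringsFor helper) is a stand-in for illustration:

```javascript
// Stand-ins for the real require()'d strings modules.
const DEFAULT_LOCALE = 'en-US';
const localizedStrings = {
  'en-US': { greeting: 'Hi! Want to hear a fact?' },
  'en-GB': { greeting: 'Hello! Fancy hearing a fact?' }
};

// Same lookup as the line added to each handler: use the user's locale
// if we have strings for it, otherwise fall back to the default.
function stringsFor(app) {
  return localizedStrings[app.getUserLocale()] || localizedStrings[DEFAULT_LOCALE];
}

// Fake app objects standing in for the client library's DialogflowApp.
console.log(stringsFor({ getUserLocale: () => 'en-GB' }).greeting);
console.log(stringsFor({ getUserLocale: () => 'fr-FR' }).greeting); // falls back to en-US
```

The || fallback is what keeps the app working for locales you haven't translated yet.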

Naturally, in conversation, users sometimes need to replay some information. This occurs in human dialog as well, often when some speech was not understood or the user missed some specific words.

In this section, we'll use a wrapping function to allow the user to repeat some speech at any time.

  1. In the Dialogflow console, click into the Intents screen from the left pane, and click "Create Intent"
  2. In the Intent name text box at the top next to the blue circle, name the intent "repeat_intent"
  3. Add User says phrases which might indicate the user wants to replay information, like
  1. "what?"
  2. "what was that?"
  3. "say that again?"
  4. "can you say that again?"
  5. "can you repeat that?"
  6. "please repeat that"
  1. In the "Action" text box, enter "repeat". This will be the alias used by the Node.js client library to match the intent.
  2. Click "Fulfillment" at the bottom of the intent and Check "Use webhook". Remember to save your changes!
  3. On your local copy of the app, add two new functions, ask() and repeat(), to index.js:
const ask = (app, inputPrompt, noInputPrompts) => { = inputPrompt; = noInputPrompts;
  app.ask(inputPrompt, noInputPrompts);
};

const repeat = app => {
  if (! {
    ask(app, `Sorry, I didn't understand. Which type of fact would you like to hear?`);
  }
  // Move SSML start tags for simple response over
  if (typeof === 'string') {
    let repeatPrefix = `Sure, here's that again. `;
    const ssmlPrefix = `<speak>`;
    if ( { =;
      repeatPrefix = ssmlPrefix + repeatPrefix;
    }
    app.ask(repeatPrefix +,;
  } else {
    app.ask(,;
  }
};
  1. In all other code, replace every call of the form app.ask(first_arg, optional_arg) with a call to ask(app, first_arg, optional_arg), which does not have the app prefix. Do not change the code inside the new ask() and repeat() functions you just added.
  2. Modify the actionMap variable to include the handler for the new intent.

Below this code

/** @type {Map<string, function(DialogflowApp): void>} */
const actionMap = new Map();
actionMap.set(Actions.UNRECOGNIZED_DEEP_LINK, unhandledDeepLinks);
actionMap.set(Actions.TELL_FACT, tellFact);
actionMap.set(Actions.TELL_CAT_FACT, tellCatFact);

Add this line

actionMap.set('repeat', repeat);
  1. Redeploy the webhook using "firebase deploy --only functions". When testing the app, try saying "what" and test the new behavior.

One key to a consistent conversational experience is greeting users differently on their first invocation than on subsequent invocations.

Find out more about this here.

Learn more about Best Practices and make your app even better. Then, submit your app for review and publishing!

Read more about Actions on Google at the developer site, "Actions on Google Developers".

(!) Please feel free to join our "Actions on Google Developer Community" and follow us @Actions on Google