Welcome to the Firebase App on Assistant codelab. In this codelab, you'll learn how to use Firebase, API.AI, and Google Assistant to create an App on Assistant.

What are you going to build in this codelab?

What you'll learn

What you'll need

How will you use this tutorial?

Read it through only
Read it and complete the exercises

How would you rate your experience with building web apps?

Novice Intermediate Proficient

What are Apps for Assistant?

Apps for Assistant are an exciting new way to interact with your users through the Google Assistant. They provide a conversational interface between you and your user, and they are super simple to build with API.AI. You can build all sorts of Assistant apps, from home automation to games and services.

What is Firebase?

Firebase is a unified mobile development platform that provides tools to help you build, grow and monetize your app. For this codelab, you won't build a mobile app, but you will use two of its features to provide the backend services used to power the app.

Firebase Realtime Database is a cloud-hosted NoSQL database that stores data as JSON. For this codelab, this will provide the data storage needed for the app to store its knowledge.

Cloud Functions for Firebase lets you run backend code on Google's cloud infrastructure in response to events in your Firebase project. For this codelab, you will create an HTTP endpoint, serviced by Cloud Functions, that responds to requests from API.AI.
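
To make that concrete: a Cloud Functions HTTPS endpoint is conceptually just a Node-style handler that receives a request and writes a response. The following framework-free sketch is illustrative only (the handler name and response shape are made up, not the codelab's actual code):

```javascript
// A hypothetical webhook handler, stripped of all Firebase plumbing.
// API.AI POSTs a JSON body describing the matched intent; the handler
// answers with JSON containing the text to speak back to the user.
function webhookHandler(request, response) {
  response.setHeader('Content-Type', 'application/json');
  response.end(JSON.stringify({ speech: `You asked for: ${request.url}` }));
}
```

Cloud Functions supplies the real `request` and `response` objects and handles all the HTTP listening for you; your code only fills in the handler body.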

What is API.AI?

API.AI provides server-side infrastructure for creating conversational scenarios and building advanced dialogues to manage the conversation flow with the user.

What is Google Assistant?

At Google, we believe the future is artificial intelligence first. Artificial intelligence is about making computers "smart" so they can think on their own.

We have been investing heavily in several areas of artificial intelligence.

These investments come together in the Google Assistant, which allows you to have a conversation with Google.

Clone the GitHub repository from the command line.

$ git clone https://github.com/firebase/assistant-codelab.git

Create project

From the Firebase home page, click Console, then click Add Project.

Name the project animal-guesser, then click Create Project.

Import bootstrap database contents from JSON

  1. Select Database from the left-nav menu.
  2. Select Import JSON from the overflow menu at the top right.
  3. Choose the database.json file from the project/ directory in the GitHub clone.

Examine the database structure

Walk through the database structure in the console (effectively a binary tree) to see how it will power the app's knowledge.
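
To make the structure concrete, here is a minimal sketch (the keys and animals are made up for illustration) of how such a binary-tree knowledge base can be stored as JSON and walked with yes/no answers. Interior nodes hold a question (`q`) plus yes/no child keys (`y`, `n`); leaf nodes hold an answer (`a`):

```javascript
// Hypothetical miniature of the /animal-knowledge structure.
const animalKnowledge = {
  first: 'q_warm',  // key of the root question
  graph: {
    q_warm: { q: 'Is it warm-blooded?', y: 'a_dog', n: 'a_snake' },
    a_dog: { a: 'dog' },
    a_snake: { a: 'snake' }
  }
};

// Follow a list of 'y'/'n' answers from the root down to a guess.
function walk(knowledge, answers) {
  let node = knowledge.graph[knowledge.first];
  for (const yn of answers) {
    node = knowledge.graph[node[yn]];
  }
  return node.a;  // the guess at the leaf
}

console.log(walk(animalKnowledge, ['y'])); // → dog
```

The app's Cloud Function performs exactly this kind of traversal, one question per conversational turn, using the context parameters to remember where in the tree it is.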

Create a new API.AI project

  1. Log into the API.AI console.
  2. Follow the introduction tutorial for creating your first agent or click here to go straight into the create agent form.
  3. Fill in with name and description. We recommend:

Import some initial API.AI intent(s)

  1. Click the ⚙ icon to go to agent settings.
  2. Open the Export and Import tab.
  3. Select Assistant.zip from the clone to import the initial "Intents."
  4. Create a new "Intent" called "Guess is Correct".

Do a test conversation (fulfilled by a static/dummy endpoint)

  1. In the left navigation pane (you may need to expand it via the "hamburger" in the upper left), click on Fulfillment.
  2. The Webhook should already be enabled; if it isn't, toggle the ENABLED switch.
  3. Replace or fill in the URL field with this URL: https://us-central1-assist-e48e7.cloudfunctions.net/assistantcodelab
  4. Don't forget to click Save!

Now we're ready to do a test conversation. We can do this straight from the API.AI console. In the pane on the right, type "begin" and hit enter. You should see a response from the welcome intent asking you if you want to play. Go ahead and type "yes" and try out the game!

Create your working space

Cloud Functions for Firebase gives you some tools to deploy JavaScript that you write on your computer into the Google cloud. Once deployed, it will run in a node.js environment when invoked. The instructions here will set up that space on your computer.

Download and install node.js and the Firebase CLI

If you do not already have these installed, download and install node.js and the Firebase CLI. Once installed, you should be able to run them from your command line like this to check their versions:

$ node --version
$ firebase --version

In general, you should always make sure to keep the Firebase CLI up to date with the following command:

$ npm install -g firebase-tools

Create and initialize your Cloud Functions workspace

Now, create a folder to hold your project:

$ mkdir firebase-assistant-codelab
$ cd firebase-assistant-codelab

To authenticate and get access to your existing project:

$ firebase login

You should see a browser window pop up asking you to allow some permissions:

After you allow the permissions, initialize your project workspace:

$ firebase init

The Firebase CLI can deploy a few different types of things, but you're only going to be using Cloud Functions in this codelab. You can use the arrow keys and spacebar to deselect Hosting and Database rules:

Answer all the following questions using the default values, hitting enter whenever prompted:

Select the Firebase project you created earlier (note that there may be other projects listed here), navigating with the arrow keys if necessary to find it. Press the enter key to select the project. From there, accept any default responses.

You'll now have a "functions" directory ready to hold your code. There is a default index.js, which is the entry point to your Cloud Functions code. Open it in a code editor and replace its contents with the following code:

/*
 * Copyright 2017 Google Inc. All Rights Reserved.
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *      http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
'use strict';

process.env.DEBUG = 'actions-on-google:*';

const Assistant = require('actions-on-google').ApiAiAssistant;
const functions = require('firebase-functions');
const admin = require('firebase-admin');
admin.initializeApp(functions.config().firebase);

const know = admin.database().ref('/animal-knowledge');
const graph = know.child('graph');

// API.AI Intent names
const PLAY_INTENT = 'play';
const NO_INTENT = 'discriminate-no';
const YES_INTENT = 'discriminate-yes';
const GIVEUP_INTENT = 'give-up';
const LEARN_THING_INTENT = 'learn-thing';
const LEARN_DISCRIM_INTENT = 'learn-discrimination';

// Contexts
const WELCOME_CONTEXT = 'welcome';
const QUESTION_CONTEXT = 'question';
const GUESS_CONTEXT = 'guess';
const LEARN_THING_CONTEXT = 'learn-thing';
const LEARN_DISCRIM_CONTEXT = 'learn-discrimination';
const ANSWER_CONTEXT = 'answer';

// Context Parameters
const ID_PARAM = 'id';
const BRANCH_PARAM = 'branch';
const LEARN_THING_PARAM = 'learn-thing';
const GUESSABLE_THING_PARAM = 'guessable-thing';
const LEARN_DISCRIMINATION_PARAM = 'learn-discrimination';
const ANSWER_PARAM = 'answer';
const QUESTION_PARAM = 'question';

exports.assistantcodelab = functions.https.onRequest((request, response) => {
   console.log('headers: ' + JSON.stringify(request.headers));
   console.log('body: ' + JSON.stringify(request.body));

   const assistant = new Assistant({request: request, response: response});

   let actionMap = new Map();
   actionMap.set(PLAY_INTENT, play);
   actionMap.set(NO_INTENT, discriminate);
   actionMap.set(YES_INTENT, discriminate);
   actionMap.set(GIVEUP_INTENT, giveUp);
   actionMap.set(LEARN_THING_INTENT, learnThing);
    actionMap.set(LEARN_DISCRIM_INTENT, learnDiscrimination);

    // Route the request to the handler registered for the matched intent.
    assistant.handleRequest(actionMap);

   function play(assistant) {
       const first_ref = know.child('first');
       first_ref.once('value', snap => {
           const first = snap.val();
           console.log(`First: ${first}`);
           graph.child(first).once('value', snap => {
               const speech = `<speak>
Great! Think of an animal, but don't tell me what it is yet. <break time="3"/>
Okay, my first question is: ${snap.val().q}
</speak>`;

                const parameters = {};
                parameters[ID_PARAM] = snap.key;
                assistant.setContext(QUESTION_CONTEXT, 5, parameters);
                assistant.ask(speech);
            });
        });
    }

   function discriminate(assistant) {
       const priorQuestion = assistant.getContextArgument(QUESTION_CONTEXT, ID_PARAM).value;

       const intent = assistant.getIntent();
       let yes_no;
       if (YES_INTENT === intent) {
           yes_no = 'y';
       } else {
            yes_no = 'n';
        }

       console.log(`prior question: ${priorQuestion}`);

       graph.child(priorQuestion).once('value', snap => {
           const next = snap.val()[yes_no];
           graph.child(next).once('value', snap => {
               const node = snap.val();
               if (node.q) {
                   const speech = node.q;

                   const parameters = {};
                   parameters[ID_PARAM] = snap.key;
                    assistant.setContext(QUESTION_CONTEXT, 5, parameters);
                    assistant.ask(speech);
               } else {
                   const guess = node.a;
                   const speech = `Is it a ${guess}?`;

                   const parameters = {};
                   parameters[ID_PARAM] = snap.key;
                   parameters[BRANCH_PARAM] = yes_no;
                    assistant.setContext(GUESS_CONTEXT, 5, parameters);
                    assistant.ask(speech);
                }
            });
        });
    }

   function giveUp(assistant) {
       const priorQuestion = assistant.getContextArgument(QUESTION_CONTEXT, ID_PARAM).value;
       const guess = assistant.getContextArgument(GUESS_CONTEXT, ID_PARAM).value;
       console.log(`Priorq: ${priorQuestion}, guess: ${guess}`);

       const speech = 'I give up!  What are you thinking of?';

       const parameters = {};
       parameters[LEARN_THING_PARAM] = true;
        assistant.setContext(LEARN_THING_CONTEXT, 2, parameters);
        assistant.ask(speech);
    }

   function learnThing(assistant) {
       const priorQuestion = assistant.getContextArgument(QUESTION_CONTEXT, ID_PARAM).value;
       const guess = assistant.getContextArgument(GUESS_CONTEXT, ID_PARAM).value;
       const branch = assistant.getContextArgument(GUESS_CONTEXT, BRANCH_PARAM).value;
       const new_thing = assistant.getArgument(GUESSABLE_THING_PARAM);

       console.log(`Priorq: ${priorQuestion}, guess: ${guess}, branch: ${branch}, thing: ${new_thing}`);

       const q_promise = graph.child(priorQuestion).once('value');
       const g_promise = graph.child(guess).once('value');
       Promise.all([q_promise, g_promise]).then(results => {
           const q_snap = results[0];
           const g_snap = results[1];

           // TODO codelab-1: set the proper contexts to learn the differentiation
            const speech = `I don't know how to tell a ${new_thing} from a ${g_snap.val().a}!`;
            assistant.ask(speech);
        });
    }

   function learnDiscrimination(assistant) {
       const priorQuestion = assistant.getContextArgument(QUESTION_CONTEXT, ID_PARAM).value;
       const guess = assistant.getContextArgument(GUESS_CONTEXT, ID_PARAM).value;
       const branch = assistant.getContextArgument(GUESS_CONTEXT, BRANCH_PARAM).value;
       const answer =  assistant.getContextArgument(ANSWER_CONTEXT, ANSWER_PARAM).value;
       const question = assistant.getArgument(QUESTION_PARAM);

       console.log(`Priorq: ${priorQuestion}, answer: ${answer}, guess: ${guess}, branch: ${branch}, question: ${question}`);

       const a_node = graph.push({
            a: answer
        });

       const q_node = graph.push({
           q: `${question}?`,
           y: a_node.key,
            n: guess
        });

       let predicate = 'a';
       if (['a','e','i','o','u'].indexOf(answer.charAt(0)) != -1) {
            predicate = 'an';
        }

       const update = {};
       update[branch] = q_node.key;
       graph.child(priorQuestion).update(update).then(() => {
           // TODO codelab-2: give the user an option to play again or end the conversation
            const speech = "Ok, thanks for the information!";
            assistant.ask(speech);
        });
    }
});

This function responds to HTTPS requests to a dedicated host for your project. You can see that work after you deploy the code.
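
The dispatch at the top of the function follows a simple pattern: a Map from intent names to handler functions, with the request's matched intent choosing which handler runs. Stripped of the Assistant plumbing, the idea looks like this (the handlers here are hypothetical stand-ins):

```javascript
// Map each intent name to the function that handles it.
const handlers = new Map();
handlers.set('play', () => 'Think of an animal!');
handlers.set('give-up', () => 'I give up! What are you thinking of?');

// Look up and invoke the handler for the matched intent.
function handleRequest(intent) {
  const handler = handlers.get(intent);
  if (!handler) throw new Error(`No handler for intent: ${intent}`);
  return handler();
}

console.log(handleRequest('play')); // → Think of an animal!
```

The actions-on-google library's `assistant.handleRequest(actionMap)` does this lookup for you, pulling the intent name out of the API.AI request body.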

This function also requires the actions-on-google Node.js module, which needs to be installed into the project. Before deploying, you'll need to make sure the proper NPM modules are installed in the functions directory. Use npm to install the dependency in the functions/ directory:

$ cd functions
$ npm install --save actions-on-google

Deploy the Cloud Functions code

Every time you make changes to your functions, you will need to deploy them to the Google cloud with the following command:

$ firebase deploy

It can take some time to deploy - please be patient.

When the deploy is complete, the CLI will print a message to the console with the URL of the endpoint where your function will respond. Copy that URL from the terminal into a browser to access it.

This function isn't meant to be accessed by a browser, but at least we know it's available for queries. Instead, API.AI will call it to fulfill requests made on behalf of the end user.

Configure API.AI project to point to Cloud Function endpoint

Now that you have a working endpoint, configure API.AI to use it for fulfillment. Once again, copy the URL of the endpoint from the CLI output into the API.AI project. To find the correct place to paste it:

  1. In the left navigation pane (you may need to expand it via the "hamburger" in the upper left), click on Fulfillment.
  2. The Webhook should already be enabled; if it isn't, toggle the ENABLED switch.
  3. Replace or fill in the URL field with the URL you copied after the completion of `firebase deploy`.
  4. Don't forget to click Save!

After you save the change, you can start a test conversation to see it work. As you test the app, you may see some XML tags come through (for example, <speak>...</speak>). We're using something called Speech Synthesis Markup Language (SSML) to add structure and pauses to the responses. The API.AI console doesn't support it, and so shows the raw markup. However, we'll see the effect of using SSML when we get to the Google Assistant integration.
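
For reference, the SSML in these responses is just plain markup wrapped in a <speak> element, with <break> tags for timed pauses. A tiny helper (hypothetical, not part of the codelab code) shows the shape:

```javascript
// Wrap text in SSML with a trailing pause of the given length in seconds.
function ssml(text, pauseSeconds) {
  return `<speak>${text} <break time="${pauseSeconds}"/></speak>`;
}

console.log(ssml('Think of an animal.', 3));
// → <speak>Think of an animal. <break time="3"/></speak>
```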

Learn how to differentiate

If you tried to go all the way through the conversation, you may have noticed that the app doesn't actually learn the correct answer when it guesses wrong. Since that defeats half the purpose of this Assistant App, let's teach it how to learn.

Search index.js for // TODO codelab-1 and replace it and the line following it with this code:

const speech = `
I need to know how to tell a ${new_thing} from a ${g_snap.val().a} using a yes-no question.
The answer must be "yes" for ${new_thing}. What question should I use?
`;

const discrmParameters = {};
discrmParameters[LEARN_DISCRIMINATION_PARAM] = true;
assistant.setContext(LEARN_DISCRIM_CONTEXT, 2, discrmParameters);

const answerParameters = {};
answerParameters[ANSWER_PARAM] = new_thing;
assistant.setContext(ANSWER_CONTEXT, 2, answerParameters);



Since we already have the intents in the API.AI project set up properly, this ensures that the Cloud Function is setting up the proper contexts and parameters on those contexts. There are three things you should note in this code:

  1. Following the Actions on Google best practices, we change the App's prompt to the user to ensure that there's a clear expectation for what we want them to say next.
  2. We set two API.AI contexts, learn-discrimination and answer. The lifespan on these contexts is kept pretty short -- two -- to prevent this intent from firing again after the user gave us their question.
  3. For each of the contexts, we set parameters to carry forward into the next intent: learn-discrimination is set to keep us within the "learn differentiation" path and answer is set to carry forward the new animal the user wants us to learn about.
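
The other half of the learning step happens in learnDiscrimination, which splices a new question node into the knowledge graph. Stripped of Firebase, the graph surgery looks roughly like this (the keys, animals, and the `learn` helper are made up for illustration; the real code uses `graph.push()` to generate keys):

```javascript
// A toy knowledge graph: q1's "yes" branch currently leads to the dog guess.
const graph = {
  q1: { q: 'Does it have four legs?', y: 'a_dog', n: 'a_snake' },
  a_dog: { a: 'dog' },
  a_snake: { a: 'snake' }
};

// Insert a new question between the parent branch and the wrong guess:
// its "yes" branch points at the new animal, its "no" branch at the old guess.
function learn(graph, priorQuestion, branch, wrongGuessKey, newAnimal, question) {
  const aKey = `a_${newAnimal}`;          // stands in for graph.push()
  graph[aKey] = { a: newAnimal };
  const qKey = `q_${newAnimal}`;
  graph[qKey] = { q: `${question}?`, y: aKey, n: wrongGuessKey };
  graph[priorQuestion][branch] = qKey;    // repoint the parent's branch
}

// The user was thinking of a cat, but we guessed dog (q1's "y" branch).
learn(graph, 'q1', 'y', 'a_dog', 'cat', 'Does it meow');
console.log(graph.q1.y);  // → q_cat
console.log(graph.q_cat); // → { q: 'Does it meow?', y: 'a_cat', n: 'a_dog' }
```

Because the wrong guess is preserved on the "no" branch, the tree only ever grows; nothing the app has learned is lost.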

Test the addition

Now our Assistant App should be able to learn new things. Deploy the function again with firebase deploy and then run through the conversation again forcing it into the learning scenario:

Play the game

The only way to tell if our App is working well is, of course, to test it! Play through the game a few times with different outcomes and different responses. Instead of just answering yes, try "sure" or maybe something less definitive like "check." What happens?

Also pay attention to what happens at the end of the game. Does it feel like a natural conclusion to the game? If you were having a conversation, would it feel awkward?

Training and supporting multiple inputs

One of the benefits of API.AI is that it attempts to match approximate queries to your intents. That is, users aren't always going to say exactly what you expect them to and so the platform tries to learn and match user queries with intents -- even if you didn't specify that exact match! However, it works best when each intent has a wide variety of source material for "user says" queries.

Open up the "Discriminate Yes" intent and expand (if it isn't already) the User Says section. You'll see that we only support "Yes". No wonder it wasn't able to respond to something like "check"! Take a moment to fill in a wider variety of ways users could positively answer a question.

Completing the conversation naturally

Towards the end of the conversation, you probably noticed that it just ends with a simple "Ok, thanks for the information!" What should the user do next? Are we done?

In a real conversation, we might expect one of two things: either (1) some kind of terminator like "alright, bye now!" or (2) an offer to continue the conversation. Let's build our Assistant App to handle both.

Open up the Function again, index.js, and look for // TODO codelab-2. Replace it and the line following it with:

const speech = `<speak>
OK, thanks for the information! I'll remember to ask "${question}" to see if you're thinking of ${predicate} ${answer}.
<break time="1"/>
Would you like to play again?
</speak>`;
assistant.setContext(WELCOME_CONTEXT, 1);
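
The ${predicate} in that response comes from the small vowel check in learnDiscrimination, which picks "a" or "an" for the animal's name. As a standalone function (hypothetical name) it would look like:

```javascript
// Choose the indefinite article based on the noun's first letter.
function article(noun) {
  return ['a', 'e', 'i', 'o', 'u'].indexOf(noun.charAt(0)) !== -1 ? 'an' : 'a';
}

console.log(article('elephant')); // → an
console.log(article('cat'));      // → a
```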

Next, go back to the API.AI console, and add a new intent:

We'll call this intent game-over and have it represent a user saying they do not want to play the game. Since there are multiple points in the app where a user can say "no", we'll distinguish this one by setting an input context of welcome.

As we did in the Discriminate Yes intent, you should add a wide range of variations on "I don't want to play again." In the interest of time for this codelab, you can just use that single phrase or upload the game-over.json file from the solution directory via the Upload Intent feature:

If you created the intent manually, scroll to the bottom of the intent, click on the "Actions on Google" header and make sure the "End Conversation" box is checked. This tells the Google Assistant that our app has nothing more for the user and closes the mic.

Until now, we've just been using API.AI's built-in console for testing. You may have noticed that some things, like the SSML we used in the previous step, come through as raw code. Fortunately, the Google Assistant is able to handle it as you'd expect. This is also your first step toward getting this Assistant App onto platforms like Google Home.

From the API.AI console in the left navigation menu, select "Integrations." Here, you'll be able to select from a wide variety of integrations with other platforms. For this codelab, we'll be looking at the Actions on Google integration. Enable that integration now.

Click your project to import it. When the import succeeds, leave this page, go back to your API.AI console, and return to the integration. This time you'll see your default intent selected. Click AUTHORIZE.

In the settings form that appears, leave everything as default and click TEST.

Once it creates the test, click View Test. This takes you to the Google Assistant simulator, where you can type the "use my test app" command to trigger the test app.

And now you're ready to test!

Once you're in the web simulator, try out a full chat again and notice how things like the SSML are actually parsed and how our Assistant App is invoked.

Thanks for taking the time to go through the Firebase Assistant Codelab! If you're interested in designing your own Assistant App, take a look through the Actions on Google documentation at https://developers.google.com/actions. In particular, check out our best practices within the Design Walkthrough and the Design Checklist. And finally, if you think your new agent is ready for the public, we've outlined guidance and the process on the Distribution page.