Actions on Google is a developer platform that lets you create software to extend the functionality of Google Assistant, Google's virtual personal assistant, across more than 500 million devices, including smart speakers, phones, cars, TVs, headphones, and more. Users engage Assistant in conversation to get things done, like buying groceries or booking a ride. (For a complete list of what's possible, see the Assistant directory.) As a developer, you can use Actions on Google to easily create and manage delightful and effective conversational experiences between users and your third-party service.

What you'll build

In this codelab, you'll refine a Conversational Action so that it:

The following screenshots show an example of the conversational flow with the Action that you'll build:

What you'll learn

What you'll need

The following tools must be in your environment:

Familiarity with JavaScript (ES6) is also strongly recommended, although not required, to understand the webhook code that you'll use.

Optional: Get the sample code

You can optionally get the full project code for this codelab from the GitHub repository.

The Firebase command-line interface allows you to deploy your Actions project to Cloud Functions.

To install or upgrade the command-line interface, run the following npm command:

npm install -g firebase-tools

To verify that the command-line interface has been installed correctly, open a terminal and run the following command:

firebase --version

Make sure the version of the Firebase command-line interface is above 3.5.0 so it has all the latest features required for Cloud Functions. If it's not, run npm install -g firebase-tools to upgrade.

Authorize the Firebase command-line interface by running the following command:

firebase login

For this codelab, you'll start where the Level 2 codelab ended.

Download your base files

If you don't have the codelab cloned locally, run the following command to clone the GitHub repository for the codelab:

git clone https://github.com/actions-on-google/codelabs-nodejs

For the sake of clarity, rename the /level2-complete directory to /level3. You can do so by using the mv command in your terminal, as shown below:

$ cd codelabs-nodejs
$ mv ./level2-complete ./level3

Set up your project and agent

Do the following:

  1. Open the Actions console.
  2. Click New project.
  3. Type in a Project name, like "actions-codelab-3." This name is for your internal reference. Later, you can set an external name for your project. Click Create Project.
  4. Rather than pick a category, scroll down to the More options section and click the Conversational card.
  5. Click Build your Action to expand the options and select Add Action(s).
  6. Click Add your first Action.
  7. On the Create Action dialog, select Custom Intent, then click Build to launch the Dialogflow console.
  8. In the Dialogflow console's Create Agent page, click Create.
  9. Click the gear icon ⚙ in the left navigation bar to open the agent's settings.
  10. Click Export and Import.
  11. Click Restore From Zip.
  12. Upload the codelab-level-two.zip file from the /level3 directory you created earlier.
  13. Type "RESTORE" and click Restore.
  14. Click Done.

Deploy your fulfillment

Now that your Actions project and Dialogflow agent are ready, do the following to deploy your local index.js file using the command-line interface:

  1. In a terminal, navigate to the /level3/functions directory of your base files clone.
  2. Using the Actions project ID, run the following command:
firebase use <PROJECT_ID>
  3. Run the following command in the terminal to install dependencies:
npm install
  4. Run the following command in the terminal to deploy your webhook code to Firebase:
firebase deploy

After a few minutes, you should see a message that says, "Deploy complete!" It indicates that you deployed your webhook to Firebase.

Retrieve the deployment URL

You need to provide Dialogflow with the URL to the cloud function. To retrieve the URL, follow these steps:

  1. Open the Firebase Console.
  2. Select your Actions project from the list of options.
  3. Navigate to Develop > Functions in the left navigation bar. If you're prompted to "Choose data sharing settings," then click Do this later.
  4. Under the Dashboard tab, you should see an entry for "dialogflowFirebaseFulfillment" with a URL under Trigger. Copy the URL.

Set the URL in Dialogflow

Now, you need to update your Dialogflow agent to use your webhook for fulfillment. To do so, follow these steps:

  1. Open the Dialogflow console. (You can close the Firebase console if you'd like.)
  2. Click Fulfillment in the left navigation bar.
  3. Enable Webhook.
  4. Paste the URL you copied from the Firebase dashboard if it doesn't already appear.
  5. Click Save.

Verify your project is correctly set up

At this point, users can start a conversation by explicitly invoking your Action. Your fulfillment first uses the actions_intent_PERMISSION helper intent to obtain the user's display name with permission. Once users are mid-conversation, they can trigger the "favorite color" intent by providing a color. Then, they receive a lucky number with a sound effect. Lastly, they can provide a "favorite fake color" that matches the "fakeColor" custom entity and receive a basic card in response.
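For reference, the Level 2 fulfillment handles the permission result roughly like the following sketch (the exact code is in your cloned index.js; the handler signature follows the client library's convention, where the third parameter reports the helper's result):

// Handle the Dialogflow intent named 'actions_intent_PERMISSION'.
// The third parameter is a boolean that tells you whether the user
// granted the requested permission.
app.intent('actions_intent_PERMISSION', (conv, params, permissionGranted) => {
  if (!permissionGranted) {
    conv.ask(`OK, no worries. What's your favorite color?`);
  } else {
    conv.data.userName = conv.user.name.display;
    conv.ask(`Thanks, ${conv.data.userName}. What's your favorite color?`);
  }
});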

To test out your Action in the Actions simulator, do the following:

  1. In the Dialogflow console left navigation bar, click on Integrations > Google Assistant.
  2. Make sure Auto-preview changes is enabled, then click Test to update your Actions project.
  3. The Actions simulator loads your Actions project. To test your Action, type "Talk to my test app" in the Input field and press enter.
  4. You should see a message that says, "Hi there, to get to know you better, I'll just need to get your name from Google. Is that ok?"
  5. Type "yes" and press enter.
  6. You should see a response with your Google Account name. Try following the prompts to continue the conversation while making sure that your fulfillment has a response for each input.

Before going further, take a moment to consider the first step when building any Action—writing sample dialogs.

Before you start coding or even writing conversational flows, take the time to write (and say aloud) sample interactions between your user and Action. Write the interactions for "happy paths" when the conversation is progressing as expected, but also write them for when things go wrong, such as when the user provides some unexpected input. You can take a look at this blog post for more tips about writing sample dialogs.

Write the sample dialogs at the beginning of the development lifecycle (before you start coding), then iterate on your design as you add new conversational paths to your Action.

Here's an example of how you might represent the "happy path" for this codelab as a sample dialog:

User:

"Talk to my test app."

Action:

"Hi there. To get to know you better, I'll need to get your name from Google. Is that ok?"

User:

"Yes."

Action:

"Thanks, <name>. What's your favorite color?"

User:

"It's...umm...it's green!"

Action:

"Your lucky number is 5. Would you like to hear some fake colors?"

User:

"Yes."

Action:

"Which color, indigo taco, pink unicorn, or blue grey coffee?"

User:

"Pink unicorn."

Action:

"Here's the color." <shows image of color>

Here's a visual representation of that flow:

Exercise: Write sample dialog

As a short exercise, take a minute to think about how you might write the sample dialog to model other interactions when the user provides unexpected responses to the Action's prompts or if the user is silent.

Here's an example of such a dialog:

User:

"Talk to my test app."

Action:

"Hi there. To get to know you better, I'll need to get your name from Google. Is that ok?"

User:

"Nope."

Action:

"OK, no worries. What's your favorite color?"

User:

"It's... umm...it's a boat!"

Action:

"Sorry, what was that?"

User:

"I mean green."

Action:

"Your lucky number is 5. Would you like to hear some fake colors?"

User:

"Yes."

Action:

"Which color, indigo taco, pink unicorn, or blue grey coffee?"

User:

<silence>

Action:

"Which color would you like to hear about?"

User:

"Pink unicorn."

Action:

"Here's the color." <shows image of color> "Would you like to hear about another fake color?"

User:

"No."

Action:

"Let me know when you want to talk about colors again!"

It's often helpful for your Action to store data between conversation sessions with the same user. Your Action can ask users for their preferences and remember them for later use, which lets you personalize future conversations with that user. For example, an Action that gives users a weather report based on a zip code could ask users whether they'd like the Action to remember their zip code for later conversations.

The conv.user.storage object is a data structure provided by the Actions on Google Node.js client library for saving data across conversations for a particular user. In this section, you'll use the feature to cheerfully greet the user by name whenever they start a new conversation with your Action.
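Before you implement it, here's a hedged sketch of how the earlier weather example might use this object. The intent name and zipCode parameter are illustrative, not part of this codelab:

// Illustrative only: persist a user's zip code across conversations.
app.intent('remember zip code', (conv, {zipCode}) => {
  conv.user.storage.zipCode = zipCode; // survives beyond this conversation
  conv.ask(`Got it. I'll remember ${zipCode} for your next weather report.`);
});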

Implement the fulfillment

Open your index.js file in an editor and replace all instances of conv.data with conv.user.storage.
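If you'd rather make that replacement from the terminal than in your editor, one option is a quick sed pass (GNU sed syntax shown; on macOS, use sed -i '' instead of sed -i, and review the result before deploying):

sed -i 's/conv\.data/conv.user.storage/g' index.js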

Update your default welcome intent handler to use the conv.user.storage object by replacing this code:

index.js

// Handle the Dialogflow intent named 'Default Welcome Intent'.
app.intent('Default Welcome Intent', (conv) => {
 // Asks the user's permission to know their name, for personalization.
 conv.ask(new Permission({
   context: 'Hi there, to get to know you better',
   permissions: 'NAME',
 }));
});

with this code:

index.js

// Handle the Dialogflow intent named 'Default Welcome Intent'.
app.intent('Default Welcome Intent', (conv) => {
 const name = conv.user.storage.userName;
 if (!name) {
   // Asks the user's permission to know their name, for personalization.
   conv.ask(new Permission({
     context: 'Hi there, to get to know you better',
     permissions: 'NAME',
   }));
 } else {
   conv.ask(`Hi again, ${name}. What's your favorite color?`);
 }
});

Test your conversation data storage

In the terminal, run the following command to deploy your updated webhook code to Firebase:

firebase deploy

To test out your Action in the Actions simulator, do the following:

  1. In the Actions console, navigate to Test.
  2. Type "Talk to my test app" in the Input field and press enter.
  3. Type "Yes" and press enter.
  4. Type "Cancel" and press enter.
  5. Type "Talk to my test app" again in the Input field and press enter to start another conversation.

At the start of the second conversation, your Action should remember your name from the first time that you granted permission.

On smart speakers or other surfaces without a screen, there may not always be an obvious visual indicator of whether the device is waiting for a user response. Users may not realize your Action is waiting for them to respond, so it's an important design practice to implement no-input event handling to remind users that they need to respond.

Set up Dialogflow

  1. Set up a new intent to handle the no-input event. In the Dialogflow console, click + next to Intents in the left navigation bar to create a new intent.
  2. You can name the new intent whatever you'd like. In the example, it's named actions_intent_NO_INPUT.
  3. Under Events, add a new event called actions_intent_NO_INPUT.
  4. Toggle the webhook fulfillment switch and click Save.

Implement the fulfillment

Open your index.js file in an editor and add the following code:

index.js

// Handle the Dialogflow NO_INPUT intent.
// Triggered when the user doesn't provide input to the Action
app.intent('actions_intent_NO_INPUT', (conv) => {
  // Use the number of reprompts to vary response
  const repromptCount = parseInt(conv.arguments.get('REPROMPT_COUNT'));
  if (repromptCount === 0) {
    conv.ask('Which color would you like to hear about?');
  } else if (repromptCount === 1) {
    conv.ask(`Please say the name of a color.`);
  } else if (conv.arguments.get('IS_FINAL_REPROMPT')) {
    conv.close(`Sorry we're having trouble. Let's ` +
      `try this again later. Goodbye.`);
  }
});

Notice that you took advantage of a property of the conversation object called REPROMPT_COUNT. The value lets you know how many times the user has been prompted so that you can modify your message each time. In the code snippet, the maximum reprompt count is set at two, at which point the conversation ends. That's a best practice, as prompting the user more than three times can increase frustration and stall the conversation.

Test your custom reprompts

In the terminal, run the following command to deploy your updated webhook code to Firebase:

firebase deploy

To test your custom reprompt in the Actions simulator, follow these steps:

  1. In the Actions console, navigate to Test.
  2. Make sure to end any conversations in progress; then, under Surface, select the Speaker icon.

  3. Type "Talk to my test app" in the Input field and press enter. If your Action doesn't remember your name, type "yes" and press enter.
  4. Click No Input to the right of the Input field to simulate a nonresponse.

Your Action should respond with a custom reprompt message every time that you simulate a nonresponse instead of entering a color, eventually exiting after the third reprompt.

Your Action should allow users to quickly bow out of conversations, even if they haven't followed the conversation path all the way through. By default, Actions on Google exits the conversation and plays an earcon whenever the user utters "exit," "cancel," "stop," "nevermind," or "goodbye."

You can customize that behavior by registering for the actions_intent_CANCEL event in Dialogflow and defining a custom response.
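This codelab configures the custom response directly in Dialogflow, as described below. If you ever prefer to generate the response from fulfillment instead, a minimal webhook handler could look like this sketch (it assumes an intent named actions_intent_CANCEL with webhook fulfillment enabled, which is not how this codelab sets it up):

// Optional sketch: respond to the cancel event from the webhook instead
// of a static Dialogflow text response.
app.intent('actions_intent_CANCEL', (conv) => {
  conv.close('Let me know when you want to talk about colors again!');
});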

In this section, you'll create a new cancel intent in Dialogflow and add a suitable final response message.

Set up Dialogflow

  1. Set up a new intent for handling the user exiting. In the Dialogflow console, click the + button next to Intents in the left navigation bar to create a new intent.
  2. You can name this new intent whatever you'd like. In the example, it's named actions_intent_CANCEL.
  3. Under Events, add a new event called actions_intent_CANCEL.
  4. Under Responses, add a Text response like, "Let me know when you want to talk about colors again!" Note that a good design practice is to keep exit text responses shorter than 60 characters.
  5. Toggle the Set this intent as end of conversation switch under Add Responses.
  6. Click Save.

Test your custom exit

To test your custom exit prompt in the Actions simulator, follow these steps:

  1. In the Actions console, navigate to Test.
  2. Type "Talk to my test app" in the Input field and press enter. If your Action doesn't remember your name, type "yes."
  3. Type "Goodbye" and press enter.

Your Action should respond with your custom exit prompt and end the conversation.

In this section, you'll enhance your Action by adding the ability for users to view and select a fake color option on devices with screen output.

Design the conversational experience

It's important to design conversational experiences to be multi-modal, which means that users can participate via voice and text, as well as other interaction modes that their devices support (for example, touchscreen).

Always start with designing the conversation and writing sample dialogs for the voice-only experience. Then, design the multi-modal experience, which involves adding visuals as enhancements where it makes sense.

For devices with screen output, the Actions on Google platform provides several types of visual components that you can optionally integrate into your Action to provide detailed information to users.

One common use case for adding multi-modal support is when users need to make a choice between several available options during the conversation.

In your conversation design, there's a decision point in the flow where the user needs to pick a fake color. You'll enhance this interaction by adding a visual component.

A good candidate for representing choices visually is the carousel. The component lets your Action present a selection of items for users to pick, where each item is easily differentiated by an image.

Set up Dialogflow

Make the following changes in the Dialogflow console to add the carousel.

Enable webhooks calls for the follow-up intent

When your favorite color - yes follow-up intent is matched, the user is provided with the carousel, which is a visual element. As a best practice, you should check that the user's current device has a screen before presenting visual elements. You'll update your favorite color - yes follow-up intent to perform that check.

  1. In the left navigation bar of the Dialogflow console, click Intents.
  2. Click the arrow next to the favorite color intent and select favorite color - yes.
  3. At the bottom of the page, under the Fulfillment section, toggle the Enable webhook call for this intent option.

  4. Click Save at the top of the page.

Update intents for handling visual selection

You'll need to update the favorite fake color intent in the Dialogflow console to handle the user's selection. To do so, follow these steps:

  1. In the Dialogflow console left navigation bar, click on Intents and select the favorite fake color intent.
  2. Under Events, add actions_intent_OPTION. Dialogflow will look for that specific event when a user selects an option from the carousel.

  3. Click Save at the top of the page.

Implement the fulfillment

To implement the fulfillment in your webhook, perform the following steps.

Load dependencies

To support the multi-modal conversation experience, you need to provide variable responses based on the surface capabilities of the device. You do that by checking the conv.screen property in your fulfillment.

In the index.js file, update the require() function to import Carousel and Image from the actions-on-google package so that your imports look like this:

index.js

// Import the Dialogflow module and response creation dependencies
// from the Actions on Google client library.
const {
  dialogflow,
  BasicCard,
  Permission,
  Suggestions,
  Carousel,
  Image,
} = require('actions-on-google');

Build the carousel

Next, define the fakeColorCarousel() function to build the carousel.

In the index.js file, add a fakeColorCarousel() function with the following code:

index.js

// If the user is interacting with the Action on a device with a screen,
// the fakeColorCarousel function returns a carousel of color cards.
const fakeColorCarousel = () => {
  const carousel = new Carousel({
    items: {
      'indigo taco': {
        title: 'Indigo Taco',
        synonyms: ['indigo', 'taco'],
        image: new Image({
          url: 'https://storage.googleapis.com/material-design/publish/material_v_12/assets/0BxFyKV4eeNjDN1JRbF9ZMHZsa1k/style-color-uiapplication-palette1.png',
          alt: 'Indigo Taco Color',
        }),
      },
      'pink unicorn': {
        title: 'Pink Unicorn',
        synonyms: ['pink', 'unicorn'],
        image: new Image({
          url: 'https://storage.googleapis.com/material-design/publish/material_v_12/assets/0BxFyKV4eeNjDbFVfTXpoaEE5Vzg/style-color-uiapplication-palette2.png',
          alt: 'Pink Unicorn Color',
        }),
      },
      'blue grey coffee': {
        title: 'Blue Grey Coffee',
        synonyms: ['blue', 'grey', 'coffee'],
        image: new Image({
          url: 'https://storage.googleapis.com/material-design/publish/material_v_12/assets/0BxFyKV4eeNjDZUdpeURtaTUwLUk/style-color-colorsystem-gray-secondary-161116.png',
          alt: 'Blue Grey Coffee Color',
        }),
      },
    },
  });
  return carousel;
};

Notice that the carousel is built from an items object, where each item has several properties, including a title, synonyms, and an Image. The Image type contains a URL for the image to display, as well as alternative text for accessibility.

To identify which carousel card the user selected, use the keys of the items object—namely, "indigo taco," "pink unicorn," or "blue grey coffee."

Add the intent handler for 'favorite color - yes'

Next, you need to add a handler for the favorite color - yes follow-up intent to check whether the conv.screen property is true. If so, that indicates that the device has a screen. You can then send a response asking the user to select a fake color from the carousel by calling the ask() function with fakeColorCarousel() as the argument.

In the index.js file, add a check for a screen on the current surface by adding the following code to your fulfillment:

index.js

// Handle the Dialogflow intent named 'favorite color - yes'
app.intent('favorite color - yes', (conv) => {
 conv.ask('Which color, indigo taco, pink unicorn or blue grey coffee?');
 // If the user is using a screened device, display the carousel
 if (conv.screen) return conv.ask(fakeColorCarousel());
});

Support non-screened devices

If the surface capability check returned false, then your user is interacting with your Action on a device that doesn't have a screen. You should support as many different users as possible with your Action, so you're now going to add an alternate response that reads the color's description instead of displaying a visual element.

In the index.js file, add a screen capability check and fallback by replacing the following code:

index.js

// Handle the Dialogflow intent named 'favorite fake color'.
// The intent collects a parameter named 'fakeColor'.
app.intent('favorite fake color', (conv, {fakeColor}) => {
 // Present user with the corresponding basic card and end the conversation.
 conv.close(`Here's the color`, new BasicCard(colorMap[fakeColor]));
});

with this code:

index.js

// Handle the Dialogflow intent named 'favorite fake color'.
// The intent collects a parameter named 'fakeColor'.
app.intent('favorite fake color', (conv, {fakeColor}) => {
 fakeColor = conv.arguments.get('OPTION') || fakeColor;
 // Present the user with the corresponding basic card.
 conv.ask(`Here's the color.`, new BasicCard(colorMap[fakeColor]));
 if (!conv.screen) {
   conv.ask(colorMap[fakeColor].text);
 }
});

Test your carousel response

In the terminal, run the following command to deploy your updated webhook code to Firebase:

firebase deploy

To test your carousel response in the Actions simulator, follow these steps:

  1. In the Actions console, navigate to Test.
  2. Under Surface, select Smart Display.

  1. Type "Talk to my test app" in the Input field and press enter. If your Action doesn't remember your name, then type "yes" and press enter.
  2. Type "blue" and press enter.f
  3. Type "sure" and press enter.

You should see your carousel response appear under the Display tab on the right.

You can either type an option in the simulator or click on one of the carousel options to receive a card with more details about that color.

Test your nonvisual response

You should also test your response to see how it renders on a device without the screen capability. To test your response on a voice-only surface, follow these steps:

  1. Press the X in the upper-left corner of the simulator to exit the previous conversation.
  2. Under Surface, select Speaker.

  1. Type "Talk to my test app" in the Input field and press enter. If your Action doesn't remember your name, then type "yes" and press enter.
  2. Type "blue" and press enter.
  3. Type "sure" and press enter.
  4. Type "indigo taco" and press enter.

You should get a spoken response with a description corresponding to the color that you picked.

Your Action presents users with a multiple-choice question ("Which color, indigo taco, pink unicorn or blue grey coffee?") at the end of the conversation. Users should be able to see the other options they could have picked without having to invoke your Action again and navigate back through your conversation to the decision point.

Design the conversational experience

In this section, you'll create prompts that let a user choose to either pick another color or gracefully end the conversation.

Here's a sample dialog for the interaction scenario where the user wants to pick another fake color:

Action:

"Would you like to hear some fake colors?"

User:

"Yes."

Action:

"Which color, indigo taco, pink unicorn, or blue grey coffee?"

User:

"I like pink unicorn."

Action:

"Here's the color. Do you want to hear about another fake color?"

User:

"Yes please."

Action:

"Which color, indigo taco, pink unicorn, or blue grey coffee?"

Here's an example in which the user declines to pick another fake color:

Action:

"Would you like to hear some fake colors?"

User:

"Yes"

Action:

"Which color, indigo taco, pink unicorn, or blue grey coffee?"

User:

"I like pink unicorn."

Action:

"Here's the color. Do you want to hear about another fake color?"

User:

"No thanks."

Action:

"Goodbye, see you next time!"

Here's a visual representation of those sample dialogs:

To implement that flow, use follow-up intents, which Dialogflow matches based on the user's response to a particular intent. In your Action, you'll use follow-up intents to handle the user's "yes" or "no" answer to the question about hearing another fake color.

When using follow-up intents, your Action needs to be aware of the conversational context. That is, it needs to understand the statements leading up to a certain point in the conversation. Unless the user changes the subject, you can assume that the thread of conversation continues. Therefore, it's likely that your Action can use a user's previous utterances to resolve ambiguities and better understand their current utterances. For example, a flower ordering Action should understand that the user query "What about a half dozen?" is a follow-up to the user's previous utterance and interpret it as "How much does a bouquet of six roses cost?"
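As a concrete illustration, the client library surfaces Dialogflow contexts on conv.contexts. The following sketch is purely illustrative (the intent and context names are hypothetical, following Dialogflow's default <parentintentname>-followup naming) and shows how a follow-up handler could read parameters captured by its parent intent:

// Illustrative sketch: read the input context that Dialogflow attaches
// to a follow-up intent. The names below are hypothetical.
app.intent('order flowers - followup', (conv) => {
  const context = conv.contexts.get('orderflowers-followup');
  if (context) {
    // Parameters collected by the parent intent, e.g. the flower type.
    console.log(context.parameters);
  }
  conv.ask('How many roses would you like?');
});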

Set up Dialogflow

To follow your carousel selection with additional prompts, do the following:

  1. In the Dialogflow console left navigation bar, click on Intents.
  2. Hover your cursor over favorite fake color, then click Add follow-up intent. Do that twice, once selecting yes and again selecting no.

Click on the favorite fake color - no intent, and do the following:

  1. Under Responses, add "Goodbye, see you next time!" as a Text response.
  2. Turn on Set this intent as end of conversation.
  3. Click Save.

Click on Intents in the left navigation bar and click on the favorite fake color - yes intent. Then, do the following:

  1. Under Fulfillment, turn on Enable webhook call for this intent.
  2. Click Save.

Implement fulfillment

Next, you'll need to add a handler for the favorite fake color - yes follow-up intent.

In the index.js file, replace the following code:

index.js

// Handle the Dialogflow intent named 'favorite color - yes'
app.intent('favorite color - yes', (conv) => {
 conv.ask('Which color, indigo taco, pink unicorn or blue grey coffee?');
 // If the user is using a screened device, display the carousel
 if (conv.screen) return conv.ask(fakeColorCarousel());
});

with this code:

index.js

// Handle the Dialogflow follow-up intents
app.intent(['favorite color - yes', 'favorite fake color - yes'], (conv) => {
 conv.ask('Which color, indigo taco, pink unicorn or blue grey coffee?');
 // If the user is using a screened device, display the carousel
 if (conv.screen) return conv.ask(fakeColorCarousel());
});

Lastly, you'll add suggestion chips to the favorite fake color intent handler that trigger your two new follow-up intents.

In the index.js file, update the favorite fake color intent handler with suggestion chips by replacing the following code:

index.js

// Handle the Dialogflow intent named 'favorite fake color'.
// The intent collects a parameter named 'fakeColor'.
app.intent('favorite fake color', (conv, {fakeColor}) => {
 fakeColor = conv.arguments.get('OPTION') || fakeColor;
 // Present the user with the corresponding basic card.
 conv.ask(`Here's the color.`, new BasicCard(colorMap[fakeColor]));
 if (!conv.screen) {
   conv.ask(colorMap[fakeColor].text);
 }
});

with this code:

index.js

// Handle the Dialogflow intent named 'favorite fake color'.
// The intent collects a parameter named 'fakeColor'.
app.intent('favorite fake color', (conv, {fakeColor}) => {
  fakeColor = conv.arguments.get('OPTION') || fakeColor;
  // Present the user with the corresponding color response.
  if (!conv.screen) {
    conv.ask(colorMap[fakeColor].text);
  } else {
    conv.ask(`Here you go.`, new BasicCard(colorMap[fakeColor]));
  }
  conv.ask('Do you want to hear about another fake color?');
  conv.ask(new Suggestions('Yes', 'No'));
});

Test your carousel's follow-up prompt

In the terminal, run the following command to deploy your updated webhook code to Firebase:

firebase deploy

To test your follow-up prompt in the Actions simulator, do the following:

  1. In the Actions console, navigate to Test.
  2. Under Surface, select Smart Display.

  1. Type "Talk to my test app" in the Input field and hit enter. If your Action doesn't remember your name, then type "yes."
  2. Type "blue."
  3. Type "yes."
  4. Click one of the carousel options. You should be asked whether you want to pick another color, and see suggestion chips under the basic card that say Yes and No.

Clicking the Yes chip should show you the carousel again, and clicking the No chip should exit the conversation with a friendly message.

You should also test your response to see how it behaves on a device without the screen capability. To test your response on a different surface, do the following:

  1. Type or click "No" to exit the previous conversation.
  2. Under Surface, select the Speaker icon.

  1. Type "Talk to my test app" in the Input field and hit enter. If your Action doesn't remember your name, then type "yes."
  2. Type "blue."
  3. Type "sure."
  4. Type "Indigo taco." You should be asked whether you want to pick another color.

Responding with "yes" should prompt you with the three colors again and responding with "no" should exit the conversation with a friendly message.

Congratulations!

You covered the advanced skills necessary to build conversational user interfaces with Actions on Google!

Additional learning resources

You can explore the following resources for learning about Actions on Google:

Follow @ActionsOnGoogle on Twitter to keep up with the latest announcements, and tweet with #AoGDevs to share what you build!

Feedback survey

Before you go, please fill out this form.