Increase intent coverage and handle errors gracefully with generative fallback

1. Overview

Last Updated: 2023-08-07

What you'll build

In this codelab you'll build, deploy and configure a simple virtual agent in Dialogflow CX to assist traveling scuba divers with group bookings and private charters. The virtual agent will use Generative AI and Google's latest generative large language models (LLMs) to generate virtual agent responses.

What you'll learn

  • How to enable the relevant APIs
  • How Dialogflow automatically pre-fills page form parameter values from intent parameters
  • How to configure event handlers in Dialogflow
  • How to enable generative fallback on no-match event handlers used in flows and during parameter filling
  • How to configure your own text prompt to handle basic as well as agent specific conversational situations
  • How to write good intent and parameter descriptions to generate reprompt handlers for required parameters (in addition to user defined reprompts)
  • How to test your agent and simulate customer questions that trigger generative fallback

What you'll need

  • A Google Cloud Project
  • A browser such as Chrome

2. Getting set up

Before you can begin using the generative fallback feature in Dialogflow CX, you need to enable the Dialogflow API.

Enable Dialogflow API using the Cloud Console

  1. Open the Google Cloud console in your browser.
  2. In the Google Cloud console, navigate to the API Library to browse the APIs and services that can be enabled.
  3. Using the search bar at the top of the API Library page, search for Dialogflow API, then click the resulting service.
  4. Click the Enable button to enable the Dialogflow API in your Google Cloud project.

Using the gcloud CLI (alternative)

Alternatively, the API can be enabled using the following gcloud command:

gcloud services enable dialogflow.googleapis.com

If the API was successfully enabled, then you should see a message similar to the following:

Operation "operations/..." finished successfully.

Get the code

You will not create the virtual agent from scratch; instead, we provide an agent that you will restore from the Dialogflow CX console and then improve.

To download the source code:

  1. Open a new browser tab, go to the agent repository, and clone it from the command line.
  2. The initial agent has been exported as a JSON package. Unzip the file, inspect the agent settings, take a look at the flow definition Liveaboards.json, and finally browse through the flow pages, intents and entities.

3. Create a new agent

Open Dialogflow console

You'll use the Dialogflow CX console along with your Google Cloud project to perform the remaining steps in this codelab.

  1. In your browser, navigate to the Dialogflow CX console.
  2. Select the Google Cloud project that you want to use, or create a new project to use.
  3. You should see a list of agents in the Dialogflow CX console.

If this is your first time using Dialogflow CX, refer to the Dialogflow CX Documentation for more information on configuring your project and settings depending on your needs.

Create a new Dialogflow CX agent

  1. To restore the agent downloaded from the GitHub repo, you need to create a new agent. From the Dialogflow CX console, click Create new agent on the top right corner of the page.

Create a brand new agent

  2. Select the option Build your own agent.

Choose the option

  3. Complete the form with the agent settings below and click Create to create the agent.
  • As display name choose: Divebooker
  • As location choose: us-central1
  • Select your preferred time zone
  • Select en - English as default language
  4. Dialogflow will automatically open up the agent for you. We are not done yet!

Restore the Divebooker agent

  1. Go back to the agent list page and identify the agent you have just created. Click the options menu (⋮) and then click the Restore button.
  2. Select the Upload option and then drop or select the ZIP file you have previously downloaded from the GitHub repository.
  3. Click the Restore button to import the agent we have provided.


Well done! You have finished building your diving reservation virtual agent, and it's ready to help your customers. In the next section, you'll test it and see how good it is at answering user questions and assisting with booking requests.

4. Test the agent

Dialogflow provides a built-in simulator to chat with your agents and uncover bugs. For each turn, you can verify correct values for the triggered intent, the agent response, the active page, and the session parameters.

We will test a few scenarios, and for each one we will look at why the agent gives a particular response. Let's start with the first one.

Unresolved intent

  1. In the Dialogflow console and from within your agent, click Test Agent to open the Simulator.

Click Test Agent to open the Simulator

  2. Type a greeting to your agent such as Hello and ask what is a liveaboard?. The question does not match any intent, so a generic prompt like "Sorry I'm not sure how to help" is displayed. You can check that the sys.no-match-default built-in event was invoked by inspecting the original response in the Simulator.

Greet the agent and ask what a liveaboard is

Scroll down almost to the end of the JSON response. Notice that when searching for a matching intent, Dialogflow finds this is a NO_MATCH and raises a no-match event.

Check that sys.no-match-default event was raised by Dialogflow

  3. Switch to the Build tab and open the Start Page of the Liveaboards flow.


By default every flow has event handlers for the no-match and no-input built-in events. These event handlers are automatically created when you create a flow, and they cannot be deleted.

  4. Click on the sys.no-match-default event handler and scroll down to the Agent responses section. Dialogflow provides a list of alternative responses, but you can also define different types of response messages to provide the end-user with more than just text responses.

Look at the pre-defined agent responses

Let's move on to the happy path now!

The happy path

In this second case, pretend to be a diver who wants to book a diving cruise for a group of 12 people to the Galapagos Islands next year in July.

  1. In the Simulator panel click the Reset icon to start a new conversation with the agent.

Reset to start a new conversation

Consider changing to vertical view for a better UX

  2. Tell the agent you would like to book a charter to the Galapagos Islands and provide the details of your travel. You don't need to use the exact same prompts below, experiment!

Test the happy path

  3. Open the Start Page and click the head.send.group.request route. Scroll down to the Transition section, which tells Dialogflow the page to transition to when this intent is matched.

Transition to Collect Further Info page

  4. Close the Route definition and expand the page Collect Further Info. Notice the entry fulfillment and the list of parameters.

Collect Further Info page

For each page in Dialogflow CX you can define a form, which is a list of parameters that should be collected from the end-user for the page. Note that the agent didn't ask for the travel destination because we passed it as part of the initial input and destination is also an intent parameter. When a page initially becomes active, and during its active period, any form parameter with the same name as an intent parameter is automatically set to the session parameter value and the corresponding prompt is skipped.
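The prefill behavior described above can be sketched as follows (a simplified illustration only; the function and parameter names are hypothetical, not Dialogflow's actual implementation):

```python
def prefill_form(form_params, session_params):
    """Sketch of page-form prefill: any form parameter whose name matches
    a session parameter is set from the session value and its prompt is
    skipped; the remaining parameters still need to be collected."""
    filled = {}
    prompts_to_ask = []
    for name, prompt in form_params.items():
        if session_params.get(name) is not None:
            filled[name] = session_params[name]  # prompt skipped
        else:
            prompts_to_ask.append(prompt)
    return filled, prompts_to_ask

# destination was already captured as an intent parameter, so the agent
# only prompts for the parameters that are still missing.
form = {
    "destination": "Where would you like to go?",
    "number-of-guests": "How many guests?",
    "email-address": "What is your email address?",
}
session = {"destination": "Galapagos Islands"}
filled, prompts = prefill_form(form, session)
```

This is why the agent skipped the destination question in the happy-path test above.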

  5. Switch to the Manage tab and click the head.send.group.request intent under the Intents section. Look at the training phrases provided for this intent and the annotated parts of the training phrases.


  6. Consider the training phrase "I need to organize a trip to Costa Rica for 15 divers". "Costa Rica" is annotated with destination and "15" with number-of-guests. When you annotate parts of a training phrase, Dialogflow recognizes that these parts are just examples of actual values that will be provided by end-users at runtime. This is why for the initial input "Do you offer charters to the Galapagos Islands?" Dialogflow extracted the destination parameter from "Galapagos Islands".

Next we will look at what happens if we don't provide the agent with a valid input when asked to fill a form parameter.

Invalid input

  1. In the Simulator panel click the Reset icon to start a new conversation with the agent.
  2. Express the intent to make a group booking; this time don't tell the agent where you want to go, and when you're asked for a destination reply with a random value which is not Costa Rica, Galapagos or Mexico.

Enter an invalid destination

  3. On the Manage tab click Entity types under the Resources section. Notice two tabs: under the System tab you can find the system entities currently used by your agent. The Custom tab provides the list of the custom entities created for matching data specific to this agent.

Destination custom entity

  4. Click on the destination entity to find out what values the entity matches. "Europe" is not one of the entries and it isn't a synonym either.
  5. On the flow diagram expand the Collect Further Info page that contains the form parameters. Click the destination parameter.
  6. On the parameter panel scroll down to the Reprompt event handlers section, then click the No-match default event handler.

This parameter-level event handler is specifically intended to handle invalid end-user input during form filling. Because "Europe" is an unexpected input, a sys.no-match-default event was invoked, and the corresponding reprompt handler defined for this event was called. The Agent says section lists two alternative re-prompt messages.

Static alternative re-prompt messages when the end-user enters an invalid destination.

Great work! These test cases represent common scenarios that the agent is expected to handle appropriately. Very often users ask questions that bots are not able to answer, or they make requests that bots are unable to fulfill. Designing for the long tail, meaning everything off the well-worn paths most users will follow, is very complex. Think about all the things that can go wrong in a conversation and all the unexpected or unsupported paths users might take.

Advances in automatic speech recognition (ASR) mean that we almost always know exactly what users said. However, determining what users meant is still a challenge. Utterances often can't be understood in isolation; they can only be understood in context. In the next section of this codelab we will explore how Google's latest generative large language models (LLMs) can help get the dialogue back on track and move the conversation forward.

5. Enable generative fallback

What is the generative fallback feature?

The generative fallback feature is a Dialogflow CX feature that uses Google's large language models (LLMs) to generate virtual agent responses.

How does it help?

In between key use cases there are a number of somewhat common user requests, like repeating what the agent said in case the user didn't understand, holding the line when the user asks for it, and summarizing the conversation. In the first test we did, the agent failed to answer the question "What is a liveaboard?" because we hadn't created an intent for it or designed the flow to handle those types of generic questions related to scuba diving and liveaboards.

Even with robust intents, there is still room for error. Users may go off script by remaining silent (a No Input error) or saying something unexpected (a No Match error). While preventing errors from occurring is better than handling errors after they occur, errors cannot be totally avoided. Generic prompts like "Sorry I'm not sure how to help" or similar minimally viable solutions are often not good enough. Error prompts should be inspired by the Cooperative Principle according to which, efficient communication relies on the assumption that there's an undercurrent of cooperation between conversational participants.

In the next section we will look at how the generative fallback feature can be configured to increase intent coverage and simplify error handling for a better customer experience.

Enable generative fallback for the entire flow's no-match event

You can enable generative fallback on no-match event handlers used in flows, pages, or during parameter filling. When generative fallback is enabled for a no-match event, whenever that event triggers, Dialogflow will attempt to produce a generated response that will be said back to the user. If the response generation is unsuccessful, the regular prescribed agent response will be issued instead.
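The decision logic just described can be sketched in a few lines of Python (an illustrative sketch with hypothetical names, not Dialogflow's actual implementation):

```python
def handle_no_match(generate_response, prescribed_responses):
    """Sketch of generative fallback on a no-match event: try to generate
    a response with the LLM; if generation fails or produces nothing,
    fall back to the regular prescribed agent response."""
    try:
        generated = generate_response()
    except Exception:
        generated = None
    if generated:
        return generated
    return prescribed_responses[0]

# A successful generation wins; a failed one falls back to the
# prescribed response defined in the event handler.
reply_ok = handle_no_match(lambda: "A liveaboard is a boat you live on.",
                           ["Sorry, I'm not sure how to help."])
reply_fallback = handle_no_match(lambda: None,
                                 ["Sorry, I'm not sure how to help."])
```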


We will start by enabling generative fallback for the Liveaboards flow's no-match-default event.

  1. Expand the Start Page of the flow.
  2. Click sys.no-match-default under Event handlers.
  3. Check Enable generative fallback under Agent responses, then click Save.

Check Enable generative fallback under Agent responses

Save to enable generative fallback on the Liveaboards Start Page

Enable generative fallback on specific no-match events

We now want to enable generative fallback to handle invalid inputs when the agent asks for the number of passengers:

  1. Open the Collect Further Info page that contains the form parameters. Click the number-of-guests parameter.
  2. Navigate to the target No-match event handler (scroll down to the Reprompt event handlers section, then click the No-match default event handler)


  3. Check Enable generative fallback under Agent responses.

Enable generative fallback on parameter number-of-guest

  4. Finally, click Save.
  5. Now repeat the same steps to enable generative fallback for destination and email-address.

Great work! You have enabled generative fallback to handle unexpected intents and invalid parameter values. Next, we will look at how to configure the generative fallback feature with a text prompt that instructs the LLM how to respond.

6. Configure generative fallback

The generative fallback feature passes a request to a large language model to produce the generated response. The request takes the form of a text prompt that is a mix of natural language and information about the current state of the agent and of the conversation. The feature can be configured in multiple ways:

  1. Choose a specific (already defined) prompt to be used for response generation.
  2. Define a custom prompt.

Choose an already defined prompt

  1. On the Dialogflow CX console click Agent Settings

Go to Agent Settings

  2. Navigate to the ML tab, and then the Generative AI sub-tab.

Generative AI sub-tab

The feature comes out of the box with two template prompts: the Default template (which is not visible) and the Example template that guides you in writing your own prompts.

  3. Select the Example template and click the Edit button on the right side of the dropdown to inspect it.

Click the Edit button on the right side of the template dropdown to inspect it.

With the predefined prompt, the virtual agent can handle basic conversational situations. For example:

  • Greet and say goodbye to the user.
  • Repeat what the agent said in case the user didn't understand.
  • Hold the line when the user asks for it.
  • Summarize the conversation.

Let's try and define a specific text prompt for the Divebooker agent!

7. Define your own prompt

  1. Copy the prompt below and paste it in the Text prompt area:
You are a friendly agent that likes helping traveling divers.
You are under development and you can only help
$flow-description

At the moment you can't help customers with land-based diving and courses. You cannot recommend local dive shops and diving resorts.

Currently you can $route-descriptions

The conversation between the human and you so far was:
${conversation USER:"Human:" AGENT:"AI"}

Then the human asked:
$last-user-utterance

You say:
  2. Pick Save as a new template to store the new prompt as a new template (choose a new template name) and click Save at the bottom right corner of the panel.

Create a custom text prompt specific for the agent and save as a new template

  3. To make the newly created prompt the active prompt, you also need to Save the settings.

Save the new settings

When writing your own text prompt, be clear, concise and prescriptive. The way the prompt to the LLM is crafted can greatly affect the quality of the LLM's response. LLMs are trained to follow instructions, so the more your prompt looks like a precise instruction, the better the results you will likely get. Craft a prompt, and based on the results you get, iterate to improve it.

To craft effective prompts, follow these best practices:

  1. Provide a clear and concise description of the task you want the LLM to do. No more, no less. Keep it complete and short.
  2. Additionally, the prompt should be specific and well-defined, avoiding vague or ambiguous language.
  3. Break down complex tasks into smaller, more manageable pieces. By breaking the task down into smaller steps, you can help the model focus on one thing at a time and reduce the likelihood of errors or confusion.
  4. To improve response quality add examples in your prompt. The LLM learns in-context from the examples on how to respond.

When creating a prompt, in addition to a natural language description of what kind of content should be generated, the following placeholders can also be used:

  • $conversation The conversation between the agent and the user, excluding the very last user utterance. You can adapt the turn prefixes (e.g.: "Human", "AI" or "You", "Agent") in the text prompt
  • $last-user-utterance The last user utterance.
  • $flow-description The flow description of the active flow.
  • $route-descriptions The intent descriptions of the active intents.
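To build intuition for how these placeholders shape the request, here is a rough sketch of simple placeholder substitution (illustrative only; Dialogflow performs this internally, and the parameterized `${conversation ...}` form with configurable turn prefixes is more elaborate than shown here):

```python
def render_prompt(template, values):
    """Naive substitution of simple $placeholders in a text prompt.
    Does not handle the ${conversation ...} parameterized form."""
    for name, value in values.items():
        template = template.replace("$" + name, value)
    return template

template = ("You can only help $flow-description.\n"
            "Then the human asked:\n"
            "$last-user-utterance")
prompt = render_prompt(template, {
    "flow-description": "search, find and book liveaboards",
    "last-user-utterance": "What is a liveaboard?",
})
```

The fully rendered request shown in section 9 (llm_input) is the result of exactly this kind of substitution over the template you saved above.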

Now that we have an initial text prompt, the next task is to ensure flow and intents have good descriptions.

8. Add flow and intent descriptions

Add the flow description

  1. To add a description to the Liveaboards flow, access the flow settings by hovering your mouse over the flow in the Flows section.

Access the flow settings by hovering your mouse over the flow in the Flows section.

  2. Click the options (⋮) button.
  3. Select Flow settings and add the following description (or a similar one): search, find and book liveaboards.

Add a description to the Liveaboards flow

  4. Click Save.

Add the intent description

  1. Let's now add a good description to the head.send.group.request intent. Switch to the Manage tab, choose Intents under the Resources section and select head.send.group.request intent.
  2. Add the following description: assist users with group or full charter reservations. Initially collect travel details including departure period, destination, number of guests (min 4 max 15 people), contact details. The destination must be one of the following in the Pacific: Costa Rica, Mexico, Galapagos Islands

Note that the description contains important information such as the min and max number of passengers allowed on a boat. Keep this in mind!

  3. Click Save.

And you're done! You have enabled generative fallback on no-match event handlers for both flow and parameter fulfillment. You have also defined your own text prompt that the generative fallback feature passes to a large language model to produce generative responses.

In the next section, you'll retest your agent to see how it can answer the same challenging questions from before.

9. Retest your agent

Now that you've configured and enabled generative fallback on the virtual agent, you can ask similar challenging questions and see how it handles them.

Click Test Agent to open the Simulator again.

Test agent again

Ask the agent again about liveaboards and liveaboard diving. From now on, note how every dialogue has user-defined messages as well as generated responses, highlighted in the red boxes.

Retest the agent and ask again what is a liveaboard

Did you get a nice informative response instead of a generic reprompt? Great! After providing a clear and concise description of the tasks you want the agent to fulfill (in the text prompt and in the flow description), your bot is now much smarter when it comes to answering detailed questions without creating specific intents. Your customers will appreciate that the agent can give them a more informed response instead of a non-actionable one.

Don't be shy: challenge the agent and ask if it can help you find a scuba diving course, since you are not yet a certified diver.

Ask the agent if it can help you find a scuba diving course

That's right: at the moment we haven't designed the agent to assist with scuba courses. How does the agent know that? In the text prompt we have clearly outlined what the agent can and can't assist with: "At the moment you can't help customers with land-based diving and courses. You cannot recommend local dive shops and diving resorts."

Now retest the happy scenario and enrich the conversation. Let's see how the experience has changed.

Retest the happy scenario and be creative in the dialogue


When Dialogflow matches an intent or attempts to collect a parameter as per the flow design, it will display the fulfillments defined at design time. When the user goes off script requesting a summary of the travel details or offering to provide their phone number, the generative fallback feature comes into play.

Nice! You have retested the happy scenario, and hopefully you've had a pleasant, natural conversation with the agent, as close as possible to the experience you would have with a live agent.

Unfortunately, things can go wrong in a conversation. Let's do a different test, this time when you're asked for the number of guests say a number greater than 15.

Provide a number of guests greater than 15


There are a couple of things to point out here:

  1. Why is 20 not a valid number? Because we have set a limit on the number of guests allowed as part of the intent description: "The agent collects info such as departure period, destination, number of guests (min 4 max 15 people), contact details". The generative response the LLM has returned, "Sorry, we can only assist with group bookings of up to 15 guests", is perfectly congruent with the restrictions we have given on the number of guests. To further enforce this, number-of-guests is a custom RegExp entity that matches only numbers in the range 4-15.
  2. The conversation goes on because in the end the user is still keen to get an offer for 15 divers. This happens often in natural conversations; we change our minds quite frequently! Notice how the agent is cooperative and gently steers the user back towards the successful path.
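As an aside, a regular expression matching only whole numbers from 4 to 15 (a sketch of what such a custom RegExp entity might use; the exact pattern in the provided agent may differ) could look like this:

```python
import re

# [4-9] covers 4 through 9; 1[0-5] covers 10 through 15.
# The anchors reject anything outside the range, e.g. 3, 16 or 20.
GUESTS_PATTERN = re.compile(r"^(?:[4-9]|1[0-5])$")

def is_valid_guest_count(text):
    """Return True only for whole numbers in the 4-15 range."""
    return bool(GUESTS_PATTERN.match(text.strip()))
```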

Conversation design involves scripting one half of a dialog, hoping it's robust enough that anyone can step in and act out the other half. When designing for the long tail, developers need to focus on what the user could say at every step in the dialog to define routes, handlers and parameters. This is why we have added the generative fallback feature to Dialogflow CX: to let developers focus more on conversation design principles and less on implementation details, providing robust conversational experiences to users.

Let's do one more test, this time challenge the bot again with a place which is not in the list of available destinations like the Maldives. Then we will take a quick look at what happens behind the scenes.

Challenge the bot again with a place which is not in the list of available destinations like the Maldives

Note that since we have also enabled generative fallback on the no-match event for the destination parameter, a request is sent to a large language model to produce the generated response. The regular prescribed responses (under Agent says) are ignored.

The text boxes below will help you better understand how the placeholders help shape the request sent to the large language model.

This is the custom text prompt we have configured in Dialogflow; note the placeholders ($flow-description, $route-descriptions, ${conversation ...} and $last-user-utterance):

You are a friendly agent that likes helping traveling divers.
You are under development and you can only help
$flow-description

At the moment you can't help customers with land-based diving and courses. You cannot recommend local dive shops and diving resorts.

Currently you can $route-descriptions

The conversation between the human and you so far was:
${conversation USER:"Human:" AGENT:"AI"}

Then the human asked:
$last-user-utterance

You say:

The text box below shows the input received by the large language model and the output, which contains the generated response that will be said back to the user:

llm_input:
You are a friendly agent that likes helping traveling divers.
You are under development and you can only help search, find and book liveaboards.

At the moment you can't help customers with land-based diving and courses. You cannot recommend local dive shops and diving resorts.

Currently you can assist users who are looking for a group reservation or a full charter. Initially collect travel details including departure period, destination, number of guests (min 4 max 15 people), contact details. The destination must be one of the following in the Pacific: Costa Rica, Mexico, Galapagos Islands.

The conversation between the human and you so far was:
Human: Hi, my name's Alessia
AI Hi Alessia, what can I help you with today?
Human: Can you help me find a nice boat for myself and my family?
AI To assist you with that I need to collect the details of your travel and then we'll get back to you with an offer shortly.
Where would you like to go? We can organize a charter in Costa Rica, Galapagos Islands and several locations around Mexico

Then the human asked:
The kids want to go to the Maldives

llm_output:
You say:
I'm sorry Alessia, we can only help you with liveaboards in Costa Rica, Galapagos Islands and several locations around Mexico.

As in the previous test, the response sent back to the user is generated by the model and relies on the information we provided as part of the intent description: "The destination must be one of the following in the Pacific: Costa Rica, Mexico, Galapagos Islands".

Modify the list of banned phrases

The generative fallback feature can be configured in multiple ways:

  1. Choose a specific (already defined) prompt to be used for response generation.
  2. Define a custom prompt.
  3. Change the list of banned phrases.

So far we have looked at the first two ways. Let's explore the third one.

  1. In Agent Settings, navigate to the ML tab, and then the Generative AI sub-tab.
  2. In the Banned phrases section add the following phrases to the list:
  • Dangerous country
  • Hateful place
  • Medical assistance
  3. Click Save.
  4. Click the Reset icon and retest the last scenario. Instead of providing a beautiful diving destination around the world, enter one of the banned phrases.

Test one of the banned phrases

The prompt and the generated response are checked against the list of banned phrases, that is, phrases that are not allowed in generative AI input or output. If the input includes banned phrases, or phrases deemed unsafe, generation will be unsuccessful, and the regular prescribed response (under Agent says in the same fulfillment) will be issued instead.
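Conceptually, this check behaves like a case-insensitive substring filter applied to both the prompt and the generated response (a simplified sketch only; Dialogflow's actual safety checks are more sophisticated):

```python
def violates_banned_phrases(text, banned_phrases):
    """Return True if any banned phrase appears in the text,
    using a case-insensitive substring match."""
    lowered = text.lower()
    return any(phrase.lower() in lowered for phrase in banned_phrases)

# The phrases added in the Banned phrases section above.
BANNED = ["Dangerous country", "Hateful place", "Medical assistance"]

# If the user input (or the model output) trips the filter, generation
# is abandoned and the prescribed static response is used instead.
tripped = violates_banned_phrases("Is it a dangerous country?", BANNED)
clean = violates_banned_phrases("Galapagos Islands", BANNED)
```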

Excellent stuff! We have covered an array of conversational situations where generative responses can really make a difference. Feel free to keep testing!

10. Congratulations

Well done on completing the codelab! Time to chill!


You've successfully created a virtual agent and you've enabled generative fallback on no-match event handlers used in flows, and during parameter filling.

The generative fallback feature, combined with good flow and intent descriptions, can provide agent-specific and cooperative responses as opposed to generic prompts like "Sorry I'm not sure how to help" or "Sorry, you've entered an invalid option". Error prompts generated by large language models can gently steer users back towards successful paths or reset their expectations about what is and isn't possible.

Feel free to test other conversational situations and explore the other functionality available related to Dialogflow CX and generative AI.

Clean Up

You can perform the following cleanup to avoid incurring charges to your Google Cloud account for the resources used in this codelab:

  • Navigate to the Dialogflow CX console and delete all of the agents that you created.
  • In the Google Cloud console, navigate to the APIs and Services page and disable the Dialogflow API.

Further reading

Continue learning about conversational AI and generative AI with these guides and resources:

License

This work is licensed under a Creative Commons Attribution 2.0 Generic License.