At Google, we believe the future is artificial intelligence first. Artificial intelligence is making computers "smart" so they can think on their own.
We have been investing heavily in the areas of:
These things come together in the Google Assistant. It allows you to have a conversation with Google.
Let's see how it all works together by building a new way to hear jokes about animals. Cool?
Google Home is a voice activated speaker that users keep in their home.
The Google Assistant lets users have a conversation with Google. They can get things done by talking with the Assistant, and there are many things users can do just by talking to the Assistant directly. To learn more about the Assistant, check out this short video: https://www.youtube.com/watch?v=FPfQMVf4vwQ
Actions on Google allows developers to extend the Assistant. You can build an app for the Assistant too, but instead of opening it on your phone, you open it by talking to the Assistant!
That is what we are going to focus on today in our animal joke example.
This codelab will walk you through creating your own Action on Google with API.AI, a conversational user experience platform. In other words, it will help us 'talk' to machines in a way that lets them understand us better.
This codelab will include the design considerations, as well as implementation details, to ensure that your action meets the key principles.
In this codelab, you're going to build an "Animal Joke" action.
So how does a conversation action work?
The user needs to invoke your action by saying a phrase like "Ok Google, talk to Animal Joke". This tells Google the name of the action to talk to.
From this point onwards, the user is talking to your conversation action. Your action generates dialog output, which is spoken to the user. The user then makes requests, your action processes them, and replies back again. This two-way dialog continues until the conversation is finished.
If you like diagrams, see below to 'see' what we explained above.
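The loop just described can be sketched in a few lines of Python. This is only a toy illustration of the flow; the class and function names here are made up and are not real Assistant or API.AI APIs:

```python
# Toy sketch of the Assistant conversation loop described above.
# None of these names are real Assistant APIs; they only illustrate the flow.

def run_conversation(action, user_inputs):
    """Simulate the two-way dialog between a user and a conversation action."""
    transcript = [action.welcome()]             # the action speaks first
    for utterance in user_inputs:               # the user makes a request
        reply, done = action.handle(utterance)  # the action processes it
        transcript.append(reply)                # and replies back
        if done:                                # until the conversation is finished
            break
    return transcript

class AnimalJokeAction:
    def welcome(self):
        return "Hi! Which animal do you want a joke about?"

    def handle(self, utterance):
        if "bye" in utterance.lower():
            return "Goodbye!", True
        return f"Here is a joke about {utterance}...", False

transcript = run_conversation(AnimalJokeAction(), ["dogs", "bye"])
```

Running this produces a three-turn transcript: the welcome prompt, a joke reply, and the goodbye that ends the conversation.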
API.AI lets the machine understand what the user is trying to say, and can provide the response. You type in example sentences of things that a user might speak.
You can specify what values you need to get from the user. API.AI then uses machine learning to understand the sentences and manage the conversation.
Click the following link to log in to API.AI: https://console.API.AI/
After logging in, you can create your first agent.
You will need to:
Entities are the values we are trying to capture from the user's phrases. Think of it like filling out a form: API.AI extracts these values from what the user says and issues follow-up prompts until all required values are filled in.
This is how an entity looks in API.AI
We will create an Animal entity.
The first step is to click the 'Entities' menu item on the left and then the 'Create Entity' button.
Next, start typing animal names. Don't forget to give the entity a name (e.g. Animal) and click the 'Save' button after entering a few animal names.
The final results should look similar to the image below.
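Under the hood, an entity behaves roughly like a lookup table of canonical values and their synonyms. Here is a deliberately simple Python sketch of that idea; the table below is hypothetical and API.AI's real matching is machine-learned, not a word scan:

```python
# Hypothetical sketch of how an entity maps synonyms in a phrase back
# to a canonical value. API.AI does this with machine learning; this
# toy version just scans a synonym table.

ANIMAL_ENTITY = {
    "dog": ["dog", "dogs", "puppy"],
    "cat": ["cat", "cats", "kitten"],
    "dino": ["dino", "dinosaur", "t-rex"],
}

def extract_animal(phrase):
    """Return the canonical entity value found in the phrase, or None."""
    words = phrase.lower().split()
    for value, synonyms in ANIMAL_ENTITY.items():
        if any(syn in words for syn in synonyms):
            return value
    return None

print(extract_animal("please tell me a joke about puppy"))  # matches "dog"
```

Note how "puppy" resolves to the canonical value "dog"; that canonical value is what your intent later receives as the $Animal parameter.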
An Intent is triggered by a series of "user says" phrases. This could be something like "please tell me an animal joke" or "Give me a recipe for a burger".
You need to specify enough sentences to train API.AI's machine learning algorithm. Then even if the user doesn't say exactly the words you typed here, API.AI can still understand them!
You should create separate intents for different types of actions, though; don't try to combine them all into one.
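To build intuition for what the training phrases do, here is a deliberately naive Python sketch that scores an utterance against each intent's example sentences by word overlap. API.AI's real matching is machine-learned and far more robust; the intents and examples below are just for illustration:

```python
# Naive intent matcher: picks the intent whose example phrases share
# the most words with the user's utterance. Purely illustrative;
# API.AI uses machine learning, not bag-of-words overlap.

INTENTS = {
    "tell_joke": ["please tell me a joke", "tell me an animal joke"],
    "quit": ["bye bye", "goodbye", "bye animal joker"],
}

def match_intent(utterance):
    """Return the name of the best-matching intent, or None."""
    words = set(utterance.lower().split())
    best_intent, best_score = None, 0
    for intent, examples in INTENTS.items():
        for example in examples:
            score = len(words & set(example.split()))
            if score > best_score:
                best_intent, best_score = intent, score
    return best_intent
```

Even a phrasing you never typed, like "tell me a joke about dogs", still lands on tell_joke because it shares enough words with the examples, which is the intuition behind providing several training sentences.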
In our example, we will create only two intents:
Now we have our new $Animal entity. If you notice the $ before the word, it's not a mistake: this is how we will refer to our entity from now on. Think of it as a special sign showing that we are referring to our entity and not just another animal.
It's time to create the intent that will tell us the jokes.
First, click on ‘Intents' in the left menu and then on the ‘Create Intent' button.
Second, start typing a few sentences that you will want to use to get a joke, for example, "please tell me a joke about dogs". Type a few sentences so API.AI can start training its algorithms. You can see that while you type, API.AI automatically recognizes that the phrase includes one of the entities and highlights it. Next, give your intent a name (e.g. tell_joke) and hit the 'Save' button.
See below for how it should look.
We will skip the 'Events' section for now. In the 'Action' section we need to make sure that our @Animal entity is marked Required. In the Prompts we should type "Please tell me which animal you like" (see the screenshot below), so in cases where the user didn't name an animal, it will be clear to them that we need this entity.
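The 'Required' flag and its prompt implement what voice designers call slot filling: if a required value is missing, the action asks for it instead of answering. A rough Python sketch of that behavior, with hypothetical names rather than API.AI code:

```python
# Toy slot-filling sketch: if a required entity is missing from the
# request parameters, reply with the configured prompt instead of
# fulfilling the request.

REQUIRED_PROMPTS = {"animal": "Please tell me which animal you like"}

def fulfill(params):
    """Return either a follow-up prompt or the final response."""
    for slot, prompt in REQUIRED_PROMPTS.items():
        if not params.get(slot):
            return prompt  # ask the user for the missing value
    return f"Here is a joke about {params['animal']}s!"
```

Calling fulfill({}) returns the prompt, while fulfill({"animal": "dog"}) returns the joke, which mirrors how API.AI keeps prompting until the required entity is captured.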
Finally, we will fill the 'Text Response' section with our most amazing jokes. You can take a few ideas from the image below or type one or two good jokes that you know.
Please note that we are using the $Animal value in our response in order to create a joke based on the animal that the user asked about. The $ sign marks that we are working with a variable; the @ sign identifies an entity.
After you fill in all your amazing jokes, don't forget to click the 'Save' button in the top-right corner of the screen.
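The $Animal reference in a Text Response works like simple template substitution: the extracted value replaces the variable in the reply. Python's standard string.Template happens to use the same $name syntax, so it makes a convenient stand-in for the idea (the joke text here is just an example):

```python
# $Animal in a Text Response behaves like a template variable.
# Python's string.Template uses the same $name syntax, so it is
# a handy way to see how the substitution works.

from string import Template

response = Template("Why did the $Animal cross the road? To get a $Animal joke!")
print(response.substitute(Animal="dog"))
```

Every occurrence of $Animal is replaced with the value the user provided, so one response template can serve jokes about any animal in your entity.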
A good design principle is to allow our user to end the conversation.
Click on the ‘Create Intent' button again. Then, start typing few sentences that will end the conversation. For example, "bye bye" or "bye animal joker". Make sure to give the intent a name at the top of the screen (e.g Quit). Below is an example of what this intent should look like.
Last, but not least, you need to check the 'end conversation' checkbox so that API.AI knows to really end the conversation at this point.
It's very important to check our work while we are developing it. Luckily for us, it's very easy to do with API.AI. All you need to do after creating the new intent is look at the right side of the screen: type what you wish to test and you will get the response.
In the example below, we type "please tell me a joke about dino" and as you can see it's working nicely, since we got a joke in the response.
A quick way to test how our new action works on Google Home is to use the web simulator.
We need to click on ‘Integrations' in the left side menu and then click on "Actions on Google".
Click on the ‘TEST' button and you will get this message:
Test now active
View on the Actions on Google Web Simulator or any Actions on Google enabled devices you are signed in to
After that, click on 'View' and the Web Simulator will open in a new tab.
On the left side, you can type (or talk) your commands and on the right side, you can see the responses. Please remember to start with 'talk to animal joker' so the simulator knows to open our action.
We can also test our action with the automated page that API.AI created for our new bot. Click on the 'Integrations' page and you will see the image below:
First, you need to click on 'Web Demo' so this new action will be available to the world. Then, you can customize the URL so it will be easier to remember. In our case, we typed "animal-joker". You will need to add something at the end of this name, because the URL must be unique.
Now, you can try it.
You will see this screen:
It's very powerful, as you can now share your creation with the world and it will work on any device that is connected to the internet and has a browser.
Please remember that in order to make a really good Action, you need to think carefully about the design.
Designing a spoken dialog between a human and a computer in advance, accounting for all the possibilities in both function and user behavior, while still having it feel natural, is harder than it looks.
The key to building a good voice interface is to not fall into the trap of simply converting a graphical user interface into a Voice User Interface. This defeats the purpose of using a conversation. People are not going to change how they talk anytime soon, so take what we know about human-to-human conversation and use it to teach our computers to talk to humans, not the other way around.
The persona should be based on your user population and their needs, as well as the imagery and qualities associated with your agent's brand.
2. Think outside the box, literally.
You should write your core experiences like you would a screenplay. This can be as scrappy as acting it out with a colleague and documenting it on paper, or creating an interactive wizard-of-oz prototype you tweak and play with until you're ready to start coding.
And then, when you draw out your initial vision, keep it at a high level, where the boxes represent entire dialogs or user intents, but leave out the individual wording you'll use in the interaction.
3. In conversations, there are no "errors"
When a problem happens, imagine what the user hears when your action says "I don't understand YOU".
This is one of the greatest causes of user frustration and aversion to voice interfaces. People get angry, raise their voice, and repeat the same answer again!
We need to give people credit for knowing how to speak. Just because they don't understand a prompt or its choices doesn't mean they don't speak the language. So help them be successful.
And remember what happens along the way: maintain context.
The best way to course-correct in advance while still maintaining a natural, comfortable conversation, is to plan for these occurrences as if they were any other turn in a conversation - i.e. to treat them as input that didn't cause the "error" in and of themselves.
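One common pattern for treating these moments as just another conversational turn is an escalating reprompt: each retry adds more help while keeping the original question, the context, in play. A minimal Python sketch of the idea, with made-up prompt wording:

```python
# Sketch of escalating reprompts: rather than repeating "I don't
# understand you", each retry offers more guidance while staying
# inside the same conversational context.

REPROMPTS = [
    "Sorry, which animal was that?",
    "I know jokes about dogs, cats, and dinos. Which would you like?",
    "Let's try again later. Goodbye!",
]

def reprompt(attempt):
    """Return the reprompt for the given failed attempt (0-based)."""
    return REPROMPTS[min(attempt, len(REPROMPTS) - 1)]
```

The first retry is a light nudge, the second offers concrete choices, and the final one ends the conversation gracefully instead of looping forever.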
We have a great set of design materials on our developer site to help you think about how to design better conversations.
Click the following link to download all the code for this codelab:
Unpack the downloaded zip file. This will create a root folder (animal-joker), which contains a zip file (AnimalJokesForKids.zip) with all the definitions for this codelab, along with all of the resources you will need. This is the zip file that you will need to import.
After you download the source code, you can import this action into API.AI in 3 easy steps. See the image below.
Please note that at any time during the development process, you can restore this action from the same zip and get all the data we showed in this codelab.