The Cloud Natural Language API lets you extract entities and perform sentiment and syntactic analysis on a block of text.

In this lab, we'll learn how to use each of the three Natural Language API methods: analyzeEntities, analyzeSentiment, and annotateText.

What you'll learn

- Creating a Natural Language API request and calling the API with curl
- Extracting entities and running sentiment analysis on text with the Natural Language API
- Performing linguistic analysis on text with the Natural Language API
- Creating a Natural Language API request in a different language

What you'll need

- A Google Cloud Platform project
- A browser, such as Chrome or Firefox

Self-paced environment setup

If you don't already have a Google Account (Gmail or Google Apps), you must create one. Sign in to the Google Cloud Platform Console (console.cloud.google.com) and create a new project:

Remember the project ID: a name that must be unique across all Google Cloud projects (the one shown above is already taken and won't work for you, sorry!).

New users of Google Cloud Platform are eligible for a $300 free trial.

Click on the menu icon in the top left of the screen.

Select API Manager from the drop-down.

Click on Enable API.

Then, search for "language" in the search box. Click on Google Cloud Natural Language API:

Click Enable to enable the Cloud Natural Language API:

Wait a few seconds for it to enable. You'll see a confirmation once it's enabled.
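
If you prefer the command line, recent versions of the gcloud CLI can also enable the API for your project (you can run this from Cloud Shell, introduced in the next step):

gcloud services enable language.googleapis.com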

Google Cloud Shell is a command line environment running in the Cloud. This Debian-based virtual machine is loaded with all the development tools you'll need (gcloud, bq, git and others) and offers a persistent 5GB home directory. We'll use Cloud Shell to create our request to the Natural Language API.

To get started with Cloud Shell, click the "Activate Google Cloud Shell" icon in the top right-hand corner of the header bar.

A Cloud Shell session opens inside a new frame at the bottom of the console and displays a command-line prompt. Wait until the user@project:~$ prompt appears.

Since we'll be using curl to send a request to the Natural Language API, we'll need to generate an API key to pass in our request URL. To create an API key, navigate to the API Manager section of your project dashboard:

Then, navigate to the Credentials tab and click Create credentials:

In the drop-down menu, select API key:

Next, copy the key you just generated. You will need this key later in the lab.

Now that you have an API key, save it to an environment variable to avoid having to insert the value of your API key in each request. You can do this in Cloud Shell. Be sure to replace <YOUR_API_KEY> with the key you just copied.

export API_KEY=<YOUR_API_KEY>
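
You can confirm that the variable is set correctly:

echo $API_KEY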

The first Natural Language API method we'll use is analyzeEntities. With this method, the API can extract entities (like people, places, and events) from text. To try out the API's entity analysis, we'll use the following sentence from a recent news article:

LONDON — J. K. Rowling always said that the seventh Harry Potter book, "Harry Potter and the Deathly Hallows," would be the last in the series, and so far she has kept to her word.

You can build your request to the Natural Language API in a request.json file. First create this file in Cloud Shell:

touch request.json

Open it using your preferred command line editor (nano, vim, emacs). Add the following to your request.json file:

request.json

{
  "document":{
    "type":"PLAIN_TEXT",
    "content":"LONDON — J. K. Rowling always said that the seventh Harry Potter book, ‘Harry Potter and the Deathly Hallows,' would be the last in the series, and so far she has kept to her word."
  }
}
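
If you'd rather not open an editor, a shell heredoc creates the file with the same contents in one step:

cat > request.json <<'EOF'
{
  "document":{
    "type":"PLAIN_TEXT",
    "content":"LONDON — J. K. Rowling always said that the seventh Harry Potter book, ‘Harry Potter and the Deathly Hallows,’ would be the last in the series, and so far she has kept to her word."
  }
}
EOF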

In the request, we tell the Natural Language API about the text we'll be sending. Supported type values are PLAIN_TEXT or HTML. In content, we pass the text to send to the Natural Language API for analysis. The Natural Language API also supports sending files stored in Cloud Storage for text processing. If we wanted to send a file from Cloud Storage, we would replace content with gcsContentUri and give it the URI of our text file in Cloud Storage.
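
For example, if we had uploaded the same sentence to a Cloud Storage bucket (the bucket and file names below are hypothetical placeholders), the request body would look like this:

{
  "document":{
    "type":"PLAIN_TEXT",
    "gcsContentUri":"gs://my-nl-bucket/sentence.txt"
  }
}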

You can now pass your request body, along with the API key environment variable you saved earlier, to the Natural Language API with the following curl command (all in one single command line):

curl "https://language.googleapis.com/v1beta1/documents:analyzeEntities?key=${API_KEY}" \
  -s -X POST -H "Content-Type: application/json" --data-binary @request.json

The beginning of your response should look like the following:

{
  "entities": [
    {
      "name": "Rowling",
      "type": "PERSON",
      "metadata": {
        "wikipedia_url": "http://en.wikipedia.org/wiki/J._K._Rowling"
      },
      "salience": 0.56932539,
      "mentions": [
        {
          "text": {
            "content": "J. K.",
            "beginOffset": -1
          }
        },
        {
          "text": {
            "content": "Rowling",
            "beginOffset": -1
          }
        }
      ]
    },
    ...
  ]
}

In the response, you can see that the API detected four entities from the sentence. For each entity, we get the entity type, the associated Wikipedia URL if there is one, the salience, and the indexes of where this entity appeared in the text. Salience is a number in the [0,1] range that refers to the centrality of the entity to the text as a whole. In the sentence above, "Rowling" returned the highest salience value since she is the subject of the sentence. The Natural Language API can also recognize the same entity mentioned in different ways. For example, "Rowling," "J.K. Rowling," or even "Joanne Kathleen Rowling" all point to the same Wikipedia entry.
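
Since the response is JSON, you can also slice it on the command line. As a quick sketch, this pipes the same request through jq (which ships with Cloud Shell) to list just each entity's name and salience:

curl "https://language.googleapis.com/v1beta1/documents:analyzeEntities?key=${API_KEY}" \
  -s -X POST -H "Content-Type: application/json" --data-binary @request.json \
  | jq '.entities[] | {name, salience}'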

In addition to extracting entities, the Natural Language API also lets you perform sentiment analysis on a block of text. Our JSON request will include the same parameters as our request above, but this time we'll change the text to include something with a stronger sentiment. Replace your request.json file with the following, and feel free to replace the content below with your own text:

request.json

{
  "document":{
    "type":"PLAIN_TEXT",
    "content":"I love everything about Harry Potter. It's the greatest book ever written."
  }
}

Next we'll send the request to the API's analyzeSentiment endpoint:

curl "https://language.googleapis.com/v1beta1/documents:analyzeSentiment?key=${API_KEY}" \
  -s -X POST -H "Content-Type: application/json" --data-binary @request.json

Your response should look like this:

{
  "documentSentiment": {
    "polarity": 1,
    "magnitude": 1.8
  },
  "language": "en"
}

The sentiment method returns two values, polarity and magnitude. Polarity is a number from -1.0 to 1.0 indicating how positive or negative the statement is. Magnitude is a number ranging from 0 to infinity that represents the weight of sentiment expressed in the statement, regardless of polarity. Longer blocks of text with heavily weighted statements have higher magnitude values. The polarity of our statement above is 100% positive, and the words "love", "greatest", and "ever" contribute to the magnitude value.
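
To see how polarity and magnitude differ, try swapping in text that mixes strong positive and strong negative statements (the sentence below is just an illustration):

{
  "document":{
    "type":"PLAIN_TEXT",
    "content":"I love the Harry Potter books. The film adaptations, however, were a huge disappointment."
  }
}

Re-running the analyzeSentiment call above, you should see a polarity much closer to 0 while the magnitude stays relatively high, since magnitude accumulates sentiment regardless of sign.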

The Natural Language API's third method, text annotation, dives deeper into the linguistic details of our text. annotateText is an advanced method that provides a full set of details on the semantic and syntactic elements of the text. For each word in the text, the API will tell us the word's part of speech (noun, verb, adjective, etc.) and how it relates to other words in the sentence (Is it the root verb? A modifier?).

Let's try it out with a simple sentence. Our JSON request will be similar to the ones above, with the addition of a features key. This will tell the API that we'd like to perform syntax annotation. Replace your request.json with the following:

request.json

{
  "document":{
    "type":"PLAIN_TEXT",
    "content":"Joanne Rowling is a British novelist, screenwriter and film producer."
  },
  "features":{
    "extractSyntax":true
  }
}
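
Per the v1beta1 API reference, the features object also accepts extractEntities and extractDocumentSentiment flags, so annotateText can run all three analyses in a single call. For example:

  "features":{
    "extractSyntax":true,
    "extractEntities":true,
    "extractDocumentSentiment":true
  }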

Then call the API's annotateText method:

curl "https://language.googleapis.com/v1beta1/documents:annotateText?key=${API_KEY}" \
  -s -X POST -H "Content-Type: application/json" --data-binary @request.json

The response should return an object like the one below for each token in the sentence:

{
  "text": {
    "content": "Joanne",
    "beginOffset": -1
  },
  "partOfSpeech": {
    "tag": "NOUN"
  },
  "dependencyEdge": {
    "headTokenIndex": 1,
    "label": "NN"
  },
  "lemma": "Joanne"
}

Let's break down the response. partOfSpeech tells us that "Joanne" is a noun. dependencyEdge includes data that you can use to create a dependency parse tree of the text. Essentially, this is a diagram showing how words in a sentence relate to each other. A dependency parse tree for the sentence above would look like this:

The headTokenIndex in our response above is the index of the token that has an arc pointing at "Joanne". We can think of each token in the sentence as a word in an array, and the headTokenIndex of 1 for "Joanne" refers to the word "Rowling," which it is connected to in the tree. The label NN (short for noun compound modifier) describes the word's role in the sentence. "Joanne" modifies "Rowling," the subject of the sentence. lemma is the canonical form of the word. For example, the words run, runs, ran, and running all have a lemma of run. The lemma value is useful for tracking occurrences of a word in a large piece of text over time.
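
The lemmas are easy to pull out of the response. As a sketch using jq again, this lists each token's lemma alongside its part-of-speech tag:

curl "https://language.googleapis.com/v1beta1/documents:annotateText?key=${API_KEY}" \
  -s -X POST -H "Content-Type: application/json" --data-binary @request.json \
  | jq '.tokens[] | {lemma, tag: .partOfSpeech.tag}'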

The Natural Language API also supports entity analysis and syntax annotation in Spanish and Japanese. Let's try the following entity request with a sentence in Japanese:

request.json

{
  "document":{
    "type":"PLAIN_TEXT",
    "content":"日本のグーグルのオフィスは、東京の六本木ヒルズにあります"
  }
}

Notice that we didn't tell the API which language our text is in; it detects the language automatically. Next, we'll send the request to the analyzeEntities endpoint:

curl "https://language.googleapis.com/v1beta1/documents:analyzeEntities?key=${API_KEY}" \
  -s -X POST -H "Content-Type: application/json" --data-binary @request.json

And we get the following response:

{
  "entities": [
    {
      "name": "日本",
      "type": "LOCATION",
      "metadata": {
        "wikipedia_url": "http://ja.wikipedia.org/wiki/%E6%97%A5%E6%9C%AC"
      },
      "salience": 0,
      "mentions": [
        {
          "text": {
            "content": "日本",
            "beginOffset": -1
          }
        }
      ]
    },
    {
      "name": "東京",
      "type": "LOCATION",
      "metadata": {
        "wikipedia_url": "http://ja.wikipedia.org/wiki/%E6%9D%B1%E4%BA%AC"
      },
      "salience": 0,
      "mentions": [
        {
          "text": {
            "content": "東京",
            "beginOffset": -1
          }
        }
      ]
    },
    {
      "name": "六本木ヒルズ",
      "type": "LOCATION",
      "metadata": {
        "wikipedia_url": "http://ja.wikipedia.org/wiki/%E5%85%AD%E6%9C%AC%E6%9C%A8%E3%83%92%E3%83%AB%E3%82%BA"
      },
      "salience": 0,
      "mentions": [
        {
          "text": {
            "content": "六本木ヒルズ",
            "beginOffset": -1
          }
        }
      ]
    }
  ],
  "language": "ja"
}

The Wikipedia URLs even point to the Japanese Wikipedia pages. Cool!
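
Language detection works because the v1beta1 Document object treats language as optional. If you'd rather be explicit, you can set it yourself alongside type and content:

{
  "document":{
    "type":"PLAIN_TEXT",
    "language":"ja",
    "content":"日本のグーグルのオフィスは、東京の六本木ヒルズにあります"
  }
}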

You've learned how to perform text analysis with the Cloud Natural Language API by extracting entities, analyzing sentiment, and doing syntax annotation.

What we've covered

- Creating a Natural Language API request and calling the API with curl
- Extracting entities and running sentiment analysis on text with the Natural Language API
- Performing linguistic analysis on text with annotateText
- Analyzing text in a different language

Next Steps