In this codelab, you'll build a simple web app that talks to the Works with Nest API using Python and the Flask web framework. You'll use a Nest Cam to take snapshots, and then the images will be processed and classified by TensorFlow.

What you'll learn

What you'll need

You can either download all the sample code to your computer...

Download Zip

... or clone the GitHub repository from the command line.

$ git clone

Open your terminal and cd into the nest-tensorflow directory.

$ cd ~/nest-tensorflow

This project uses Python's pip to manage dependencies. Go to the nest-tensorflow directory and run the following commands.

(Optional) Spin up a virtual environment:

$ pip install virtualenv
$ virtualenv env
$ . env/bin/activate

Install the dependencies. Omit the --user flag if you are using a virtual environment.

Install the dependencies (virtualenv):

(env) $ pip install -r requirements.txt

Install the dependencies (no virtualenv):

$ pip install --user -r requirements.txt

If you don't already have a Nest Cam set up and associated with your account, use one of the following procedures.

Set up a Nest Cam with a Mac or Windows computer


Set up a Nest Cam with the Nest App

In order to authorize your Nest Cam's integration with TensorFlow, you need a Product ID and Product Secret.

Navigate to and sign in with your Nest account credentials.

Accept the Terms of Service and click Create New Product.

This brings you to a page where you can fill in the product details and permissions.

Click Create Product.

After your product is created, click the product to view its Overview.

At this point, we need to copy two values into our Python code. Open wwn/ and find the following lines of code.

product_id =            ''
product_secret =        ''

Copy the Product ID and paste it into the product_id variable.

Then copy the Product Secret and paste it into the product_secret variable.
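If you'd rather not hardcode credentials, one alternative is reading them from environment variables. This is a minimal sketch, not part of the codelab; the WWN_PRODUCT_ID and WWN_PRODUCT_SECRET variable names are invented here:

```python
import os

def load_credentials():
    """Read Works with Nest credentials from the environment.

    WWN_PRODUCT_ID / WWN_PRODUCT_SECRET are hypothetical variable
    names used only in this sketch; the codelab hardcodes the values.
    """
    return (os.environ.get('WWN_PRODUCT_ID', ''),
            os.environ.get('WWN_PRODUCT_SECRET', ''))
```

Keeping secrets out of source files makes the code safer to share or commit.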

To implement the POST request, open wwn/ and copy in the function that performs the exchange:

def get_access_token(authorization_code):
    data = urllib.urlencode({
        'client_id':     product_id,
        'client_secret': product_secret,
        'code':          authorization_code,
        'grant_type':    'authorization_code'
    })
    req = urllib2.Request(nest_access_token_url, data)
    response = urllib2.urlopen(req)
    data = json.loads(response.read())
    return data['access_token']

This will perform a POST request and parse the JSON response. From the JSON response, we get the access_token value and return it.

To connect to the API, you need to handle the endpoint in your app. Open the file at the root of the project.

The script is the main entry point into our application. This file defines each route that the webserver listens and responds to. Some routes, such as /, /login, and /logout, are already defined. We will add a new route at /callback like this:

@app.route('/callback')
def callback():
    authorization_code = request.args.get("code")
    global token
    token = wwn.get_access_token(authorization_code)
    return redirect(url_for('index'))

This /callback endpoint parses the code param and stores the value in the authorization_code variable. It then calls the wwn.get_access_token() method to exchange the authorization_code for an access token. Once we have a token, we will store it and redirect back to the homepage.

We need to implement the functionality of this endpoint so that it facilitates the exchange. The server needs to make an x-www-form-urlencoded HTTP POST request to the access token endpoint (the nest_access_token_url used in get_access_token()). The body contains the Product ID, Product Secret, and Authorization Code. The response to the POST request returns the Nest API access token that allows us to retrieve a user's account and device information.
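The shape of that request body can be sketched with the standard library. This example uses Python 3's urllib.parse (the codelab itself uses Python 2's urllib.urlencode), and build_token_request_body is a helper name invented for the sketch:

```python
from urllib.parse import urlencode

def build_token_request_body(product_id, product_secret, authorization_code):
    """Form-encode the x-www-form-urlencoded token-exchange body
    described above. A sketch only; the codelab builds this inline."""
    return urlencode({
        'client_id':     product_id,
        'client_secret': product_secret,
        'code':          authorization_code,
        'grant_type':    'authorization_code',
    })
```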

To implement the endpoint, open and add:

@app.route('/api')
def api():
    global token
    if token == "":
        return "", 400
    try:
        image_url = wwn.fetch_snapshot_url(token)
    except APIError as err:
        return jsonify(err.result)
    return jsonify(codelab.classify_remote_image(image_url))

Note the use of the wwn.fetch_snapshot_url() method, which we will define in a later step. The codelab.classify_remote_image() method takes the snapshot's URL, downloads the image, and classifies it with TensorFlow, producing a JSON payload the frontend can consume.
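The route above depends on an APIError class and an error_result() helper that aren't shown in this section. Here is a minimal sketch of what they could look like; the codelab's actual definitions may differ:

```python
def error_result(message):
    # Assumed error shape: a dict with a single 'error' key.
    return {'error': message}

class APIError(Exception):
    """An API failure carrying a JSON-serializable payload for the /api route."""
    def __init__(self, result):
        super(APIError, self).__init__(result.get('error', 'API error'))
        self.result = result
```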

After we get our access token, we will need to fetch the JSON payload and parse out the snapshot_url. To do this, copy all of the following code snippets into wwn/

Start by replacing the fetch_snapshot_url() method in wwn/

def fetch_snapshot_url(token):
    headers = {
        'Authorization': "Bearer {0}".format(token),
    }
    req = urllib2.Request(nest_api_url, None, headers)
    response = urllib2.urlopen(req)
    data = json.loads(response.read())

At this point, the data variable will contain the full JSON document. However, we cannot assume that all Nest Users will have Nest Cams, so there is some extra logic needed for this function. If the account does not have any devices, there will be no devices key in the JSON payload. We therefore need to check if the devices key/value exists.

Append the following if block to the fetch_snapshot_url() method:

    # Verify the account has devices
    if 'devices' not in data:
        raise APIError(error_result("Nest account has no devices"))
    devices = data["devices"]

Now that we have checked devices, we need to check whether the account has cameras.

Append the following if block to the fetch_snapshot_url() method:

    # Verify the account has cameras
    if 'cameras' not in devices:
        raise APIError(error_result("Nest account has no cameras"))
    cameras = devices["cameras"]

Now that we have checked cameras, we will make sure we have at least one and return the snapshot_url field from the first one.

Append the following if block to the fetch_snapshot_url() method:

    # Verify the account has at least 1 Nest Cam
    if len(cameras.keys()) < 1:
        raise APIError(error_result("Nest account has no cameras"))

    camera_id = cameras.keys()[0]
    camera = cameras[camera_id]

    # Verify the Nest Cam has a Snapshot URL field
    if 'snapshot_url' not in camera:
        raise APIError(error_result("Camera has no snapshot URL"))
    snapshot_url = camera["snapshot_url"]

    return snapshot_url
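The traversal above can be exercised without hitting the network by factoring the checks into a pure function over an already-parsed payload. extract_snapshot_url is a name invented for this sketch, and ValueError stands in for the codelab's APIError to keep the example self-contained:

```python
def extract_snapshot_url(data):
    """Walk a parsed Nest API payload and return the first camera's
    snapshot_url, mirroring the checks in fetch_snapshot_url()."""
    if 'devices' not in data:
        raise ValueError("Nest account has no devices")
    devices = data['devices']
    if 'cameras' not in devices:
        raise ValueError("Nest account has no cameras")
    cameras = devices['cameras']
    if len(cameras) < 1:
        raise ValueError("Nest account has no cameras")
    camera = cameras[list(cameras.keys())[0]]
    if 'snapshot_url' not in camera:
        raise ValueError("Camera has no snapshot URL")
    return camera['snapshot_url']
```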

Now that we have implemented the wwn.get_access_token() and wwn.fetch_snapshot_url() methods, our /callback and /api routes are complete. Let's fire up the application.

If you're using a virtual environment, activate it:

$ . env/bin/activate


Run the app:

$ python

If you are prompted, allow incoming network connections.

Open http://localhost:5000/.

The page should look like this:

Once you authorize the integration in the following steps, the /callback endpoint is hit, and we should see the /callback request and subsequent / redirect in the terminal logs.

Click Login.

The logs now display a new GET /login line.

When we log in, we are redirected to the Nest Authorization screen. On the Nest Authorization screen, click Accept.

When we accept the integration, the Nest Authorization screen redirects to the Redirect URL configured for our product integration (http://localhost:5000/callback).

http://localhost:5000/ should look like this:

Place an object in front of the Nest Cam and click Fetch & Classify New Image.

The JavaScript requests the /api endpoint you implemented, and you should see the request in the terminal logs. It may take approximately 20 seconds for the page to change; be patient.

The page displays the image and top 5 classifications for what is in the image.

In the terminal, you will see a log line for each request made to the app.

Normally, you might store the token in an encrypted client-side session/cookie or a server-side database. However, for this codelab we are storing the access token on the file system. We can inspect the value of the token by opening tmp/token.txt. If the /callback route was successfully implemented, we should have a one-line value in this text file.
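File-backed token storage along these lines can be sketched with two small helpers. save_token and load_token are names invented here; the codelab's actual implementation may differ:

```python
import os

def save_token(token, path='tmp/token.txt'):
    """Persist the access token as a single line of text."""
    directory = os.path.dirname(path)
    if directory:
        os.makedirs(directory, exist_ok=True)
    with open(path, 'w') as f:
        f.write(token)

def load_token(path='tmp/token.txt'):
    """Return the stored token, or '' if none has been saved yet."""
    if not os.path.exists(path):
        return ''
    with open(path) as f:
        return f.read().strip()
```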

If you want to see what's returned by the Nest API, keep running and open a second terminal window. In the second window, run this cURL command to issue a GET request using the token from tmp/token.txt:

$ curl -L -H 'authorization: Bearer <paste token here>'

This cURL request fetches the complete document available with the token. An example response is provided here for an account with one paired Nest Cam. Some fields have been omitted for brevity:

{
    "devices": {
        "cameras": {
            "1UPN8x7PvtufsdiAXHEGvhMT_6hoQptUo8GuazzNjMnWsyvl6XcP4A": {
                "name": "Desk",
                "software_version": "205-600055",
                "where_id": "T6n2MsY_ooeTjE04K0KmvNOIwbBfpKbmWFBKhVUfKzUXs6RK6Kv6iA",
                "device_id": "1UPN8x7PvtufsdiAXHEGvhMT_6hoQptUo8GuazzNjMnWsyvl6XcP4A",
                "structure_id": "cene_40i14oSrQBZAZuJqbVpNt47SRdFOua2w_-oGqrqTVcgjR2hgw",
                "is_online": true,
                "is_streaming": true,
                "is_audio_input_enabled": true,
                "last_is_online_change": "2017-04-27T00:54:20.000Z",
                "is_video_history_enabled": true,
                "is_public_share_enabled": false,
                "last_event": {...},
                "name_long": "Desk Camera",
                "web_url": "",
                "app_url": "nestmobile://cameras/...",
                "snapshot_url": ""
            }
        }
    },
    "structures": {...},
    "metadata": {...}
}

For this codelab, we are most interested in the snapshot_url field of a camera. The snapshot_url field returns the URL to an image captured from the Nest Cam's live video stream. You can find more information about the snapshot_url and other Camera fields at

The code implements an endpoint for our JavaScript to consume via an HTTP GET request.

The JavaScript expects the image_url key to contain a full URL to a JPEG file to be used as the src attribute for an img HTML tag. The results key will contain the top 5 TensorFlow classification results for the image.

The JSON payload is structured as follows:

{
    "image_url": "https://snapshot_url",
    "results": {
        "teddy, teddy bear": "84.68%",
        "seat belt, seatbelt": "0.85%",
        "toyshop": "0.37%",
        "ballpoint, ballpoint pen, ballpen, Biro": "0.25%",
        "hamper": "0.18%"
    }
}
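Assembling a payload like the one above from classifier scores can be sketched as follows. build_api_response is a helper name invented here, and formatting the scores as percentage strings is an assumption based on the sample:

```python
def build_api_response(image_url, scores):
    """Assemble the /api JSON payload: the snapshot's image_url plus
    the five highest-scoring labels as percentage strings.

    `scores` maps label -> probability in [0, 1].
    """
    top5 = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:5]
    return {
        'image_url': image_url,
        'results': {label: '{0:.2f}%'.format(p * 100) for label, p in top5},
    }
```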

This JSON endpoint classifies the Nest Cam snapshot using a pretrained Inception-v3 TensorFlow model. This model was trained with images of bananas that are drastically different from what our Nest Cam sees in this codelab; that's what makes this truly incredible. The neural network is able to recognize the image contents in a fashion similar to a human brain.

Congratulations, the app is now feature complete. You can take an image from a Nest Cam and classify the contents using TensorFlow.

What we've covered

Next Steps

Learn More