TensorFlow is a multipurpose machine learning framework. It can be used anywhere from training huge models across clusters in the cloud to running models locally on an embedded system like your phone.

What you will build

A simple camera app that runs a TensorFlow image recognition program to identify flowers.

CC-BY by Felipe Venâncio

Most of this codelab will be using the terminal. Open it now.

Install TensorFlow

Before beginning the tutorial, you need to install TensorFlow version 1.7 and Pillow.

If you have a working python environment you can install them with:

pip install --upgrade "tensorflow==1.7.*"

pip install Pillow

If this doesn't work, follow the instructions here.
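
If the installs succeed, a quick way to confirm that both packages are importable (a generic check, not a step from the codelab) is:

import tensorflow as tf  # should import without error
import PIL               # Pillow installs under the PIL namespace

print(tf.__version__)    # expect a 1.7.x version string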

If you have the git repository from the first codelab

This codelab uses files generated during the TensorFlow for Poets 1 codelab. If you have not completed that codelab we recommend you go do it now. If you prefer not to, instructions for downloading the missing files are given in the next subsection.


In TensorFlow for Poets 1, you also cloned the relevant files for this codelab. Ensure that the clone is your current working directory, check out the branch, and check the contents, as follows:

cd tensorflow-for-poets-2
ls

This directory should contain three subdirectories: android/, scripts/, and tf_files/. Check that tf_files/ contains the files you generated in the first codelab:

ls tf_files/
retrained_graph.pb  retrained_labels.txt

Otherwise (if you don't have the files from the first Codelab)

Clone the Git repository

The following command will clone the Git repository containing the files required for this codelab:

git clone https://github.com/googlecodelabs/tensorflow-for-poets-2

Now cd into the directory of the clone you just created. That's where you will be working for the rest of this codelab:

cd tensorflow-for-poets-2

The repo contains three directories: android/, scripts/, and tf_files/

Check out the files from the end_of_first_codelab branch

git checkout end_of_first_codelab

ls tf_files

Test the model

Next, verify that the model is producing reasonable results before you start modifying it.

The scripts/ directory contains a simple command line script, label_image.py, to test the network. Now we'll test label_image.py on this picture of some daisies:

flower_photos/daisy/3475870145_685a19116d.jpg

Image CC-BY, by Fabrizio Sciami

Now test the model. If you are using a different architecture, you will need to set the "--input_size" flag.

python -m scripts.label_image \
  --graph=tf_files/retrained_graph.pb  \
  --image=tf_files/flower_photos/daisy/3475870145_685a19116d.jpg

The script will print the probability the model has assigned to each flower type. Something like this:

Evaluation time (1-image): 0.140s

daisy 0.7361
dandelion 0.242222
tulips 0.0185161
roses 0.0031544
sunflowers 8.00981e-06

This should hopefully produce a sensible top label for your example. You'll be using this command to make sure you're still getting sensible results as you do further processing on the model file to prepare it for use in a mobile app.

Using TOCO

Mobile devices have significant limitations, so any pre-processing that can reduce an app's footprint is worth considering. With the launch of TFLite, a new graph converter is included in the TensorFlow installation. This program is called the "TensorFlow Lite Optimizing Converter", or TOCO.

It is installed with TensorFlow as a command line script, so you can easily access it. To check that toco is correctly installed on your machine, print the TOCO help with the following command:

toco --help

We will use toco to optimize our model and convert it to the TFLite format. toco can do this in a single step, but we will do it in two so that we can try out the optimized model in between.

Convert the model to TFLite format

While toco has advanced capabilities for dealing with quantized graphs, it also applies several optimizations that are useful for our graph (which does not use quantization). These include pruning unused graph nodes and joining operations into more efficient composite operations.

The pruning is especially helpful given that TFLite does not yet support training operations, so these should not be included in the graph.

While TOCO can be used to optimize regular graph.pb files, TFLite uses a different serialization format from regular TensorFlow. TensorFlow uses Protocol Buffers, while TFLite uses FlatBuffers.

The primary benefit of FlatBuffers comes from the fact that they can be memory-mapped, and used directly from disk without being loaded and parsed. This gives much faster startup times, and gives the operating system the option of loading and unloading the required pages from the model file, instead of killing the app when it is low on memory.

We can create the TFLite FlatBuffer with the following command:

IMAGE_SIZE=224
toco \
  --graph_def_file=tf_files/retrained_graph.pb \
  --output_file=tf_files/optimized_graph.lite \
  --input_format=TENSORFLOW_GRAPHDEF \
  --output_format=TFLITE \
  --input_shape=1,${IMAGE_SIZE},${IMAGE_SIZE},3 \
  --input_array=input \
  --output_array=final_result \
  --inference_type=FLOAT \
  --input_data_type=FLOAT

This should output an optimized_graph.lite file in your tf_files directory.
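
The codelab tests the converted model inside the Android app, but if your TensorFlow build ships the TFLite Python interpreter (tf.contrib.lite.Interpreter in later 1.x releases, tf.lite.Interpreter in current ones), you can also sanity-check the FlatBuffer from Python. This is an optional sketch under that assumption, not a step from the codelab:

import numpy as np
import tensorflow as tf

# tf.lite.Interpreter in current TensorFlow; tf.contrib.lite.Interpreter in older 1.x.
interpreter = tf.lite.Interpreter(model_path="tf_files/optimized_graph.lite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Feed a random float "image" just to confirm the graph runs end to end.
dummy = np.random.random_sample(input_details[0]['shape']).astype(np.float32)
interpreter.set_tensor(input_details[0]['index'], dummy)
interpreter.invoke()

print(interpreter.get_tensor(output_details[0]['index']))  # one probability per label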

Install Android Studio

If you don't have it installed already, install Android Studio 3.0+.

Open the project with Android Studio

Open the project in Android Studio by taking the following steps:

  1. Open Android Studio. After it loads, select "Open an existing Android Studio project" from the popup:

  2. In the file selector, choose tensorflow-for-poets-2/android/tflite from your working directory.
  3. The first time you open the project you will get a "Gradle Sync" popup asking about using the Gradle wrapper. Click "OK".

The app can run on either a real Android device or in the Android Studio emulator.

Set up an Android device

You can't load the app from Android Studio onto your phone unless you activate "developer mode" and "USB debugging". This is a one-time setup process.

Follow these instructions.

Or set up the emulator with camera access

Android Studio makes setting up an emulator easy. Since this app uses the camera, you may want to set up the emulator's camera to use your computer's camera instead of the default test pattern.

To do this you need to create a new device in the Android Virtual Device (AVD) Manager. From the main AVD Manager page, select "Create Virtual Device":

Then on the "Verify Configuration" page, the last page of the virtual device setup, select "Show Advanced Settings":

With the advanced settings shown, you can set both camera sources to use the host computer's webcam:

Test build and install the app

Before making any changes to the app let's run the version that ships with the repository.

Run a Gradle sync and then hit play in Android Studio to start the build and install process.

Next you will need to select your device from this popup:

Now allow the TensorFlow Demo to access your camera and files:

Now that the app is installed, click the app icon to launch it. This version of the app uses the standard MobileNet, pre-trained on the 1000 ImageNet categories. It should look something like this ("Android" is not one of the available categories):

The default app setup classifies images into one of the 1000 ImageNet classes, using the standard MobileNet, without the retraining we did in part 1.

Now let's modify the app so that it uses our retrained model for our custom image categories.

Add your model files to the project

The demo project is configured to search for graph.lite and labels.txt files in the android/tflite/app/src/main/assets/ directory. Replace those two files with your versions. The following command accomplishes this:

cp tf_files/optimized_graph.lite android/tflite/app/src/main/assets/graph.lite 
cp tf_files/retrained_labels.txt android/tflite/app/src/main/assets/labels.txt 

Run your app

In Android Studio, run a Gradle sync so the build system can find your files, and then hit play to start the build and install process as before.

It should look something like this:

CC-BY by Felipe Venâncio

You can hold the power and volume-down buttons together to take a screenshot.

Now try a web search for flowers, point the camera at the computer screen, and see if those pictures are correctly classified.

Or have a friend take a picture of you and find out what kind of TensorFlower you are!

So now that you have the app running, let's look at the TensorFlow Lite specific code.

TensorFlow-Android AAR

This app uses a pre-compiled TFLite Android Archive (AAR). This AAR is hosted on jcenter.

The following lines in the module's build.gradle file include the newest version of the AAR, from TensorFlow's Bintray Maven repository, in the project.

build.gradle

repositories {
    maven {
        url 'https://google.bintray.com/tensorflow'
    }
}

dependencies {
    // ...
    compile 'org.tensorflow:tensorflow-lite:+'
}

We use the following block to instruct the Android Asset Packaging Tool that .lite or .tflite assets should not be compressed. This is important, as the .lite file will be memory-mapped, and that does not work when the file is compressed.

build.gradle

android {
    aaptOptions {
        noCompress "tflite"
        noCompress "lite"
    }
}

Using the TFLite Java API

The code that interfaces with TFLite is all contained in ImageClassifier.java.

Setup

The first block of interest is the constructor for the ImageClassifier:

ImageClassifier.java

ImageClassifier(Activity activity) throws IOException {
    tflite = new Interpreter(loadModelFile(activity));
    labelList = loadLabelList(activity);
    imgData =
        ByteBuffer.allocateDirect(
            4 * DIM_BATCH_SIZE * DIM_IMG_SIZE_X * DIM_IMG_SIZE_Y * DIM_PIXEL_SIZE);
    imgData.order(ByteOrder.nativeOrder());
    labelProbArray = new float[1][labelList.size()];
    Log.d(TAG, "Created a Tensorflow Lite Image Classifier.");
}

There are a few lines that should be discussed in more detail.

The following line creates the TFLite interpreter:

ImageClassifier.java

tflite = new Interpreter(loadModelFile(activity));

This line instantiates a TFLite interpreter. The interpreter does the job of a tf.Session (for those familiar with TensorFlow outside of TFLite). We pass the interpreter a MappedByteBuffer containing the model. The local function loadModelFile creates a MappedByteBuffer from the activity's graph.lite asset file.
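
The body of loadModelFile is not shown in this codelab. As a minimal sketch of the memory-mapping pattern (the asset name graph.lite matches the app; other details are illustrative, not the exact source):

// Sketch: memory-map the model file out of the APK's assets.
// Illustrative only; the app's actual loadModelFile may differ in detail.
private MappedByteBuffer loadModelFile(Activity activity) throws IOException {
    AssetFileDescriptor fileDescriptor = activity.getAssets().openFd("graph.lite");
    FileInputStream inputStream = new FileInputStream(fileDescriptor.getFileDescriptor());
    FileChannel fileChannel = inputStream.getChannel();
    long startOffset = fileDescriptor.getStartOffset();
    long declaredLength = fileDescriptor.getDeclaredLength();
    // Map only the model's byte range; this works because the asset is stored
    // uncompressed (see the aaptOptions noCompress setting above).
    return fileChannel.map(FileChannel.MapMode.READ_ONLY, startOffset, declaredLength);
}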

The following lines create the input data buffer:

ImageClassifier.java

imgData = ByteBuffer.allocateDirect(
    4 * DIM_BATCH_SIZE * DIM_IMG_SIZE_X * DIM_IMG_SIZE_Y * DIM_PIXEL_SIZE);

This byte buffer is sized to contain the image data once converted to float. The interpreter can accept float arrays directly as input, but the ByteBuffer is more efficient as it avoids extra copies in the interpreter.
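
The buffer is filled later by convertBitmapToByteBuffer (called from classifyFrame, below). A minimal sketch of that conversion, assuming the float MobileNet expects RGB values scaled to [0, 1] (the real app's normalization constants may differ):

// Sketch: unpack each ARGB pixel into three floats (R, G, B).
// The /255f scaling is an assumption about the model's expected input range.
private void convertBitmapToByteBuffer(Bitmap bitmap) {
    imgData.rewind();
    int[] intValues = new int[DIM_IMG_SIZE_X * DIM_IMG_SIZE_Y];
    bitmap.getPixels(intValues, 0, bitmap.getWidth(), 0, 0,
        bitmap.getWidth(), bitmap.getHeight());
    for (int pixel : intValues) {
        imgData.putFloat(((pixel >> 16) & 0xFF) / 255f);  // red
        imgData.putFloat(((pixel >> 8) & 0xFF) / 255f);   // green
        imgData.putFloat((pixel & 0xFF) / 255f);          // blue
    }
}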

The following lines load the label list and create the output buffer:

labelList = loadLabelList(activity);
//...
labelProbArray = new float[1][labelList.size()];

The output buffer is a float array with one element for each label, where the model will write the output probabilities.

Run the model

The second block of interest is the classifyFrame method. It takes a Bitmap as input, runs the model, and returns the text to print in the app.

ImageClassifier.java

String classifyFrame(Bitmap bitmap) {
 // ...
 convertBitmapToByteBuffer(bitmap);
 // ...
 tflite.run(imgData, labelProbArray);
 // ...
 String textToShow = printTopKLabels();
 // ...
}

This method does three things. First, it converts and copies the input Bitmap to the imgData ByteBuffer for input to the model. Then it calls the interpreter's run method, passing the input buffer and the output array as arguments. The interpreter sets the values in the output array to the probability calculated for each class. The input and output nodes are defined by the arguments to the toco conversion step that created the .lite model file earlier.
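
As a stripped-down illustration of the last step, here is a hypothetical printTopLabel that reports only the single best class (the app's actual printTopKLabels shows the top three results):

// Sketch: find the most probable class in the output array.
// printTopLabel is a simplified, hypothetical stand-in for the app's
// printTopKLabels method.
private String printTopLabel() {
    int best = 0;
    for (int i = 1; i < labelList.size(); i++) {
        if (labelProbArray[0][i] > labelProbArray[0][best]) {
            best = i;
        }
    }
    return labelList.get(best) + ": " + labelProbArray[0][best];
}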

Here are some links for more information: