
TensorFlow is a multipurpose machine learning framework. TensorFlow can be used anywhere from training huge models across clusters in the cloud, to running models locally on an embedded system like your phone.

This codelab uses TensorFlow Lite to run an image recognition model on an Android device.

What you'll learn

How to run a TensorFlow Lite image recognition model on an Android device.

What you will build

A simple camera app that runs a TensorFlow image recognition program to identify flowers.


This codelab will be using Colaboratory and Android Studio.

Open the Colab, which shows how to train a classifier to recognize flowers using transfer learning, convert the classifier to TFLite, and download the converted model for use in the mobile app.

Clone the Git repository

The following command will clone the Git repository containing the files required for this codelab:

git clone https://github.com/tensorflow/examples.git

Now cd into the directory of the clone you just created. That's where you will be working for the rest of this codelab:

cd examples


Install Android Studio

If you don't have it installed already, install Android Studio 3.0 or later.

Open the project with Android Studio

Open the project in Android Studio by taking the following steps:

  1. Open Android Studio. After it loads, select "Open an existing Android Studio project" from the popup:

  2. In the file selector, choose examples/lite/examples/image_classification/android from your working directory.
  3. The first time you open the project, you will get a "Gradle Sync" popup asking whether to use the Gradle wrapper. Click "OK".

The app can run on either a real Android device or in the Android Studio Emulator.

Set up an Android device

You can't load the app onto your phone from Android Studio unless you activate "developer mode" and "USB debugging". This is a one-time setup process.

Follow these instructions.

Or set up the emulator with camera access

Android Studio makes setting up an emulator easy. Since this app uses the camera, you may want to set up the emulator's camera to use your computer's camera instead of the default test pattern.

To do this, create a new device in the "Android Virtual Device (AVD) Manager".

From the main AVD Manager page, select "Create Virtual Device":

Then on the "Verify Configuration" page, the last page of the virtual device setup, select "Show Advanced Settings":

With the advanced settings shown, you can set both camera sources to use the host computer's webcam:

Test build and install the app

Before making any changes to the app let's run the version that ships with the repository.

Run a Gradle sync, and then hit play, in Android Studio to start the build and install process.

Next you will need to select your device from this popup:

Now allow the TensorFlow demo app to access your camera and files:

Now that the app is installed, click the app icon to launch it. This version of the app uses the standard MobileNet, pre-trained on the 1000 ImageNet categories. It should look something like this ("Android" is not one of the available categories):

The default app setup classifies images into one of the 1000 ImageNet classes, using the standard MobileNet.

Now let's modify the app so that it uses our retrained model for the custom image categories trained in the Colab.

Add your model files to the project

The project is configured to look for a model.tflite file and a labels.txt file in the lite/examples/image_classification/android/app/src/main/assets directory. Replace the existing model and labels with the ones you downloaded from the Colab:

Modify the app's code

To make our model work with the app, we need to switch the app to use the float model instead of the quantized model. This is a two-part change.

First, open strings.xml (Android Studio menu path: app -> res -> values -> strings.xml), which lives at:

examples/lite/examples/image_classification/android/app/src/main/res/values/strings.xml

Change lines 15 and 16 to swap the order of the two items:

<!-- before -->
<item>Quantized</item>
<item>Float</item>

<!-- after -->
<item>Float</item>
<item>Quantized</item>

Second, open ClassifierFloatMobileNet.java (Android Studio menu path: app -> java -> org.tensorflow.lite.examples.classification -> tflite -> ClassifierFloatMobileNet), which lives in:

examples/lite/examples/image_classification/android/app/src/main/java/org/tensorflow/lite/examples/classification/tflite

Change line 56 to:

// before
return "mobilenet_v1_1.0_224.tflite";

// after
return "model.tflite";

Make sure to save all the changes.

In Android Studio, run a Gradle sync so the build system can find your files, and then hit play to start the build and install process as before.

It should look something like this:


You can hold the power and volume-down buttons together to take a screenshot.

Now try a web search for flowers, point the camera at the computer screen, and see if those pictures are correctly classified.

Or have a friend take a picture of you and find out what kind of TensorFlower you are!

Now that you have the app running, let's look at the TensorFlow Lite-specific code.

TensorFlow-Android AAR

This app uses a pre-compiled TFLite Android Archive (AAR).

The following lines in the project's top-level build.gradle file configure the Maven repositories the pre-compiled AAR is fetched from. As the NOTE in the snippet says, the AAR dependency itself is declared in the individual module's build.gradle file.

build.gradle

repositories {
    google()
    jcenter()
}

dependencies {
    classpath 'com.android.tools.build:gradle:3.2.1'
    // NOTE: Do not place your application dependencies here; they belong in the individual module build.gradle files
}

We use the following block to instruct the Android Asset Packaging Tool that .tflite assets should not be compressed. This is important because the .tflite file will be memory-mapped, and that does not work when the file is compressed.

build.gradle

android {
    aaptOptions {
        noCompress "tflite"
    }
}

Using the TFLite Java API

The code that interfaces with TFLite is all contained in Classifier.java.

Setup

The first block of interest is the Classifier constructor:

Classifier.java

Classifier(Activity activity) throws IOException {
    tfliteModel = loadModelFile(activity);
    tflite = new Interpreter(tfliteModel, tfliteOptions);
    labels = loadLabelList(activity);
    imgData =
        ByteBuffer.allocateDirect(
            DIM_BATCH_SIZE  *
            getImageSizeX() *
            getImageSizeY() *
            DIM_PIXEL_SIZE  *
            getNumBytesPerChannel());
    imgData.order(ByteOrder.nativeOrder());
    Log.d(TAG, "Created a Tensorflow Lite Image Classifier.");
}

There are a few lines that should be discussed in more detail.

The following line creates the TFLite interpreter:

Classifier.java

tflite = new Interpreter(tfliteModel, tfliteOptions);

This line instantiates a TFLite interpreter. The interpreter does the job of a tf.function (for those familiar with TensorFlow outside of TFLite). We pass the interpreter a MappedByteBuffer containing the model. The local function loadModelFile creates a MappedByteBuffer from the activity's model.tflite asset file.
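
For reference, the memory-mapping that loadModelFile performs typically looks like the following sketch. This is a minimal illustration, not the exact code in Classifier.java; the getModelPath() helper here stands in for whatever method returns the asset file name (e.g. "model.tflite").

// Sketch: memory-map the .tflite asset so the interpreter can read it without copying.
// Requires: android.app.Activity, android.content.res.AssetFileDescriptor,
//           java.io.FileInputStream, java.io.IOException,
//           java.nio.MappedByteBuffer, java.nio.channels.FileChannel
private MappedByteBuffer loadModelFile(Activity activity) throws IOException {
  AssetFileDescriptor fileDescriptor = activity.getAssets().openFd(getModelPath());
  FileInputStream inputStream = new FileInputStream(fileDescriptor.getFileDescriptor());
  FileChannel fileChannel = inputStream.getChannel();
  long startOffset = fileDescriptor.getStartOffset();
  long declaredLength = fileDescriptor.getDeclaredLength();
  // Mapping (rather than decompressing) the file is why the aaptOptions noCompress
  // setting above is needed: a compressed asset cannot be memory-mapped.
  return fileChannel.map(FileChannel.MapMode.READ_ONLY, startOffset, declaredLength);
}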

The following lines create the input data buffer:

Classifier.java

imgData = ByteBuffer.allocateDirect(
            DIM_BATCH_SIZE  *
            getImageSizeX() *
            getImageSizeY() *
            DIM_PIXEL_SIZE  *
            getNumBytesPerChannel()
);

This byte buffer is sized to contain the image data once converted to float. The interpreter can accept float arrays directly as input, but the ByteBuffer is more efficient as it avoids extra copies in the interpreter.
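
To make the layout concrete, filling imgData from a camera Bitmap looks roughly like the sketch below. The method name and the divide-by-255 normalization are illustrative assumptions; the actual conversion in Classifier.java may scale pixel values differently depending on the float model.

// Sketch: unpack each ARGB pixel of the Bitmap and write normalized R, G, B floats
// into imgData, in the order the model expects. Requires android.graphics.Bitmap.
private void convertBitmapToByteBuffer(Bitmap bitmap) {
  int[] intValues = new int[getImageSizeX() * getImageSizeY()];
  bitmap.getPixels(intValues, 0, bitmap.getWidth(), 0, 0, bitmap.getWidth(), bitmap.getHeight());
  imgData.rewind();
  for (int pixel : intValues) {
    imgData.putFloat(((pixel >> 16) & 0xFF) / 255.0f);  // red
    imgData.putFloat(((pixel >> 8) & 0xFF) / 255.0f);   // green
    imgData.putFloat((pixel & 0xFF) / 255.0f);          // blue
  }
}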

The following lines load the label list:

labels = loadLabelList(activity);
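
loadLabelList simply reads the labels file from the app's assets, one label per line, into a List<String>. A minimal sketch follows; the hard-coded "labels.txt" file name is an assumption, and the real code may read the path from the classifier instead.

// Sketch: read one label per line from the labels file bundled in assets.
// Requires: java.io.BufferedReader, java.io.InputStreamReader, java.util.ArrayList, java.util.List
private List<String> loadLabelList(Activity activity) throws IOException {
  List<String> labels = new ArrayList<>();
  BufferedReader reader =
      new BufferedReader(new InputStreamReader(activity.getAssets().open("labels.txt")));
  String line;
  while ((line = reader.readLine()) != null) {
    labels.add(line);
  }
  reader.close();
  return labels;
}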

Run the model

The second block of interest is the runInference() method. It takes a Bitmap as input, runs the model and returns the text to print in the app.

ClassifierFloatMobileNet.java

tflite.run(imgData, labelProbArray);
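
tflite.run fills labelProbArray with one score per label, so mapping the output back to text is just a matter of pairing scores with the loaded labels. Here is a hedged sketch of that post-processing; the field names follow the constructor above, and the app itself sorts and displays the top few results rather than only the best one.

// Sketch: the output array has shape [1][numLabels] - one batch, one score per label.
float[][] labelProbArray = new float[1][labels.size()];
tflite.run(imgData, labelProbArray);

// Find the most likely label and format it for display.
int best = 0;
for (int i = 1; i < labels.size(); i++) {
  if (labelProbArray[0][i] > labelProbArray[0][best]) {
    best = i;
  }
}
String textToShow = labels.get(best) + ": " + labelProbArray[0][best];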

Here are some links for more information: