Android Things makes developing connected embedded devices easy by providing the same Android development tools, best-in-class Android framework, and Google APIs that make developers successful on mobile. With the TensorFlow Lite inference library for Android, developers can easily integrate TensorFlow and machine learning into their apps on Android Things.

TensorFlow™ is an open source software library for numerical computation using data flow graphs. Nodes in the graph represent mathematical operations, while the graph edges represent the multidimensional data arrays (tensors) communicated between them. TensorFlow has become a popular framework for training machine learning models and using those models to solve problems.

What you'll build

In this codelab, you will use the TensorFlow Lite inference library for Android to build a device that captures images from the device camera and locally classifies them against a pre-trained ImageNet model.

What you'll learn

What you'll need

Update Android SDK

Before you begin building apps for Things, you must:

Flash Android Things

If you have not already installed Android Things on your development board, follow the official image flashing instructions for your board:

Assemble the hardware

  1. Install the Rainbow HAT on top of your developer board.
  2. Connect the camera module to the connector marked CAMERA on your board.

Connect to the device

Verify that your development computer is properly connected to your device using the adb tool:

$ adb devices
List of devices attached
1b2f21d4e1fe0129        device

The expansion connector on the development board exposes Peripheral I/O signals for application use. The Rainbow HAT sits on top of the expansion connector, providing a variety of inputs and outputs for developers to interact with.

The Peripheral I/O ports on the Rainbow HAT used in this codelab are connected to the following signals. These are also listed on the back of the Rainbow HAT:

Peripheral Device    Raspberry Pi 3    i.MX7D
'C' Button           BCM16             GPIO2_IO07
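
As a hedged illustration of how these signals are used from code, the sketch below opens the 'C' button with the Android Things button contrib driver. The starter project already wires the button up for you; the loadPhoto() hook shown here (implemented later in this codelab) is an assumption for this sketch.

ImageClassifierActivity.java (sketch)

import java.io.IOException;
import com.google.android.things.contrib.driver.button.Button;

private Button mButton;

private void initButton() throws IOException {
    // "BCM16" is the 'C' button signal on a Raspberry Pi 3;
    // use "GPIO2_IO07" on an i.MX7D board (see the table above).
    mButton = new Button("BCM16", Button.LogicState.PRESSED_WHEN_LOW);
    mButton.setOnButtonEventListener((button, pressed) -> {
        if (pressed) {
            loadPhoto(); // assumed hook: kick off capture + classification
        }
    });
}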

Camera Module

The developer boards are equipped with a Camera Serial Interface (CSI) connector to integrate supported camera modules with Android Things. The CSI bus is a high-speed, dedicated interface for capturing camera data. Supported camera modules connected to the developer board are accessed using the standard Android Camera APIs.
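
For reference, here is a minimal sketch (not part of the starter project; the TAG constant and method name are assumptions) of how the CSI camera module shows up through the standard camera2 API. The CameraHandler class you will use later encapsulates logic along these lines.

import android.content.Context;
import android.hardware.camera2.CameraAccessException;
import android.hardware.camera2.CameraManager;
import android.util.Log;

private static final String TAG = "CameraProbe";

private void logAvailableCameras(Context context) {
    CameraManager manager =
            (CameraManager) context.getSystemService(Context.CAMERA_SERVICE);
    try {
        // The CSI camera module is enumerated like any other camera,
        // typically as the only available id on these boards.
        String[] cameraIds = manager.getCameraIdList();
        if (cameraIds.length == 0) {
            Log.w(TAG, "No camera module found on the CSI connector");
        } else {
            Log.d(TAG, "Using camera: " + cameraIds[0]);
        }
    } catch (CameraAccessException e) {
        Log.e(TAG, "Unable to enumerate cameras", e);
    }
}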

Click the following link to download the starter project for this codelab:

Download source code

...or you can clone the GitHub repository from the command line:

$ git clone https://github.com/googlecodelabs/androidthings-imageclassifier

About the project

The starter project contains the following:

The sample photo for this project is located at res/drawable/sampledog_224x224.png:

This is "Proto", an adorable Portuguese Water Dog. I know you want to spend some time looking at him, so take your time...

Import and run the starter project

Open the imageclassifier-start project in Android Studio and run it:

  1. Open Android Studio, and close any existing projects you may have opened with File → Close Project.
  2. Choose Import project from the welcome screen.
  3. Navigate to the project directory you downloaded in the previous step.
  4. Select the imageclassifier-start subdirectory.
  5. Click Open. The project will take a few moments to import and build.
  6. Select Run → Run 'app' from the menu, or click the Run icon in the toolbar.

Once the app launches on the device, look for the following lines in the Android Logcat output:

Android Logcat

... D/ImageClassifierActivity: Initializing...
... D/ImageClassifierActivity: Press the button to take a picture

If you have a graphical display connected, it will display the same result message:

Tap the 'C' button on the HAT, and you should see "I don't understand what I see." appear in the Android Logcat output:

Android Logcat

... D/ImageClassifierActivity: I don't understand what I see

If you have a graphical display connected, it will display the same result message and our sample image of Proto:

Nothing else should happen at this point. In the next steps, you will add artificial intelligence (TensorFlow Lite) to actually recognize the photo, and then camera capture support.

The starter project doesn't yet do anything with the image. If you look at the doRecognize() method, it ignores the image parameter and reports an empty set of results, meaning that nothing was recognized. Let's wire in an inference engine here, so that the Android Things device can actually recognize what it sees.

Add the TensorFlow Lite Library

TensorFlow is an open-source library for machine learning and deep neural networks created by Google. TensorFlow Lite is TensorFlow's lightweight solution for mobile and embedded devices. It is available as a Gradle dependency on JCenter.

Add the library dependency to your app-level build.gradle file and re-sync the Gradle project.

build.gradle

dependencies {
    ...
    implementation 'org.tensorflow:tensorflow-lite:0.1.1'
}
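
If you ever add a .tflite model to a project's assets yourself, it is also common practice to keep aapt from compressing it, so the Interpreter can memory-map it straight out of the APK. The starter project may already configure this; the snippet below is only a hedged reminder:

build.gradle

android {
    ...
    aaptOptions {
        // Keep the model uncompressed in the APK so it can be memory-mapped.
        noCompress "tflite"
    }
}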

Add the TensorFlow Lite calls

Open ImageClassifierActivity.java and make the following changes:

  1. Add code to initialize a TensorFlow Lite Interpreter from the model file (the actual deep neural network) in assets. Read the file using the loadModelFile() helper method (a sketch of such a helper appears after the snippet below).
  2. Use the readLabels() helper method to pull in the list of labels that map the neural network's outputs to the names of the categories it recognizes:

ImageClassifierActivity.java

import org.tensorflow.lite.Interpreter;

public class ImageClassifierActivity extends Activity {
    private static final String LABELS_FILE = "labels.txt";
    private static final String MODEL_FILE = "mobilenet_quant_v1_224.tflite";

    ...
 
    private Interpreter mTensorFlowLite;
    private List<String> mLabels;

    ...

    /**
     * Initialize the classifier that will be used to process images.
     */
    private void initClassifier() {
        try {
            mTensorFlowLite = new Interpreter(TensorFlowHelper.loadModelFile(this, MODEL_FILE));
            mLabels = TensorFlowHelper.readLabels(this, LABELS_FILE);
        } catch (IOException e) {
            Log.w(TAG, "Unable to initialize TensorFlow Lite.", e);
        }
    }

    /**
     * Clean up the resources used by the classifier.
     */
    private void destroyClassifier() {
        mTensorFlowLite.close();
    }

    ...
}
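
For context, here is a minimal sketch of what a helper like TensorFlowHelper.loadModelFile() could look like, assuming it memory-maps the .tflite file out of the APK assets (the starter project ships its own implementation):

TensorFlowHelper.java (sketch)

import android.content.Context;
import android.content.res.AssetFileDescriptor;
import java.io.FileInputStream;
import java.io.IOException;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;

public static MappedByteBuffer loadModelFile(Context context, String modelFile)
        throws IOException {
    AssetFileDescriptor fileDescriptor = context.getAssets().openFd(modelFile);
    try (FileInputStream inputStream =
                 new FileInputStream(fileDescriptor.getFileDescriptor())) {
        FileChannel fileChannel = inputStream.getChannel();
        // Map only the region of the APK that contains the model file.
        return fileChannel.map(FileChannel.MapMode.READ_ONLY,
                fileDescriptor.getStartOffset(), fileDescriptor.getDeclaredLength());
    }
}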
  3. Now let's implement the doRecognize() method. This method is called whenever the user requests an image classification. It takes the image as a parameter and invokes the onPhotoRecognitionReady() callback with a list of Recognition items describing what was recognized in the input image (for example, ["beer bottle", "water bottle"]), along with the confidence level of each result:

ImageClassifierActivity.java

public class ImageClassifierActivity extends Activity {
    ...
    private void doRecognize(Bitmap image) {
        // Allocate space for the inference results
        byte[][] confidencePerLabel = new byte[1][mLabels.size()];
        // Allocate buffer for image pixels.
        int[] intValues = new int[TF_INPUT_IMAGE_WIDTH * TF_INPUT_IMAGE_HEIGHT];
        ByteBuffer imgData = ByteBuffer.allocateDirect(
                DIM_BATCH_SIZE * TF_INPUT_IMAGE_WIDTH * TF_INPUT_IMAGE_HEIGHT * DIM_PIXEL_SIZE);
        imgData.order(ByteOrder.nativeOrder());

        // Read image data into buffer formatted for the TensorFlow model
        TensorFlowHelper.convertBitmapToByteBuffer(image, intValues, imgData);

        // Run inference on the network with the image bytes in imgData as input,
        // storing results on the confidencePerLabel array.
        mTensorFlowLite.run(imgData, confidencePerLabel);

        // Get the results with the highest confidence and map them to their labels
        Collection<Recognition> results = TensorFlowHelper.getBestResults(confidencePerLabel, mLabels);
        // Report the results with the highest confidence
        onPhotoRecognitionReady(results);
    }
 
    ...
}
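
The convertBitmapToByteBuffer() helper also comes with the starter project. As a hedged sketch of what it does for this quantized model, which expects one unsigned byte per RGB channel for each pixel:

TensorFlowHelper.java (sketch)

import android.graphics.Bitmap;
import java.nio.ByteBuffer;

public static void convertBitmapToByteBuffer(Bitmap bitmap, int[] intValues,
                                             ByteBuffer imgData) {
    imgData.rewind();
    // Copy the packed ARGB pixels out of the Bitmap.
    bitmap.getPixels(intValues, 0, bitmap.getWidth(), 0, 0,
            bitmap.getWidth(), bitmap.getHeight());
    for (int pixel : intValues) {
        // Unpack each pixel into the R, G, B bytes the model expects.
        imgData.put((byte) ((pixel >> 16) & 0xFF)); // red
        imgData.put((byte) ((pixel >> 8) & 0xFF));  // green
        imgData.put((byte) (pixel & 0xFF));         // blue
    }
}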

That's it. Now, if you run your app, you should see in the Android Logcat output that it recognizes something when you press the 'C' button.

Android Logcat

... D/ImageClassifierActivity: Running photo recognition
... D/ImageClassifierActivity: Using sample photo in res/drawable/sampledog_224x224.png
... D/ImageRecognition: ...
... D/ImageClassifierActivity: curly-coated retriever, Border collie or English springer

If you have a graphical display connected, you should see the same output there as well:

Next, we will add code to fetch images from the board's camera, so you can point the camera at real objects and put the classifier to a proper test.

Add Camera permission

To access the camera, you will need the proper permissions.

Add the permission declaration to your app's manifest file.

AndroidManifest.xml

<!-- TODO: ADD CAMERA SUPPORT -->
<uses-permission android:name="android.permission.CAMERA"/>
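
On Android Things, permissions declared in the manifest are granted when the device boots, so a freshly installed app may need a device reboot before the camera becomes usable (see the note at the end of this step). If you want to make that failure mode obvious, an optional sanity check along these lines works; the log message is illustrative:

ImageClassifierActivity.java (sketch)

if (checkSelfPermission(Manifest.permission.CAMERA)
        != PackageManager.PERMISSION_GRANTED) {
    // Permissions are granted at boot on Android Things; reboot the
    // device after the first install if this check fails.
    Log.e(TAG, "Camera permission not granted yet; reboot the device");
}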

Update ImageClassifierActivity to use the camera

Let's add the code to manage the camera connection and take photos. Open ImageClassifierActivity and make the following changes:

  1. Add the variables that will hold the camera-related objects:

ImageClassifierActivity.java

...

public class ImageClassifierActivity extends Activity {

    ...
    private CameraHandler mCameraHandler;
    private ImagePreprocessor mImagePreprocessor;

    ...
}
  2. Now update the initCamera() method so that it initializes the ImagePreprocessor and CameraHandler objects.
  3. Implement an OnImageAvailableListener to be called when an image from the camera is ready. The listener should invoke ImagePreprocessor.preprocessImage() and pass the processed Bitmap to onPhotoReady(), which will reroute it to the image recognition method you implemented in the previous section.
  4. Pass the new listener to CameraHandler.initializeCamera():

ImageClassifierActivity.java

/**
 * Initialize the camera that will be used to capture images.
 */
private void initCamera() {
    mImagePreprocessor = new ImagePreprocessor(PREVIEW_IMAGE_WIDTH, PREVIEW_IMAGE_HEIGHT,
            TF_INPUT_IMAGE_WIDTH, TF_INPUT_IMAGE_HEIGHT);
    mCameraHandler = CameraHandler.getInstance();
    mCameraHandler.initializeCamera(this,
            PREVIEW_IMAGE_WIDTH, PREVIEW_IMAGE_HEIGHT, null,
            new ImageReader.OnImageAvailableListener() {
                @Override
                public void onImageAvailable(ImageReader imageReader) {
                    Bitmap bitmap = mImagePreprocessor.preprocessImage(imageReader.acquireNextImage());
                    onPhotoReady(bitmap);
                }
            });
}
  5. Finally, implement closeCamera() and loadPhoto() with code that closes the camera and triggers the CameraHandler:

ImageClassifierActivity.java

/**
 * Clean up resources used by the camera.
 */
private void closeCamera() {
    mCameraHandler.shutDown();
}

/**
 * Load the image that will be used in the classification process.
 * When done, the method {@link #onPhotoReady(Bitmap)} must be called with the image.
 */
private void loadPhoto() {
    mCameraHandler.takePicture();
}
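
The starter project already invokes these hooks from the activity lifecycle. As a rough sketch of the idea (the exact wiring in the project may differ, for example by routing camera work through a background handler):

ImageClassifierActivity.java (sketch)

@Override
protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    // ... existing setup from the starter project ...
    initClassifier();
    initCamera();
}

@Override
protected void onDestroy() {
    super.onDestroy();
    destroyClassifier();
    closeCamera();
}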

That's it. Install the app on your board and run it (if you are using an Android Studio version older than 3.0, reboot the device after installing so the new camera permission is granted). When you press the 'C' button, you should see several messages in logcat, similar to this:

Android Logcat

... D/ImageClassifierActivity: Running photo recognition
...
... D/CameraHandler: Capture request created.
...
... D/ImageClassifierActivity: I see a window shade
... D/CameraHandler: CaptureSession closed

Congratulations! You've successfully built an image classifier using TensorFlow Lite and Android Things! Here are some things you can do to go deeper.

What we've covered