Machine learning has become an important toolset in mobile development, enabling many smart capabilities in modern mobile apps. If you are a mobile developer who is new to machine learning and want a quick introduction to the machine learning techniques you can integrate into your mobile app, this is the codelab for you!

In this codelab, you will go through the end-to-end process of training a machine learning model that can recognize handwritten digit images using TensorFlow, and then deploying it to an Android app.

After finishing the codelab, you will have a working Android app that can recognize handwritten digits that you draw.

What is TensorFlow?

TensorFlow is an end-to-end open source platform for machine learning. It has a comprehensive, flexible ecosystem of tools, libraries and community resources that lets researchers push the state-of-the-art in machine learning and developers easily build and deploy machine learning powered applications.

TensorFlow Lite is a product in the TensorFlow ecosystem to help developers run TensorFlow models on mobile, embedded, and IoT devices. It enables on-device machine learning inference with low latency and a small binary size.

What you'll learn

What you'll need

We will start by using TensorFlow to define and train a machine learning model that can recognize handwritten digits, or in machine learning terms, a digit classifier model. Next, we will convert the trained TensorFlow model to TensorFlow Lite to get it ready for deployment.

This step is presented as a Python notebook that you can open in Google Colab.

Open in Colab

After finishing this step, you will have a TensorFlow Lite digit classifier model that is ready for deployment to a mobile app.

Download the Android skeleton app

Download a zip archive that contains the source code of the Android app used in this codelab. Extract the archive on your local machine.

Download ZIP

Import the app to Android Studio

  1. Open Android Studio.
  2. Click Import project (Gradle, Eclipse ADT, etc.).
  3. Choose the lite/codelabs/digit_classifier/android/start/ folder from where you extracted the archive to.
  4. Wait for the import process to finish.

Add the TensorFlow Lite model to the assets folder

Copy the mnist.tflite model that you created in the previous step into the assets folder of the app module.

Update build.gradle

  1. Go to build.gradle of the app module and find this block.
dependencies {
  ...
  // TODO: Add TF Lite
  ...
}
  2. Add TensorFlow Lite to the app's dependencies.
implementation 'org.tensorflow:tensorflow-lite:1.14.0'
  3. Then find this code block.
android {
  ...
  // TODO: Add an option to avoid compressing TF Lite model file
  ...
}
  4. Add these lines to prevent Android from compressing TensorFlow Lite model files when generating the app binary. If you forget to add this option, the interpreter will not be able to load the model.
aaptOptions {
  noCompress "tflite"
}
  5. Click Sync Now to apply the changes.

org.tensorflow.lite.Interpreter is the Java class that allows you to run your TensorFlow Lite model in your Android app. We will start by initializing an Interpreter instance with our model.

  1. Open DigitClassifier.kt. This is where we will add TensorFlow Lite code.
  2. First, add a field to the DigitClassifier class.
class DigitClassifier(private val context: Context) {
  private var interpreter: Interpreter? = null
  ...
}
  3. Android Studio now raises an error: Unresolved reference: Interpreter. Follow its suggestion and import org.tensorflow.lite.Interpreter to fix the error.
  4. Next, find this code block.
private fun initializeInterpreter() {
    // TODO: Load the TF Lite model from file and initialize an interpreter.
    ...
}
  5. Then add these lines to initialize a TensorFlow Lite interpreter instance using the mnist.tflite model from the assets folder.
// Load the TF Lite model from the asset folder.
val assetManager = context.assets
val model = loadModelFile(assetManager, "mnist.tflite")

// Initialize TF Lite Interpreter with NNAPI enabled.
val options = Interpreter.Options()
options.setUseNNAPI(true)
val interpreter = Interpreter(model, options)
  6. Add these lines right below to read the input shape from the model.
// Read the input shape from the model file.
val inputShape = interpreter.getInputTensor(0).shape()
inputImageWidth = inputShape[1]
inputImageHeight = inputShape[2]
modelInputSize = FLOAT_TYPE_SIZE * inputImageWidth * inputImageHeight * PIXEL_SIZE

// Finish interpreter initialization.
this.interpreter = interpreter
  7. After we have finished using the TensorFlow Lite interpreter, we should close it to free up resources. In this sample, we synchronize the interpreter's lifecycle with the MainActivity lifecycle, and close the interpreter when the activity is about to be destroyed. Find this comment in the DigitClassifier#close() method.
// TODO: close the TF Lite interpreter here
  8. Then add this line.
interpreter?.close()
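As a sanity check on the input-shape arithmetic in initializeInterpreter above: for the standard MNIST model the input tensor shape is [1, 28, 28, 1], so the buffer size works out as follows. This is a standalone sketch; the values of FLOAT_TYPE_SIZE and PIXEL_SIZE (4 and 1) are assumptions matching the constants defined in the skeleton's companion object.

```kotlin
// Shape of the MNIST model's input tensor: [batch, height, width, channels].
val inputShape = intArrayOf(1, 28, 28, 1)

// Assumed constants from DigitClassifier's companion object:
// a float is 4 bytes, and a grayscale pixel has a single channel.
val FLOAT_TYPE_SIZE = 4
val PIXEL_SIZE = 1

val inputImageWidth = inputShape[1]   // 28
val inputImageHeight = inputShape[2]  // 28
val modelInputSize = FLOAT_TYPE_SIZE * inputImageWidth * inputImageHeight * PIXEL_SIZE

println(modelInputSize)  // prints 3136: one 4-byte float per pixel of a 28x28 image
```

In other words, the ByteBuffer we feed to the interpreter must hold exactly one float per input pixel.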

Our TensorFlow Lite interpreter is set up, so let's write code to recognize the digit in the input image. We will need to:

  1. Preprocess the input: convert the input Bitmap to a ByteBuffer matching the model's input shape.
  2. Run inference with the TensorFlow Lite interpreter.
  3. Post-process the output: find the digit with the highest probability and format the result.

Let's write some code.

  1. Find this code block in DigitClassifier.kt.
private fun classify(bitmap: Bitmap): String {
  ...
  // TODO: Add code to run inference with TF Lite.
  ...
}
  2. Add code to convert the input Bitmap instance to a ByteBuffer instance to feed to the model.
// Preprocessing: resize the input image to match the model input shape.
val resizedImage = Bitmap.createScaledBitmap(
  bitmap,
  inputImageWidth,
  inputImageHeight,
  true
)
val byteBuffer = convertBitmapToByteBuffer(resizedImage)
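The convertBitmapToByteBuffer helper referenced above does the per-pixel work: it averages each pixel's RGB channels into one grayscale value, normalizes it to [0, 1], and writes it as a float into a direct ByteBuffer. Here is a framework-free sketch of that conversion, operating on a plain IntArray of ARGB pixels instead of a Bitmap so it can run off-device; the exact averaging formula is an assumption based on typical MNIST preprocessing.

```kotlin
import java.nio.ByteBuffer
import java.nio.ByteOrder

// Convert an array of ARGB pixels into a ByteBuffer of normalized grayscale
// floats, mirroring what convertBitmapToByteBuffer does with Bitmap pixels.
fun pixelsToByteBuffer(pixels: IntArray): ByteBuffer {
    // One 4-byte float per pixel, in the device's native byte order.
    val buffer = ByteBuffer.allocateDirect(4 * pixels.size)
    buffer.order(ByteOrder.nativeOrder())
    for (pixel in pixels) {
        val r = (pixel shr 16) and 0xFF
        val g = (pixel shr 8) and 0xFF
        val b = pixel and 0xFF
        // Average the channels to grayscale, then scale to [0, 1].
        buffer.putFloat((r + g + b) / 3.0f / 255.0f)
    }
    buffer.rewind()
    return buffer
}

// A white pixel maps to 1.0f and a black pixel maps to 0.0f.
val buffer = pixelsToByteBuffer(intArrayOf(0xFFFFFFFF.toInt(), 0xFF000000.toInt()))
println(buffer.getFloat())  // prints 1.0
println(buffer.getFloat())  // prints 0.0
```

Because the model was trained on 28x28 grayscale images with pixel values in [0, 1], the app must apply the same normalization at inference time.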
  3. Then run inference with the preprocessed input.
// Define an array to store the model output.
val output = Array(1) { FloatArray(OUTPUT_CLASSES_COUNT) }

// Run inference with the input data.
interpreter?.run(byteBuffer, output)
  4. Then identify the digit with the highest probability from the model output, and return a human-readable string that contains the prediction result and confidence.
// Post-processing: find the digit that has the highest probability
// and return it as a human-readable string.
val result = output[0]
val maxIndex = result.indices.maxBy { result[it] } ?: -1
val resultString = "Prediction Result: %d\nConfidence: %.2f"
  .format(maxIndex, result[maxIndex])

return resultString
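The argmax post-processing above can be tried in isolation: given the model's ten output probabilities, pick the index of the largest one. This is a standalone sketch with made-up probabilities standing in for real model output.

```kotlin
// Hypothetical model output for one image: ten class probabilities, one per digit.
val result = floatArrayOf(0.01f, 0.02f, 0.01f, 0.05f, 0.01f,
                          0.01f, 0.01f, 0.85f, 0.02f, 0.01f)

// The index of the highest probability is the predicted digit.
val maxIndex = result.indices.maxBy { result[it] } ?: -1

// Format the prediction and its confidence for display.
val resultString = "Prediction Result: %d\nConfidence: %.2f"
  .format(maxIndex, result[maxIndex])

println(resultString)  // the predicted digit here is 7, with confidence 0.85
```

Because the model's final layer is a softmax over the ten digit classes, the value at maxIndex can be read directly as the model's confidence.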

Let's deploy the app to an Android Emulator or a physical Android device to test it.

  1. Click Run in the Android Studio toolbar to run the app.
  2. Draw a digit on the drawing pad and see if the app can recognize it.

You have gone through an end-to-end journey of training a digit classification model on the MNIST dataset using TensorFlow, and deploying the model to a mobile app with TensorFlow Lite.

Next steps