1. Before you begin
In the first Codelab in this series, you created a very simple app that used Image Labeling to parse the contents of an image. You passed it a picture of a daisy, and it told you it saw things like a petal or the sky. Then, in the second Codelab, you switched to Python to train a new, custom model that recognizes five different types of flower.
In this codelab you'll update the app from the first lab with the model from the second!
You can get the full source code for this codelab by cloning this repo. You'll see subdirectories for Android and iOS. The previous codelab's code is available as ImageClassifierStep1 if you want to follow along. The finished code for this codelab is available as ImageClassifierStep2.
Prerequisites
- You should have completed the first two codelabs in this learning path
What you'll build and learn
- Integrate a custom model, trained in the previous lab, into an Android or iOS app
What you'll need
- Android Studio, available at developer.android.com/studio, for the Android part of the lab
- Xcode, available in the Apple App Store, for the iOS part of the lab
2. Get the Starter App
First you'll need the app from the Build your first Computer Vision App on Android or iOS Codelab. If you have gone through the lab, it will be called ImageClassifierStep1. If you don't want to go through the lab, you can clone the finished version from the repo.
Open it in Android Studio, do whatever updates you need, and when it's ready run the app to be sure it works. You should see something like this:
It's quite a primitive app, but it shows some very powerful functionality with just a little code. However, if you want this flower to be recognized as a daisy, and not just as a flower, you'll have to update the app to use your custom model from the Create a custom model for your image classifier codelab.
3. Update build.gradle to use Custom ML Kit Models
- Using Android Studio, find the app-level build.gradle file. The easiest way to do this is in the project explorer. Make sure Android is selected at the top, and you'll see a folder for Gradle Scripts at the bottom.
- Open the one that is for the Module, with your app name followed by ‘.app', as shown here (Module: ImageClassifierStep1.app):
- At the bottom of the file, find the dependencies setting. In there you should see this line:
implementation 'com.google.mlkit:image-labeling:17.0.1'
The version number might be different. Always find the latest version number from the ML Kit site at: https://developers.google.com/ml-kit/vision/image-labeling/android
- Replace this with the custom image labeling library reference. The version number for this can be found at: https://developers.google.com/ml-kit/vision/image-labeling/custom-models/android
implementation 'com.google.mlkit:image-labeling-custom:16.3.1'
- Additionally, you'll be adding the .tflite model that you created in the previous lab. You don't want this model to be compressed when Android Studio compiles your app, so make sure you use this setting in the android section of the same build.gradle file:
aaptOptions {
    noCompress "tflite"
}
Make sure it's not within any other setting. It should be nested directly under the android tag. Here's an example:
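In this sketch, everything else inside the android block is just a placeholder for whatever your project already contains:

android {
    // ... compileSdkVersion, defaultConfig, buildTypes, etc. stay as they are ...

    // Keep the .tflite file uncompressed so it can be memory-mapped at runtime
    aaptOptions {
        noCompress "tflite"
    }
}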
4. Add the TFLite Model
In the previous codelab you created your custom model and downloaded it as model.tflite.
In your project, find your assets folder that currently contains flower1.jpg. Copy the model to that folder as follows:
- Right-click the Assets folder in Android Studio. In the menu that opens, select Reveal in Finder. (‘Show in Explorer' on Windows, and ‘Show in Files' on Linux.)
- You'll be taken to the directory on the file system. Copy the model.tflite file into that directory, alongside flower1.jpg.
Android Studio will update to show both files in your assets folder:
You're now ready to update your code.
5. Update your code for the custom model
The first step will be to add some code to load the custom model.
- In your MainActivity file, add the following to your onCreate, immediately below the line that reads setContentView(R.layout.activity_main).
This uses a LocalModel.Builder to create a model from the model.tflite asset. If Android Studio complains by turning ‘LocalModel' red, press ALT + Enter to import the library. It should add an import for com.google.mlkit.common.model.LocalModel for you.
val localModel = LocalModel.Builder()
    .setAssetFilePath("model.tflite")
    .build()
Previously, in your btn.setOnClickListener handler you were using the default model. It was set up with this code:
val labeler = ImageLabeling.getClient(ImageLabelerOptions.DEFAULT_OPTIONS)
You'll replace that to use the custom model.
- Set up a custom options object:
val options = CustomImageLabelerOptions.Builder(localModel)
    .setConfidenceThreshold(0.7f)
    .setMaxResultCount(5)
    .build()
This replaces the default options with a customized set. The confidence threshold sets a bar for the quality of predictions to return. If you look back at the sample at the top of this codelab, where the image was a daisy, you had 4 predictions, each with a value beside it, such as ‘Sky' being .7632.
You can effectively filter out lower quality results by using a high confidence threshold. Setting this to 0.9, for example, wouldn't return any label with a confidence lower than that. setMaxResultCount() is useful in models with a lot of classes, but as this model only has 5, you'll just leave it at 5.
Now that you have options for the labeler, you can change the instantiation of the labeler to:
val labeler = ImageLabeling.getClient(options)
The rest of your code will run without modification. Give it a try!
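For reference, once the custom labeler is in place, the click handler ends up looking roughly like the sketch below. The btn, txtOutput, and bitmap names are assumed from the first codelab, so adjust them to match your own code. If InputImage turns red, import com.google.mlkit.vision.common.InputImage in the same way as before.

// btn, txtOutput and the bitmap decoded from flower1.jpg are assumed from the first codelab
btn.setOnClickListener {
    // Wrap the bitmap in an InputImage so ML Kit can process it
    val image = InputImage.fromBitmap(bitmap, 0)
    labeler.process(image)
        .addOnSuccessListener { labels ->
            // Each ImageLabel carries the class name and the model's confidence
            var outputText = ""
            for (label in labels) {
                outputText += "${label.text} : ${label.confidence}\n"
            }
            txtOutput.text = outputText
        }
        .addOnFailureListener { e ->
            txtOutput.text = "Error labeling image: ${e.localizedMessage}"
        }
}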
Here you can see that this flower is now identified as a daisy, with a probability of .959!
Let's say you added a second flower image, and reran with that:
It identifies the flower as ‘roses'.
You might wonder why it says roses instead of just "rose". That's because in the dataset, the labels are given by the folder names, and unfortunately those folder names are a little inconsistent, sometimes using singular (like ‘daisy') and sometimes using plural (like ‘roses'). Don't confuse this with the model attempting to count the items in the image – it's much more primitive than that, and can only identify the flower types!
6. Get the iOS Starter App
- First you'll need the app from the first Codelab. If you have gone through the lab, it will be called ImageClassifierStep1. If you don't want to go through the lab, you can clone the finished version from the repo. Please note that the pods and .xcworkspace aren't present in the repo, so before continuing to the next step be sure to run ‘pod install' from the same directory as the .xcodeproj.
- Open ImageClassifierStep1.xcworkspace in Xcode. Note that you should use the .xcworkspace and not the .xcodeproj, because you have bundled ML Kit using pods, and the workspace will load these.
For the rest of this lab, I'll be running the app in the iPhone simulator, which should support the build targets from the codelab. If you want to use your own device, you might need to change the build target in your project settings to match your iOS version.
Run it and you'll see something like this:
Note the very generic classifications – petal, flower, sky. The model you created in the previous codelab was trained to detect 5 varieties of flower, including this one – a daisy.
For the rest of this codelab, you'll look at what it will take to upgrade your app with the custom model.
7. Use Custom ML Kit Image Labeler Pods
The first app used a Podfile to get the base ML Kit Image Labeler libraries and model. You'll need to update it to use the custom image labeling libraries.
- Find the file called Podfile in your project directory. Open it, and you'll see something like this:
platform :ios, '10.0'
target 'ImageClassifierStep1' do
pod 'GoogleMLKit/ImageLabeling'
end
- Change the pod declaration from ImageLabeling to ImageLabelingCustom, like this:
platform :ios, '10.0'
target 'ImageClassifierStep1' do
pod 'GoogleMLKit/ImageLabelingCustom'
end
- Once you're done, use the terminal to navigate to the directory containing the Podfile (as well as the .xcworkspace) and run pod install.
After a few moments the MLKitImageLabeling libraries will be removed, and the custom ones added. You can now open your .xcworkspace to edit your code.
8. Add the TFLite Model to Xcode
In the previous codelab you created a custom model and downloaded it as model.tflite. If you don't have it on hand, go back and run that codelab, or go through the Colab code here. If you don't have access to Google Colab, the notebook is available at this link.
- With the workspace open in Xcode, drag model.tflite onto your project. It should be in the same folder as the rest of your files, such as ViewController.swift or Main.storyboard.
- A dialog will pop up with options for adding the file. Ensure that Add to Targets is selected, or the model won't be bundled with the app when it's deployed to a device.
Note that the ‘Add to Targets' entry will show ImageClassifierStep1 if you started from that app and are continuing through this lab step-by-step, or ImageClassifierStep2 (as shown) if you jumped ahead to the finished code.
This will ensure that you can load the model. You'll see how to do that in the next step.
9. Update your Code for the Custom Model
- Open your ViewController.swift file. You may see an error on the ‘import MLKitImageLabeling' line at the top of the file. This is because you removed the generic image labeling libraries when you updated your Podfile. Feel free to delete this line, and update it with the following:
import MLKitVision
import MLKit
import MLKitImageLabelingCommon
import MLKitImageLabelingCustom
It might be easy to speed-read these and think they're repeating the same import, but note that one ends in "Common" and the other in "Custom"!
- Next you'll load the custom model that you added in the previous step. Find the getLabels() func. Beneath the line that reads visionImage.orientation = image.imageOrientation, add these lines:
// Add this code to use a custom model
let localModelFilePath = Bundle.main.path(forResource: "model", ofType: "tflite")
let localModel = LocalModel(path: localModelFilePath!)
- Find the code for specifying the options for the generic ImageLabeler. It's probably giving you an error since those libraries were removed:
let options = ImageLabelerOptions()
Replace that with this code, which uses a CustomImageLabelerOptions and specifies the local model:
let options = CustomImageLabelerOptions(localModel: localModel)
...and that's it! Try running your app now! When you try to classify the image it should be more accurate – and tell you that you're looking at a daisy with high probability!
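For reference, here's a rough sketch of what getLabels() ends up looking like with all of the changes in place. The lblOutput name is an assumption for whichever label you use to display results, so match it to your own code:

// Sketch of getLabels() after switching to the custom model.
// lblOutput is assumed to be the UILabel that displays the results.
func getLabels(with image: UIImage) {
    let visionImage = VisionImage(image: image)
    visionImage.orientation = image.imageOrientation

    // Load the bundled model.tflite as a LocalModel
    let localModelFilePath = Bundle.main.path(forResource: "model", ofType: "tflite")
    let localModel = LocalModel(path: localModelFilePath!)

    // Configure the labeler to use the custom model
    let options = CustomImageLabelerOptions(localModel: localModel)
    let labeler = ImageLabeler.imageLabeler(options: options)

    labeler.process(visionImage) { labels, error in
        guard error == nil, let labels = labels, !labels.isEmpty else { return }
        // Each label carries the class name and the model's confidence
        var labelTexts = ""
        for label in labels {
            labelTexts += "\(label.text) : \(label.confidence)\n"
        }
        self.lblOutput.text = labelTexts
    }
}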
Let's say you added a second flower image, and reran with that:
The app successfully detected that this image matched the label ‘roses'!
10. Congratulations!
You've now gone from building an app that used a generic model to recognize the contents of an image, to creating your own ML model to recognize specific things, such as flowers, and then updating your app to use your custom model.
The resulting app is, of course, very limited because it relies on bundled image assets. However, the ML part is working nicely. You could, for example, use AndroidX Camera to take frames from a live feed and classify them to see what flowers your phone recognizes!
From here the possibilities are endless – and if you have your own data for something other than flowers, you have the foundations of what you need to build an app that recognizes them using Computer Vision. These are just the first few steps into a much broader world, and hopefully you've enjoyed working through them!