Goals

Firebase ML enables you to deploy your model over-the-air. This allows you to keep the app size small and only download the ML model when needed, experiment with multiple models, or update your ML model without having to republish the entire app.

In this codelab you will convert an iOS app using a static TFLite model into an app using a model dynamically served from Firebase. You will learn how to:

  1. Deploy TFLite models to Firebase ML and access them from your app
  2. Log model-related metrics with Analytics
  3. Select which model is loaded through Remote Config
  4. A/B test different models

Prerequisites

Before starting this codelab, make sure you have installed:

  1. Xcode (a recent version)
  2. CocoaPods

Add Firebase to the project

  1. Go to the Firebase console.
  2. Select Create New Project and name your project "Firebase ML iOS Codelab".

Download the Code

Begin by cloning the sample project and running pod install in the project directory:

git clone https://github.com/FirebaseExtended/codelab-digitclassifier-ios.git
cd codelab-digitclassifier-ios
pod install --repo-update

If you don't have git installed, you can also download the sample project from its GitHub page. Once you've downloaded the project, open it in Xcode, run it, and play around with the digit classifier to get a feel for how it works.

Set up Firebase

Follow the documentation to create a new Firebase project. Once your project is ready, download its GoogleService-Info.plist file from the Firebase console and drag it into the root of the Xcode project.

Add Firebase to your Podfile and run pod install.

pod 'Firebase/MLCommon'
pod 'FirebaseMLModelInterpreter', '0.20.0'

At the top of your AppDelegate file, import Firebase:

import Firebase

Then, in the didFinishLaunchingWithOptions method, add a call to configure Firebase:

FirebaseApp.configure()
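
Putting the two together, a minimal AppDelegate would look like the following sketch (your project's generated AppDelegate will contain additional boilerplate; only the import and the configure() call are new):

import UIKit
import Firebase

@UIApplicationMain
class AppDelegate: UIResponder, UIApplicationDelegate {
  var window: UIWindow?

  func application(_ application: UIApplication,
                   didFinishLaunchingWithOptions launchOptions: [UIApplication.LaunchOptionsKey: Any]?) -> Bool {
    // Configure Firebase before calling any other Firebase APIs.
    FirebaseApp.configure()
    return true
  }
}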

Run the project again to make sure the app is configured correctly and does not crash on launch.

Deploy the model to Firebase ML

Deploying a model to Firebase ML is useful for two main reasons:

  1. We can keep the app install size small and only download the model if needed
  2. The model can be updated regularly and with a different release cycle than the entire app

Before we can replace the static model in our app with a dynamically downloaded model from Firebase, we need to deploy it to Firebase ML. The model can be deployed either via the console or programmatically using the Firebase Admin SDK. In this step we will deploy via the console.

To keep things simple, we'll use the TensorFlow Lite model that's already in our app. First, open the Firebase console and click on Machine Learning in the left navigation panel. Then navigate to "Custom" and click on the "Add Model" button.

When prompted, give the model a descriptive name like mnist_v1 and upload the .tflite model file from the codelab project directory.

Download the model in the app

Choosing when to download the remote model from Firebase into your app can be tricky, since TFLite models can grow relatively large. Ideally we want to avoid loading the model immediately when the app launches: if the model is used for only one feature and the user never uses that feature, we will have downloaded a significant amount of data for no reason. We can also set download options, such as only fetching models when the device is connected to Wi-Fi. If you want to ensure that the model is available even without a network connection, you should also bundle it as part of the app as a backup.
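
For example, the ModelDownloadConditions type used later in this step can express a Wi-Fi-only policy. A minimal sketch (in this codelab we keep cellular downloads enabled):

// Disallow cellular access so the model is only fetched over Wi-Fi.
let wifiOnlyConditions = ModelDownloadConditions(allowsCellularAccess: false,
                                                 allowsBackgroundDownloading: true)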

For the sake of simplicity, we'll remove the default bundled model and always download a model from Firebase when the app starts. This way, when running digit recognition, you can be sure the inference is running with the model provided by Firebase.

At the top of ModelDownloader.swift, import the Firebase module.

import Firebase

Then implement the following methods.

static func downloadModel(named name: String,
                          completion: @escaping (RemoteModel?, DownloadError?) -> Void) {
  guard FirebaseApp.app() != nil else {
    completion(nil, .firebaseNotInitialized)
    return
  }
  // If either observer is still set, a download is already in flight.
  guard success == nil && failure == nil else {
    completion(nil, .downloadInProgress)
    return
  }

  let remoteModel = CustomRemoteModel(name: name)
  let conditions = ModelDownloadConditions(allowsCellularAccess: true,
                                           allowsBackgroundDownloading: true)

  success = NotificationCenter.default.addObserver(forName: .firebaseMLModelDownloadDidSucceed,
                                                   object: nil,
                                                   queue: nil) { (notification) in
    defer { success = nil; failure = nil }
    guard let userInfo = notification.userInfo,
        let model = userInfo[ModelDownloadUserInfoKey.remoteModel.rawValue] as? RemoteModel
    else {
      completion(nil, .downloadReturnedEmptyModel)
      return
    }
    guard model.name == name else {
      completion(nil, .downloadReturnedWrongModel)
      return
    }
    completion(model, nil)
  }
  failure = NotificationCenter.default.addObserver(forName: .firebaseMLModelDownloadDidFail,
                                                   object: nil,
                                                   queue: nil) { (notification) in
    defer { success = nil; failure = nil }
    guard let userInfo = notification.userInfo,
        let error = userInfo[ModelDownloadUserInfoKey.error.rawValue] as? Error
    else {
      completion(nil, .mlkitError(underlyingError: DownloadError.unknownError))
      return
    }
    completion(nil, .mlkitError(underlyingError: error))
  }
  ModelManager.modelManager().download(remoteModel, conditions: conditions)
}

// Attempts to fetch the model from disk, downloading the model if it does not already exist.
static func fetchModel(named name: String,
                       completion: @escaping (String?, DownloadError?) -> Void) {
  let remoteModel = CustomRemoteModel(name: name)
  if ModelManager.modelManager().isModelDownloaded(remoteModel) {
    ModelManager.modelManager().getLatestModelFilePath(remoteModel) { (path, error) in
      completion(path, error.map { DownloadError.mlkitError(underlyingError: $0) })
    }
  } else {
    downloadModel(named: name) { (model, error) in
      guard let model = model else {
        let underlyingError = error ?? DownloadError.unknownError
        let compositeError = DownloadError.mlkitError(underlyingError: underlyingError)
        completion(nil, compositeError)
        return
      }
      ModelManager.modelManager().getLatestModelFilePath(model) { (path, pathError) in
        completion(path, pathError.map { DownloadError.mlkitError(underlyingError: $0) })
      }
    }
  }
}
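
These methods rely on a couple of supporting declarations in ModelDownloader.swift. The codelab project defines its own versions; for reference, they look roughly like this sketch, with the DownloadError cases inferred from the calls above:

// Observer handles for the download notifications. They double as an
// "is a download already in progress?" flag checked by downloadModel.
private static var success: NSObjectProtocol?
private static var failure: NSObjectProtocol?

enum DownloadError: Error {
  case firebaseNotInitialized
  case downloadInProgress
  case downloadReturnedEmptyModel
  case downloadReturnedWrongModel
  case unknownError
  case mlkitError(underlyingError: Error)
}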

In ViewController.swift's viewDidLoad, replace the DigitClassifier initialization call with our new model download method.

// Download the model from Firebase
print("Fetching model...")
ModelDownloader.fetchModel(named: "mnist_v1") { (filePath, error) in
  guard let path = filePath else {
    if let error = error {
      print(error)
    }
    return
  }
  print("Model download complete")

  // Initialize a DigitClassifier instance
  DigitClassifier.newInstance(modelPath: path) { result in
    switch result {
    case let .success(classifier):
      self.classifier = classifier
    case .error(_):
      self.resultLabel.text = "Failed to initialize."
    }
  }
}

Re-run your app. After a few seconds, you should see a log in Xcode indicating the remote model has successfully downloaded. Try drawing a digit and confirm the behavior of the app has not changed.

Measure model accuracy

We will measure the accuracy of the model by tracking user feedback on its predictions. When a user taps "Yes", they indicate that the prediction was accurate.

We can log an Analytics event to track the accuracy of our model. First, we must add Analytics to the Podfile before it can be used in the project:

pod 'Firebase/Analytics'

Then, at the top of ViewController.swift, import Firebase:

import Firebase

And add the following line of code in the correctButtonPressed method.

Analytics.logEvent("correct_inference", parameters: nil)
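
If you later want to break accuracy down by digit, you could attach parameters to the event. The following is a hypothetical extension (predictedDigit is not defined in the codelab's ViewController; it stands in for whatever value your classifier returned):

// Hypothetical: log the predicted digit alongside the feedback event so
// Analytics can slice accuracy per class.
Analytics.logEvent("correct_inference", parameters: [
  "predicted_digit": predictedDigit  // assumed to hold the classifier's output
])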

Run the app again and draw a digit. Press the "Yes" button a couple of times to send feedback that the inference was accurate.

Debug analytics

Generally, events logged by your app are batched together over a period of approximately one hour and uploaded together. This approach conserves the battery on end users' devices and reduces network data usage. However, to validate your analytics implementation (and to view your analytics in the DebugView report), you can enable Debug mode on your development device to upload events with a minimal delay.

To enable Analytics Debug mode on your development device, specify the following command line argument in your Xcode scheme (Product > Scheme > Edit Scheme > Run > Arguments):

-FIRDebugEnabled

Run the app again and draw a digit. Press the "Yes" button a couple of times to send feedback that the inference was accurate. Now you can view the log events in near real time via the debug view in the Firebase console. Click on Analytics > DebugView from the left navigation bar.

Train and deploy a new model

When you come up with a new version of your model, such as one with a better architecture or one trained on a larger or updated dataset, you may feel tempted to simply replace the current model with the new version. However, a model that performs well in testing does not necessarily perform equally well in production. Therefore, let's A/B test the original model and the new one in production.

Enable Firebase Model Management API

In this step, we will enable the Firebase Model Management API to deploy a new version of our TensorFlow Lite model using Python code.

Create a bucket to store your ML models

In the Firebase console, go to Storage and click Get started.

Follow the dialog to get your bucket set up.

Enable Firebase ML API

Go to the Firebase ML API page on the Google Cloud Console and click Enable.

When asked, select the Digit Classifier app.

Now we will train a new version of the model by using a larger dataset, and we will then deploy it programmatically directly from the training notebook using the Firebase Admin SDK.

Download the private key for service account

Before we can use the Firebase Admin SDK, we'll need to create a service account. Open the Service Accounts panel of the Firebase console and click the button to create a new service account for the Firebase Admin SDK. When prompted, click the Generate New Private Key button. We'll use the service account key to authenticate our requests from the Colab notebook.

Now we can train and deploy the new model.

  1. Open the Colab notebook and make a copy of it under your own Drive.
  2. Run the first cell, "Train an improved TensorFlow Lite model", by clicking the play button to its left. This will train a new model and may take some time.
  3. Running the second cell will create a file upload prompt. Upload the JSON file you downloaded from the Firebase console when creating your service account.
  4. Run the last two cells.

After running the Colab notebook, you should see a second model in the Firebase console. Make sure the second model is named mnist_v2.

Select which model to use with Remote Config

Now that we have two separate models, we'll add a parameter for selecting which model to download at runtime. The value of the parameter the client receives will determine which model it downloads. First, open the Firebase console and click on the Remote Config button in the left nav menu. Then, click on the "Add Parameter" button.

Name the new parameter model_name and give it a default value of mnist_v1. Click Publish Changes to apply the update. By putting the model's name in a Remote Config parameter, we can test multiple models without adding a new parameter for each model we want to test.

After adding the parameter, you should see it in the console.

In our code, we'll need to add a check when loading the remote model: if we receive the model_name parameter from Remote Config, we'll fetch the remote model with the corresponding name; otherwise we'll fall back to mnist_v1. Before we can use Remote Config, we have to add it to our project by specifying it as a dependency in the Podfile:

pod 'Firebase/RemoteConfig'

Run pod install and re-open the Xcode project. In ModelDownloader.swift, implement the fetchParameterizedModel method.

static func fetchParameterizedModel(completion: @escaping (String?, DownloadError?) -> Void) {
  RemoteConfig.remoteConfig().fetchAndActivate { (status, error) in
    DispatchQueue.main.async {
      if let error = error {
        let compositeError = DownloadError.mlkitError(underlyingError: error)
        completion(nil, compositeError)
        return
      }

      let modelName: String
      // An unset key yields an empty default value, so treat an empty string as missing.
      if let name = RemoteConfig.remoteConfig().configValue(forKey: "model_name").stringValue,
         !name.isEmpty {
        modelName = name
      } else {
        let defaultName = "mnist_v1"
        print("Unable to fetch model name from config, falling back to default \(defaultName)")
        modelName = defaultName
      }

      fetchModel(named: modelName, completion: completion)
    }
  }
}
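
Optionally, you can also register an in-app default for the parameter so the fallback lives alongside the rest of your Remote Config setup. A sketch using Remote Config's setDefaults API, called once at startup before the first fetch:

// Register an in-app default so configValue(forKey: "model_name") returns
// "mnist_v1" before the first successful fetch-and-activate.
RemoteConfig.remoteConfig().setDefaults(["model_name": "mnist_v1" as NSObject])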

Finally, in ViewController.swift, replace the fetchModel call with the new method we just implemented.

// Download the model from Firebase
print("Fetching model...")
ModelDownloader.fetchParameterizedModel { (filePath, error) in
  guard let path = filePath else {
    if let error = error {
      print(error)
    }
    return
  }
  print("Model download complete")

  // Initialize a DigitClassifier instance
  DigitClassifier.newInstance(modelPath: path) { result in
    switch result {
    case let .success(classifier):
      self.classifier = classifier
    case .error(_):
      self.resultLabel.text = "Failed to initialize."
    }
  }
}

Re-run the app and make sure it still loads the model correctly.

Create an A/B test

Finally, we can use Firebase A/B Testing to see which of our two models performs better. Go to Analytics > Events in the Firebase console. If the correct_inference event is showing, mark it as a conversion event; if not, go to Analytics > Conversion Events, click "Create a New Conversion Event", and enter correct_inference.

Now go to "Remote Config in the Firebase console, select the "A/B test" button from the more options menu on the "model_name" parameter we just added.

In the menu that follows, accept the default name.

Select your app from the dropdown and change the targeting criteria to 50% of active users.

If you were able to set the correct_inference event as a conversion earlier, use this event as the primary metric to track. Otherwise, if you don't want to wait for the event to show up in Analytics, you can add correct_inference manually.

Finally, on the Variants screen, set your control group variant to use mnist_v1 and your Variant A group to use mnist_v2.

Click the Review button in the bottom right corner.

Congratulations, you've successfully created an A/B test for your two separate models! The A/B test is currently in a draft state and can be started at any time by clicking the "Start Experiment" button.

For a closer look at A/B testing, check out the A/B Testing documentation.

In this codelab, you learned how to replace a statically bundled TFLite asset in your app with a dynamically loaded TFLite model from Firebase. To learn more about TFLite and Firebase, take a look at other TFLite samples and the Firebase getting started guides.