Use Coral Edge TPUs to run TFLite models in Node with TensorFlow.js

1. Introduction


Last Updated: 2022-04-11

In this Codelab, you learn how to train an image classification model using Teachable Machine, and run it with Coral hardware acceleration using TensorFlow.js, a powerful and flexible machine learning library for JavaScript. You build an Electron app that displays images from a webcam and classifies them using a Coral edge TPU. A fully working version of this Codelab is available in the sig-tfjs GitHub repo.

Do I need a Coral Device?

No. You can try this codelab without a Coral device and still get good performance on a desktop machine by using the WebNN delegate instead.

What you'll build

In this codelab, you build an Electron app that classifies images. Your app:

  • Classifies images from the webcam into the categories defined in the model you've trained.
  • Uses a Coral accelerator to increase performance, if one is available.
  • Uses WebNN to increase performance, if it's supported on your platform.

What you'll learn

  • How to install and set up the tfjs-tflite-node NPM package to run TFLite models in Node.js.
  • How to install the Edge TPU runtime library to run models on a Coral device.
  • How to accelerate model inference using a Coral edge TPU.
  • How to accelerate model inference with WebNN.

This codelab focuses on TFLite in Node.js. Concepts and code blocks that aren't relevant are glossed over and are provided for you to simply copy and paste.

What you'll need

To complete this Codelab, you need:

  • A computer with a webcam. A Raspberry Pi 4 running Raspberry Pi OS (64-bit) with desktop is recommended, but a desktop or laptop also works.
  • Node.js and npm installed.
  • Optionally, a Coral accelerator (for the Coral section) or an Intel processor supported by OpenVINO (for the WebNN section).

2. Get set up

Get the code

We've put all the code you need for this project into a Git repo. To get started, grab the code and open it in your favorite dev environment. For this codelab, we recommend using a Raspberry Pi running Raspberry Pi OS (64-bit) with desktop. This makes it easy to connect a Coral accelerator.

Strongly Recommended: Use Git to clone the repo on a Raspberry Pi

To get the code, open a new terminal window and clone the repo:

git clone https://github.com/tensorflow/sig-tfjs.git

All the files you need to edit for the codelab are in the tfjs-tflite-node-codelab directory (inside sig-tfjs). In this directory, you'll find subdirectories named starter_code, cpu_inference_working, coral_inference_working, and webnn_inference_working. These are checkpoints for the steps of this codelab.

Among the other files in the repository are the NPM packages that tfjs-tflite-node-codelab depends on. You won't need to edit any of these files, but you'll need to run some of their tests to make sure that your environment is set up correctly.

Install the Edge TPU runtime library

Coral devices require that you install the Edge TPU runtime library prior to use. Install it by following the instructions for your platform.

On Linux / Raspberry Pi

On Linux, the library is available as a Debian package, libedgetpu1-std, from Google's Coral package repository for x86-64 and Armv8 (64-bit) architectures. If your processor uses a different architecture, you will need to compile it from source.

Run these commands to add Google's Coral package repository and install the Edge TPU runtime library:

# None of this is needed on Coral Dev Boards, which already include the runtime
# This repo provides the libedgetpu1-std package installed below
echo "deb https://packages.cloud.google.com/apt coral-edgetpu-stable main" | sudo tee /etc/apt/sources.list.d/coral-edgetpu.list
# This repo is only needed for python3-coral-cloudiot and python3-coral-enviro, which this codelab doesn't use
echo "deb https://packages.cloud.google.com/apt coral-cloud-stable main" | sudo tee /etc/apt/sources.list.d/coral-cloud.list

curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -

sudo apt-get update
sudo apt-get install libedgetpu1-std

On Windows / Other OSes

Precompiled binaries are available for x86-64 versions of macOS and Windows. After downloading the archive, install the runtime by running the install.sh (macOS) or install.bat (Windows) script it contains.

Restart your Device

Once the Edge TPU runtime is installed, restart the device to activate the Coral udev rule that the installer added.

Verify that your Coral device is detected

To verify that your Coral device is detected and working, run the integration tests for the coral-tflite-delegate package, which lives in the coral-tflite-delegate directory at the root of the repository. To run the integration tests, plug in your Coral accelerator and run these commands from that directory:

npx yarn
npx yarn build-deps
npx yarn test-integration

You should see an output like this:

yarn run v1.22.17
$ yarn build && yarn test-integration-dev
$ tsc
$ jasmine --config=jasmine-integration.json
Platform node has already been set. Overwriting the platform with node.
Randomized with seed 78904
Started

============================
Hi there 👋. Looks like you are running TensorFlow.js in Node.js. To speed things up dramatically, install our node backend, which binds to TensorFlow C++, by running npm i @tensorflow/tfjs-node, or npm i @tensorflow/tfjs-node-gpu if you have CUDA. Then call require('@tensorflow/tfjs-node'); (-gpu suffix for CUDA) at the start of your program. Visit https://github.com/tensorflow/tfjs-node for more details.
============================
WARNING: converting 'int32' to 'uint8'
.


1 spec, 0 failures
Finished in 2.777 seconds
Randomized with seed 78904 (jasmine --random=true --seed=78904)
Done in 6.36s.

Don't worry about installing @tensorflow/tfjs-node, as suggested in the logs, since you'll be running the model in TFLite.

If instead the output contains Encountered unresolved custom op: edgetpu-custom-op, then your Coral device was not detected. Make sure you've installed the Edge TPU runtime library and plugged the Coral device into your computer. You can also follow Coral's Getting Started guide to test the Python version of the Coral bindings. If the Python version works but these tests still fail, please let us know by filing a bug report.

Run the starter code

Now you're ready to run the starter code. Follow these steps to get started.

  1. Move to the starter_code directory under the tfjs-tflite-node-codelab directory.
  2. Run npm install to install dependencies.
  3. Run npm start to launch the project. An app showing a video feed from your computer's webcam should open.

What's our starting point?

Our starting point is a basic Electron camera app designed for this codelab. The code has been simplified to show the concepts in the codelab, and it has little error handling. If you choose to reuse any of the code in a production app, make sure that you handle any errors and fully test all code.

A basic Electron app with a live feed of the device's camera.

Explore the starter code

There are a lot of files in this starter code, but the only one you need to edit is renderer.js. It controls what shows up on the page, including the video feed and HTML elements, and it's where you add your machine learning model to the app. Among the other files is an index.html file, but all it does is load the renderer.js file. There's also a main.js file, which is the entry point for Electron. It controls the lifecycle of the app, including what to show when it's opened and what to do when it's closed, but you won't need to make any changes to it.

Open the debugger

You might need to debug your app as you follow this codelab. Since this app is based on Electron, it has the Chrome debugger built in. On most platforms, you can open it with Ctrl + Shift + i. Click the Console tab to see logs and error messages from the app.

There's not much else to explore here, so let's get right into training the image classifier!

3. Train an Image Classifier

In this section, you train TFLite and Coral versions of a custom image classification model.

Train the Classifier

An Image Classifier takes input images and assigns labels to them. For this codelab, you use Teachable Machine to train a model in your browser. To speed up training for this section, you can use a desktop or laptop computer instead of a Raspberry Pi, but you'll have to copy the resulting files to the Pi.

Now you're ready to train a model. If you're not sure what kind of model to train, an easy model to train is a person detector, which just detects if a person is in frame.

  1. Open the Teachable Machine training page in a new tab.
  2. Select Image Project and then Standard image model.
  3. Add image samples for each class. Using the webcam input is the easiest way to do this. You can also rename the classes.
  4. When you have collected enough data for each class (50 samples is usually enough), press Train Model.

When the model is finished training, you should see a preview of the model's output.

A model trained on images from two classes.

Try giving the model different inputs. If you find an input that is incorrectly classified, add it to the training data and re-train the model.

  1. When you're satisfied with the model's accuracy, click Export Model. You'll need to download two separate versions of the model.
  2. Export your model as a TensorFlow Lite Floating point model. This downloads a file called converted_tflite.zip, which runs on the CPU.
  3. Export your model as a TensorFlow Lite EdgeTPU model. This downloads a file called converted_edgetpu.zip, which runs on the Coral Edge TPU.

4. Run the CPU model in your app

Now that you have trained a model, it's time to add it to your app. By the end of this section, the app will be able to run your model using the device's CPU.

Add the model file to the app

Unzip the converted_tflite.zip model file you downloaded when you trained the classifier. There are two files in the archive. model_unquant.tflite is the saved TFLite model, including the model graph and weights. labels.txt contains the human-readable labels for the classes that the model predicts. Place both files in the model directory.

Install dependencies

Loading a model and preprocessing inputs requires a few dependencies from TensorFlow.js:

  • tfjs-tflite-node: TensorFlow.js's package for running TFLite models in Node.js.
  • @tensorflow/tfjs: TensorFlow.js's main package.

@tensorflow/tfjs is already installed, but you need to install tfjs-tflite-node with this command:

npm install --save tfjs-tflite-node

Once it's installed, add it to the app at the top of renderer.js:

CODELAB part 1: Import tfjs-tflite-node.

const {loadTFLiteModel} = require('tfjs-tflite-node');

Load the model

Now you're ready to load the model. tfjs-tflite-node provides the loadTFLiteModel function to do this. It can load models from a file path, an ArrayBuffer, or a TFHub URL. To load your model and its weights, add this to the main function:

CODELAB part 1: Load the model here.

const modelPath = './model/model_unquant.tflite';
const model = await loadTFLiteModel(modelPath);
const labels = fs.readFileSync('./model/labels.txt', 'utf8')
      .split('\n');
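
Loading from a file path is all this codelab needs, but as noted above, loadTFLiteModel also accepts an ArrayBuffer, which is handy if you ever want to keep the model bytes in memory. The following is a self-contained sketch (not meant to be pasted into renderer.js; loadFromBuffer is just an illustrative name):

const fs = require('fs');
const {loadTFLiteModel} = require('tfjs-tflite-node');

async function loadFromBuffer() {
  // Read the model file into a Node Buffer, then pass its underlying bytes
  // to loadTFLiteModel as an ArrayBuffer.
  const buffer = fs.readFileSync('./model/model_unquant.tflite');
  const arrayBuffer = buffer.buffer.slice(
      buffer.byteOffset, buffer.byteOffset + buffer.byteLength);
  return await loadTFLiteModel(arrayBuffer);
}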

Run the model

Running your model takes three steps. First, you pull and preprocess an input frame from the webcam. Then, you run the model on that frame and get a prediction. After that, you display the prediction on the page.

Preprocess the webcam input

Right now, the webcam is just an HTML element, and the frames it displays are not available to the JavaScript renderer.js file. To pull frames from the webcam, TensorFlow.js provides tf.data.webcam, which provides an easy-to-use capture() method to capture frames from the camera.

To use it, add this setup code to main():

CODELAB part 1: Set up tf.data.webcam here.

const tensorCam = await tf.data.webcam(webcam);

Then, to capture an image every frame, add the following to run():

CODELAB part 1: Capture webcam frames here.

const image = await tensorCam.capture();

You also need to preprocess each frame to be compatible with the model. The model this codelab uses has input shape [1, 224, 224, 3], so it expects a 224 by 224 pixel RGB image. tensorCam.capture() gives a shape of [224, 224, 3], so you need to add an extra dimension at the front of the tensor with tf.expandDims. Additionally, the CPU model expects a Float32 input between -1 and 1, but the webcam captures values from 0 to 255. You can divide the input tensor by 127 to change its range from [0, 255] to [0, ~2] and then subtract 1 to get the desired range of [-1, ~1]. Add these lines to tf.tidy() in the run() function to do this:

CODELAB part 1: Preprocess webcam frames here.

const expanded = tf.expandDims(image, 0);
const divided = tf.div(expanded, tf.scalar(127));
const normalized = tf.sub(divided, tf.scalar(1));

It's important to dispose of tensors after using them. tf.tidy() does this automatically for the code contained in its callback, but it does not support async functions. You'll need to manually dispose of the image tensor you created earlier by calling its dispose() method.

CODELAB part 1: Dispose webcam frames here.

image.dispose();
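
Putting the capture, preprocessing, and cleanup steps together, the relevant part of run() looks roughly like the following sketch. It's a simplified outline rather than a drop-in replacement; your starter code may structure the loop and the inference call differently.

async function run() {
  // capture() is async, so it has to happen outside tf.tidy().
  const image = await tensorCam.capture();

  tf.tidy(() => {
    // Tensors created synchronously inside tidy() are disposed automatically.
    const expanded = tf.expandDims(image, 0);
    const divided = tf.div(expanded, tf.scalar(127));
    const normalized = tf.sub(divided, tf.scalar(1));
    // ...run the model and display the prediction here...
  });

  // The captured frame was created outside tidy(), so dispose of it manually.
  image.dispose();
}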

Run the model and display results

To run the model on the preprocessed input, call model.predict() on the normalized tensor. This returns a tensor of shape [1, number of classes] containing the predicted probability of each label. Multiply these probabilities by 100 to get the percentage chance of each label, and use the showPrediction function included with the starter code to show the model's prediction on the screen.

This code also uses stats.js to time how long prediction takes by placing calls to stats.begin and stats.end around model.predict.

CODELAB part 1: Run the model and display the results here.

stats.begin();
const prediction = model.predict(normalized);
stats.end();
const percentage = tf.mul(prediction, tf.scalar(100));
showPrediction(percentage.dataSync(), labels);
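
If you also want to log just the single most likely label, here is an optional sketch using tf.argMax (not required by the codelab):

// Optional: squeeze the batch dimension, then take the index of the
// highest-probability class and look up its label.
const topIndex = tf.argMax(tf.squeeze(prediction)).dataSync()[0];
console.log(`Top prediction: ${labels[topIndex]}`);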

Run the app again with npm start, and you should see classifications from your model.

The TFLite CPU model runs in the Electron app. It classifies images from the webcam and displays confidence values for each class below.

Performance

As it's currently set up, the model runs on the CPU. This is fine for desktop computers and most laptops, but it might not be desirable on a Raspberry Pi or another low-power device. On a Raspberry Pi 4, you'll probably see around 10 FPS, which might not be fast enough for some applications. To get better performance without using a faster machine, you can use application-specific silicon in the form of a Coral Edge TPU.

5. Run the Coral model in your app

If you don't have a Coral device, you can skip this section.

This step of the codelab builds off of the code you wrote in the last section, but you can use the cpu_inference_working checkpoint instead if you want to start with a clean slate.

The steps for running the Coral model are nearly identical to the steps for running the CPU model. The main difference is the model format. Since Coral only supports uint8 tensors, the model is quantized. This affects the input tensors passed to the model and the output tensors it returns. Another difference is that models need to be compiled with the Edge TPU compiler to run on a Coral Edge TPU. Teachable Machine has already done this step, but you can learn how to do it for other models by visiting the Coral documentation.

Add the Coral model file to the app

Unzip the converted_edgetpu.zip model file you downloaded when you trained the classifier. There are two files included in the archive. model_edgetpu.tflite is the saved TFLite model, including the model graph and weights. labels.txt contains the human-readable labels for the classes that the model predicts. Place the model file in the coral_model directory.

Install dependencies

Running Coral models requires the Edge TPU runtime library. Make sure you've installed it by following the setup instructions before continuing.

Coral devices are accessed as TFLite delegates. To access them from JavaScript, install the coral-tflite-delegate package:

npm install --save coral-tflite-delegate

Then, import the delegate by adding this line to the top of the renderer.js file:

CODELAB part 2: Import the delegate here.

const {CoralDelegate} = require('coral-tflite-delegate');

Load the model

Now you're ready to load the Coral model. You do this in the same way as for the CPU model, except now you pass options to the loadTFLiteModel function to load the Coral delegate.

CODELAB part 2: Load the delegate model here.

const coralModelPath = './coral_model/model_edgetpu.tflite';
const options = {delegates: [new CoralDelegate()]};
const coralModel = await loadTFLiteModel(coralModelPath, options);

You don't need to load the labels because they are the same as for the CPU model.

Add a button to switch between CPU and Coral

You add the Coral model alongside the CPU model you added in the last section. Running them both at the same time makes it hard to see performance differences, so a toggle button switches between Coral and CPU execution.

Add the button with this code:

CODELAB part 2: Create the delegate button here.

let useCoralDelegate = false;
const toggleCoralButton = document.createElement('button');
function toggleCoral() {
  useCoralDelegate = !useCoralDelegate;
  toggleCoralButton.innerText = useCoralDelegate
      ? 'Using Coral. Press to switch to CPU.'
      : 'Using CPU. Press to switch to Coral.';
}
toggleCoralButton.addEventListener('click', toggleCoral);
toggleCoral();
document.body.appendChild(toggleCoralButton);

Let's hook this condition up in the run() function. When useCoralDelegate is false, it should run the CPU version. Otherwise, it runs the Coral version (but for now, it just does nothing). Wrap the code from running the CPU model in an if statement. Note that the expanded tensor is created outside the if statement because the Coral model also uses it.

CODELAB part 2: Check whether to use the delegate here.

// NOTE: Don't just copy-paste this code into the app.
// You'll need to edit the code from the CPU section.
const expanded = tf.expandDims(image, 0);
if (useCoralDelegate) {
  // CODELAB part 2: Run Coral prediction here.
} else {
  const divided = tf.div(expanded, tf.scalar(127));
  const normalized = tf.sub(divided, tf.scalar(1));
  stats.begin();
  const prediction = model.predict(normalized);
  stats.end();
  const percentage = tf.mul(prediction, tf.scalar(100));
  showPrediction(percentage.dataSync(), labels);
}

Run the model

The Coral version of the model expects uint8 tensors from 0 to 255, so its input does not need to be normalized. However, the output is also a uint8 tensor in the range of 0 to 255. It needs to be converted to a float from 0 to 100 before it's displayed.

CODELAB part 2: Run Coral prediction here. (This is part of the code snippet above)

stats.begin();
const prediction = coralModel.predict(expanded);
stats.end();
const percentage = tf.div(tf.mul(prediction, tf.scalar(100)), tf.scalar(255));
showPrediction(percentage.dataSync(), labels);
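
At this point the two branches duplicate the timing and display logic. If you'd like to share that code, one possible refactor is sketched below (classify is a hypothetical helper, not part of the codelab checkpoints):

// Hypothetical helper: run whichever model is active and show percentages.
// The quantized Coral model outputs uint8 values from 0 to 255, so its
// results are rescaled; the float CPU model's outputs are just multiplied.
function classify(activeModel, input, isQuantized) {
  stats.begin();
  const prediction = activeModel.predict(input);
  stats.end();
  const percentage = isQuantized
      ? tf.div(tf.mul(prediction, tf.scalar(100)), tf.scalar(255))
      : tf.mul(prediction, tf.scalar(100));
  showPrediction(percentage.dataSync(), labels);
}

With a helper like this, the Coral branch would call classify(coralModel, expanded, true) and the CPU branch classify(model, normalized, false).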

Run the app again with npm start, and it should show classifications from the Coral accelerator.

The CPU and Coral models run in the app one at a time, and a button switches between them. The CPU model gets around 20 FPS and the Coral model gets around 45.

You can switch between Coral and CPU inference by pressing the button. You might notice that the Coral model's confidence rankings are less precise than the CPU model's, and they usually end with an even decimal place. This loss in precision is a tradeoff of running a quantized model on Coral. It usually doesn't matter in practice, but it's something to keep in mind.

A note on performance

The frame rate you see includes preprocessing and postprocessing, so it's not representative of what Coral hardware is capable of. You can get a better idea of the performance by clicking on the FPS meter until it shows latency (in milliseconds), which measures just the call to model.predict. However, that still includes the time it takes to move Tensors to the TFLite native C bindings and then to the Coral device, so it's not a perfect measurement. For more accurate performance benchmarks written in C++, see the EdgeTPU benchmark page.

Also of note is that the video was recorded on a laptop instead of a Raspberry Pi, so you might see a different FPS.

Speeding up Coral preprocessing

In some cases, you can speed up the preprocessing by switching TFJS backends. The default backend is WebGL, which is good for large, parallelizable operations, but this app doesn't do much of that in the preprocessing phase (the only op it uses is expandDims, which is not parallel). You can switch to the CPU backend to avoid the extra latency of moving tensors to and from the GPU by adding this line after the imports at the top of the file.

tf.setBackend('cpu');

This also affects the preprocessing for the TFLite CPU model, whose preprocessing ops are parallelizable, so that model runs a lot slower with this change.
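
If you want to keep WebGL for the CPU model's preprocessing and only use the CPU backend while Coral is active, one option is to switch backends whenever the toggle changes. Here is a sketch; setPreprocessingBackend is a hypothetical helper and isn't part of the codelab checkpoints.

// Hypothetical helper: pick the TFJS backend that suits the active model's
// preprocessing. Coral's preprocessing is tiny (just expandDims), so the CPU
// backend avoids GPU transfer overhead; the float model's div/sub ops still
// benefit from WebGL.
async function setPreprocessingBackend(useCoral) {
  await tf.setBackend(useCoral ? 'cpu' : 'webgl');
}

You could call this from toggleCoral() after flipping useCoralDelegate.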

6. Accelerate the CPU model with WebNN

If you don't have a Coral accelerator, or if you just want to try out another way to speed up the model, you can use the WebNN TFLite delegate. This delegate uses machine learning hardware built into Intel processors to accelerate model inference with the OpenVINO toolkit. Consequently, it has additional requirements that weren't covered in the setup section of this codelab, and you will need to install the OpenVINO toolkit. Be sure to check your setup against the supported Target System Platforms before proceeding, but note that the WebNN delegate does not yet support macOS.

Install the OpenVINO toolkit

The OpenVINO toolkit uses machine learning hardware built into Intel processors to accelerate models. You can download a precompiled version from Intel or build it from source. There are several ways to install OpenVINO, but for the purposes of this Codelab, we recommend that you use the installer script for Windows or Linux. Be sure to install the 2021.4.2 LTS runtime version, as other versions may not be compatible. After you run the installer, make sure you configure your shell's environment variables as described in the installation instructions for Linux or Windows (permanent solution), or by running the setupvars.sh (Linux) or setupvars.bat (Windows) command located in the webnn-tflite-delegate directory.

Verify the WebNN Delegate is working

To verify that the WebNN delegate is working correctly, run the integration tests for the webnn-tflite-delegate package, which lives in the webnn-tflite-delegate directory at the root of the repository. To run the integration tests, run these commands from that directory:

# In webnn-tflite-delegate/
npx yarn
npx yarn test-integration

You should see an output like this:

WebNN delegate: WebNN device set to 0.
INFO: Created TensorFlow Lite WebNN delegate for device Default and power Default.

============================
Hi there 👋. Looks like you are running TensorFlow.js in Node.js. To speed things up dramatically, install our node backend, which binds to TensorFlow C++, by running npm i @tensorflow/tfjs-node, or npm i @tensorflow/tfjs-node-gpu if you have CUDA. Then call require('@tensorflow/tfjs-node'); (-gpu suffix for CUDA) at the start of your program. Visit https://github.com/tensorflow/tfjs-node for more details.
============================
label: wine bottle
score:  0.934505045413971
.


1 spec, 0 failures
Finished in 0.446 seconds
Randomized with seed 58441 (jasmine --random=true --seed=58441)
Done in 8.07s.

If you see an output like this, it indicates a configuration error:

Platform node has already been set. Overwriting the platform with node.
Randomized with seed 05938
Started
error Command failed with exit code 3221225477.
info Visit https://yarnpkg.com/en/docs/cli/run for documentation about this command.

This output most likely means you haven't set OpenVINO's environment variables. For now, you can set them by running the setupvars.sh (Linux) or setupvars.bat (Windows) command, but you may want to set them permanently by following the Linux or Windows (permanent solution) instructions. If you're using Windows, the setupvars.bat command does not support Git Bash, so make sure you run it and the other commands in this codelab from the Windows Command Prompt.

Install the WebNN Delegate

With OpenVINO installed, you're now ready to accelerate the CPU model with WebNN. This section of the codelab builds off the code you wrote in the "Run the CPU model in your app" section. You can use the code you wrote in that section, but if you've already completed the Coral section, use the cpu_inference_working checkpoint instead so that you start with a clean slate.

The Node.js part of the WebNN delegate is distributed on npmjs. To install it, run this command:

npm install --save webnn-tflite-delegate

Then, import the delegate by adding this line to the top of the renderer.js file:

CODELAB part 2: Import the delegate here.

const {WebNNDelegate, WebNNDevice} = require('webnn-tflite-delegate');

The WebNN delegate supports running on the CPU or the GPU; WebNNDevice lets you choose which to use.
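
For example, to request the GPU explicitly instead of letting the delegate pick (the codelab itself uses WebNNDevice.DEFAULT in the next step and adds a device dropdown later), a small sketch:

// Explicitly target the GPU rather than the default device.
const gpuDelegate = new WebNNDelegate({webnnDevice: WebNNDevice.GPU});

You would then pass this delegate in the delegates array, exactly as shown with the default device below.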

Load the model

Now you're ready to load the model with the WebNN delegate enabled. For Coral, you had to load a different model file, but WebNN uses the same TFLite model as the CPU. Add the WebNNDelegate to the list of delegates passed to the model to enable it:

CODELAB part 2: Load the delegate model here.

let webnnModel = await loadTFLiteModel(modelPath, {
  delegates: [new WebNNDelegate({webnnDevice: WebNNDevice.DEFAULT})],
});

You don't need to load the labels again because this is the same model.

Add a button to switch between TFLite CPU and WebNN

Now that the WebNN version of the model is ready, add a button to switch between WebNN and TFLite CPU inference. Running them both at the same time makes it hard to see performance differences.

Add the button with this code (note that it won't actually switch models yet):

CODELAB part 2: Create the delegate button here.

let useWebNNDelegate = false;
const divElem = document.createElement('div');
const toggleWebNNButton = document.createElement('button');
function toggleWebNN() {
  useWebNNDelegate = !useWebNNDelegate;
  toggleWebNNButton.innerHTML = useWebNNDelegate
      ? 'Using WebNN. Press to switch to TFLite CPU.'
      : 'Using TFLite CPU. Press to switch to WebNN.';
  divElem.hidden = useWebNNDelegate ? false : true;
}

toggleWebNNButton.addEventListener('click', toggleWebNN);
toggleWebNN();
document.body.appendChild(toggleWebNNButton);
document.body.appendChild(divElem);

This code also adds a div element that you use to configure WebNN settings in the next section.

Add a dropdown to switch between WebNN devices

WebNN supports running on CPU and GPU, so add a dropdown to switch between them. Add this code after the code that creates the button:

// Create elements for WebNN device selection
divElem.innerHTML = '<br/>WebNN Device: ';
const selectElem = document.createElement('select');
divElem.appendChild(selectElem);

const webnnDevices = ['Default', 'GPU', 'CPU'];
// Append the dropdown options.
for (let i = 0; i < webnnDevices.length; i++) {
  const optionElem = document.createElement('option');
  optionElem.value = i;
  optionElem.text = webnnDevices[i];
  selectElem.appendChild(optionElem);
}

Now, if you run the app, you see a dropdown listing Default, GPU, and CPU. Choosing one of them won't do anything right now since the dropdown hasn't been hooked up yet.

The app shows a dropdown where the WebNN device can be selected from Default, GPU, or CPU.

Make the dropdown change the device

To hook up the dropdown so it changes which WebNN device is used, add a listener to the change event of the dropdown selector element. When the selected value changes, recreate the WebNN model with the corresponding WebNN device selected in the delegate options.

Add the following code after the code that added the dropdown:

selectElem.addEventListener('change', async () => {
  let webnnDevice;
  switch(selectElem.value) {
    case '1':
      webnnDevice = WebNNDevice.GPU;
      break;
    case '2':
      webnnDevice = WebNNDevice.CPU;
      break;
    default:
      webnnDevice = WebNNDevice.DEFAULT;
      break;
  }
  webnnModel = await loadTFLiteModel(modelPath, {
    delegates: [new WebNNDelegate({webnnDevice})],
  });
});

With this change, the dropdown creates a new model with the correct settings every time it's changed. Now it's time to hook up the WebNN model and use it for inference.

Run the WebNN model

The WebNN model is ready to be used, but the button to switch between WebNN and TFLite CPU doesn't actually switch the model yet. To switch the model, you first need to rename the model variable from when you loaded the TFLite CPU model in the "Run the CPU model in your app" section of the codelab.

Change the following line...

const model = await loadTFLiteModel(modelPath);

...so that it matches this line.

const cpuModel = await loadTFLiteModel(modelPath);

With the model variable renamed to cpuModel, add this to the run function to choose the correct model based on the state of the button:

CODELAB part 2: Check whether to use the delegate here.

let model;
if (useWebNNDelegate) {
  model = webnnModel;
} else {
  model = cpuModel;
}

Now, when you run the app, the button switches between TFLite CPU and WebNN.

The TFLite CPU model and the WebNN CPU and GPU models run in the app. When one of the WebNN models is active, a dropdown menu switches between them. The CPU model gets approximately 15 FPS and the WebNN CPU model gets approximately 40.

You can also switch between WebNN CPU and GPU inference if you have an integrated Intel GPU.

A note on performance

The frame rate you see includes preprocessing and postprocessing, so it's not representative of what WebNN is capable of. You can get a better idea of the performance by clicking on the FPS meter until it shows latency (in milliseconds), which measures just the call to model.predict. However, that still includes the time it takes to move Tensors to the TFLite native C bindings, so it's not a perfect measurement.

7. Congratulations

Congratulations! You have just completed your very first Coral / WebNN project using tfjs-tflite-node in Electron.

Try it out, and test it on a variety of images. You can also train a new model on Teachable Machine to classify something completely different.

Recap

In this codelab, you learned:

  • How to install and set up the tfjs-tflite-node npm package to run TFLite models in Node.js.
  • How to install the Edge TPU runtime library to run models on a Coral device.
  • How to accelerate model inference using a Coral edge TPU.
  • How to accelerate model inference with WebNN.

What's next?

Now that you have a working base to start from, what creative ideas can you come up with to extend this machine learning model runner to a real world use case you may be working on? Maybe you could revolutionize the industry you work in with fast and affordable inference, or maybe you could modify a toaster so it stops toasting when the bread looks just right. The possibilities are endless.

To go further and learn more about how Teachable Machine trained the model you used, check out our codelab on Transfer Learning. If you're looking for other models that work with Coral, such as speech recognition and pose estimation, take a look at coral.ai/models. You can also find CPU versions of those models and many others on TensorFlow Hub.

Share what you make with us

You can easily extend what you made today for other creative use cases too, and we encourage you to think outside the box and keep hacking.

Remember to tag us on social media using the #MadeWithTFJS hashtag for a chance for your project to be featured on our TensorFlow blog or even future events. We would love to see what you make.

Websites to check out