ARCore is a platform for building augmented reality apps on Android. Augmented Images gives you the ability to create AR apps that are able to recognize pre-registered images and anchor virtual content on them.

This codelab guides you through modifying an existing ARCore app to incorporate Augmented Images that are moving or fixed in place.

What you will build

In this codelab, you're going to build upon a pre-existing ARCore sample app. By the end of the codelab, your app will be able to recognize a target image and render a 3D maze on top of it, with a ball that rolls through the maze under simulated gravity.


What you'll learn

  1. How to enable auto focus in an ARCore session
  2. How to import a 3D model with the Sceneform plugin for Android Studio
  3. How to recognize an image and anchor virtual content on top of it
  4. How to let the user choose the target image at runtime
  5. How to add a simple physics simulation with jBullet

What you'll need

Make sure you have everything that you need before starting this codelab:

  1. An ARCore-supported Android device
  2. Android Studio
  3. A USB cable to connect your device to your development machine

Now that you've got everything ready, let's start!

Early SDK access for Google I/O

NOTE: If you are following this codelab during the week of Google I/O 2019, you will most likely need to manually side-load ARCore 1.9.0 onto your development device. This prevents the app from prompting you to update ARCore when the new version is not yet available on your device through the Google Play Store.

To side-load the ARCore APK:

  1. Verify that you are using an ARCore Supported Device.
  2. Download ARCore_1.9.0.apk from the ARCore SDK for Android releases page.
  3. Run adb install -r ARCore_1.9.0.apk to install ARCore onto your device.

We'll start by downloading the ARCore Sceneform SDK (sceneform-android-sdk-v1.9.0.zip) from GitHub. Unzip it to your preferred location. The extracted folder will be named sceneform-android-sdk-1.9.0.

Launch Android Studio, and click Open an existing Android Studio project.

Navigate to this unzipped folder:

sceneform-android-sdk-1.9.0/samples/augmentedimage/

Click Open.

Wait for Android Studio to finish syncing the project. If required components are missing, the sync may fail with the message "Install missing platform and sync project". Follow the instructions to fix the problem.

Now that you have a working ARCore app project, let's give it a test run.

Connect your ARCore device to the development machine, and use the menu Run > Run ‘app' to run the debug version on the device. In the dialog prompting you to choose which device to run on, choose the connected device, and click OK.

This sample project uses targetSdkVersion 28. If you have a build error such as Failed to find Build Tools revision 28.0.3, follow the instructions described in Android Studio to download and install the required Android Build Tools version.

If everything is successful, the sample app launches on the device and prompts you to allow Augmented Image to take pictures and videos. Tap ALLOW to grant permission.

Let's give our sample app an image to look at.

Back in Android Studio, in the Project window, navigate to app > assets, and double-click the file default.jpg to open it.

Point your device camera at the image of the Earth on screen, and follow the instructions to fit the image you're scanning into the crosshairs.

An image frame will be overlaid on top of the image, like this:

Next, we'll make small improvements to the sample app.

There is an area we can easily improve in this sample project: the image looks out of focus.

Enable auto-focus in your ARCore session

The image looks out of focus because the ARCore camera session fixes the focus at one meter by default. It's easy to remedy this by configuring the ARCore session to use auto focus: set the Config.FocusMode.AUTO parameter in the session configuration.

In AugmentedImageFragment.java, add a new line to the getSessionConfiguration function.

AugmentedImageFragment.java

@Override
protected Config getSessionConfiguration(Session session) {
  Config config = super.getSessionConfiguration(session);

  // Use setFocusMode to configure auto-focus.
  config.setFocusMode(Config.FocusMode.AUTO);


  if (!setupAugmentedImageDatabase(config, session)) {
    SnackbarHelper.getInstance()
        .showError(getActivity(), "Could not setup augmented image database");
  }
  return config;
}
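
The snippet above also calls setupAugmentedImageDatabase, which the sample already provides. If you're curious how a bitmap becomes a recognizable target, here's a rough sketch of what such a helper does — an illustrative sketch, not the sample's exact code (the real helper also supports a pre-built image database, selected by the USE_SINGLE_IMAGE flag):

// Sketch only -- the sample's setupAugmentedImageDatabase may differ.
// Assumed imports: com.google.ar.core.AugmentedImageDatabase,
// android.graphics.Bitmap.
private boolean setupAugmentedImageDatabase(Config config, Session session) {
  // Decode the target image (the sample loads app/assets/default.jpg).
  Bitmap bitmap = loadAugmentedImageBitmap(getContext().getAssets());
  if (bitmap == null) {
    return false;
  }
  // Register the bitmap; ARCore extracts its feature points at this step.
  AugmentedImageDatabase database = new AugmentedImageDatabase(session);
  database.addImage("default", bitmap);
  config.setAugmentedImageDatabase(database);
  return true;
}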

Now, let's give it a test run. This time we should be able to see the image clearly focused in the preview window, like this:

As we mentioned at the beginning of this codelab, we're going to put a little maze game on the image. First, let's find a maze model on poly.google.com, which hosts many free 3D models under the CC-BY license.

For this codelab we're going to use "Circle Maze - Green" by Evol, licensed under CC-BY 3.0.

Follow these steps to download the model and get it into Android Studio:

  1. Navigate to the Poly page for the model.
  2. Click Download, and select OBJ File.

This downloads a file called green-maze.zip.

  3. Unzip green-maze.zip and copy the content to this location:
    sceneform-android-sdk-1.9.0/samples/augmentedimage/app/sampledata/green-maze/
  4. In Android Studio, navigate to app > sampledata > green-maze.

There should be two files in this folder: GreenMaze.obj and GreenMaze.mtl.

Next, we'll import this OBJ file into Android Studio using the Sceneform plugin for Android Studio.

Add the Sceneform plugin for Android Studio

If the Sceneform plugin isn't already installed in your Android Studio, now is the time to install it.

Go to the Android Studio preferences (File > Settings on Windows, or Android Studio > Preferences on macOS). Click Plugins, and browse the repositories for Google Sceneform Tools (Beta).

Install the plugin.

Import the maze model

We'll use the Sceneform plugin to add the conversion tasks to the build.gradle file and preview the models.

In the Project window:

  1. Go to app > sampledata > green-maze > GreenMaze.obj.
  2. Right-click GreenMaze.obj, and choose New > Sceneform Asset.

This opens the Import Wizard dialog with everything initialized to default values.

Leave the default values, and click Finish to start importing the model. If you see a warning about the build.gradle and SFB files being read-only, you can ignore it.

When the import completes, the .sfb file opens in the editor; what's shown is actually the content of the corresponding .sfa file. The Sceneform viewer also opens, showing the imported model:

You can change the model's appearance by tuning values in the .sfa file. We're going to set metallic to 0 and roughness to 0.4. Make these changes in the GreenMaze.sfa file, then save it.

GreenMaze.sfa

metallic: 0
roughness: 0.4

Android Studio builds a new .sfb file and refreshes the Viewer panel. Here's what the maze looks like with these new settings.

metallic: 0, roughness:0.4

Display the maze model

Now that we have a new 3D model, GreenMaze.sfb, let's display it on top of our image.

  1. In AugmentedImageNode.java, add a member variable called mazeNode to hold the maze model. Because the maze is controlled by the image, it makes sense to put mazeNode inside the AugmentedImageNode class.
  2. Add a variable called mazeRenderable to help load GreenMaze.sfb.
  3. In the AugmentedImageNode constructor, load GreenMaze.sfb into mazeRenderable.
  4. In the setImage function, check whether mazeRenderable has finished loading.
  5. In the setImage function, initialize mazeNode, and set its parent and its renderable.

In AugmentedImageNode.java, make these changes.

  // Add a member variable to hold the maze model. 
  private Node mazeNode;

  // Add a variable called mazeRenderable for use with loading 
  // GreenMaze.sfb.
  private CompletableFuture<ModelRenderable> mazeRenderable;

  // Replace the AugmentedImageNode constructor with the following code,
  // which loads GreenMaze.sfb into mazeRenderable.
  public AugmentedImageNode(Context context) {
    mazeRenderable =
          ModelRenderable.builder()
              .setSource(context, Uri.parse("GreenMaze.sfb"))
              .build();
  }

  // Replace the definition of the setImage function with the following
  // code, which checks if mazeRenderable has completed loading.

  public void setImage(AugmentedImage image) {
    this.image = image;

    // Initialize mazeNode and set its parent and its renderable.
    // If the model hasn't finished loading, defer this work by calling
    // setImage again once loading completes.
    if (!mazeRenderable.isDone()) {
      CompletableFuture.allOf(mazeRenderable)
          .thenAccept((Void aVoid) -> setImage(image))
          .exceptionally(
              throwable -> {
                Log.e(TAG, "Exception loading", throwable);
                return null;
              });
      return;
    }

    // Set the anchor based on the center of the image.
    setAnchor(image.createAnchor(image.getCenterPose()));

    mazeNode = new Node();
    mazeNode.setParent(this);
    mazeNode.setRenderable(mazeRenderable.getNow(null));
  }

OK, that should be enough code to display the maze on top of our default.jpg picture of the Earth. Before we run it, though, we still need to adjust the size of the maze model.

How big do we want the maze to be? For this codelab, let's make the maze as big as the image. AugmentedImage lets an ARCore-supported device estimate the physical size of an image; we can get the estimated size from the getExtentX() and getExtentZ() functions.

Because the Sceneform plugin automatically resizes the model during import, we need to change the default scale in GreenMaze.sfa. Let's change the scale to 1 so that the maze is exactly the size specified in its OBJ file.

In GreenMaze.sfa, make this change.

scale: 1,

Next we need to know the original size of the maze model. One way to find it is to open GreenMaze.obj in a text editor, collect every line that begins with v <x> <y> <z>, and take the minimum and maximum values of each component. We're not going to do that in this codelab; I'll simply give the values here: the maze model's dimensions are 492.65 x 120 x 492.65. So we need to set the scale of the maze model to image_size / 492.65. For example, an image 0.3 m across would give a scale of 0.3 / 492.65 ≈ 0.0006.
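
If you'd rather not do that min/max bookkeeping by hand, a small standalone program can do it. Here's a minimal Java sketch (not part of the codelab project; the hard-coded file path is a placeholder) that scans an OBJ file's vertex lines and prints the model's dimensions:

import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;

public class ObjBounds {
  public static void main(String[] args) throws IOException {
    float[] min = {Float.MAX_VALUE, Float.MAX_VALUE, Float.MAX_VALUE};
    float[] max = {-Float.MAX_VALUE, -Float.MAX_VALUE, -Float.MAX_VALUE};
    try (BufferedReader reader = new BufferedReader(new FileReader("GreenMaze.obj"))) {
      String line;
      while ((line = reader.readLine()) != null) {
        // Vertex lines look like "v 1.23 4.56 7.89"; this skips "vt"/"vn" lines.
        if (!line.startsWith("v ")) {
          continue;
        }
        String[] parts = line.trim().split("\\s+");
        for (int i = 0; i < 3; i++) {
          float value = Float.parseFloat(parts[i + 1]);
          min[i] = Math.min(min[i], value);
          max[i] = Math.max(max[i], value);
        }
      }
    }
    System.out.printf("Dimensions: %.2f x %.2f x %.2f%n",
        max[0] - min[0], max[1] - min[1], max[2] - min[2]);
  }
}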

To do this, let's add a new member variable called maze_scale to AugmentedImageNode.java and use it to store the scale of the maze. We can assign its value in the setImage function.

Also, because the maze walls are still a bit too high for our codelab, let's scale the height by an additional factor of 0.1. This lowers the walls so that the ball is visible when it touches the bottom of the maze.

In AugmentedImageNode.java, make these code changes.

private float maze_scale = 0.0f;

    ...

public void setImage(AugmentedImage image) {
    // At the end of this function, add code for scaling the maze Node.

    ...

    // Make sure the longest edge fits inside the image.
    final float maze_edge_size = 492.65f;
    final float max_image_edge = Math.max(image.getExtentX(), image.getExtentZ());
    maze_scale = max_image_edge / maze_edge_size;

    // Scale Y an extra 10 times to lower the maze wall.
    mazeNode.setLocalScale(new Vector3(maze_scale, maze_scale * 0.1f, maze_scale));

    ...
}

OK, let's try running the app on your ARCore-supported device. The maze should now be the same size as the image.

Now let's add a ball that rolls around inside the maze. In Sceneform, this is quite easy to do: we'll use ShapeFactory to create a colored ball with a default material.

As a starter, let's add a ball with a 0.01 m radius, floating at a fixed height (0.1 m) above the image, just for illustrative purposes.

Add this code in AugmentedImageNode.java.

// Add these lines at the top with the rest of the imports.
import com.google.ar.sceneform.rendering.Color;
import com.google.ar.sceneform.rendering.MaterialFactory;
import com.google.ar.sceneform.rendering.ShapeFactory;

  // Add a ModelRenderable called ballRenderable.
  private ModelRenderable ballRenderable;

  // In the AugmentedImageNode function, you're going to add some code
  // at the end of the function. (See below.)
  public AugmentedImageNode(Context context) {

  ...

    // Add this code to the end of this function.
    MaterialFactory.makeOpaqueWithColor(context, new Color(android.graphics.Color.RED))
        .thenAccept(
            material -> {
              ballRenderable =
                  ShapeFactory.makeSphere(0.01f, new Vector3(0, 0, 0), material); });
  }

  // At the end of the setImage function, you're going to add some code
  // to add the ball. (See below.)
  public void setImage(AugmentedImage image) {

    ...

    // Add the ball at the end of the setImage function.
    Node ballNode = new Node();
    ballNode.setParent(this);
    ballNode.setRenderable(ballRenderable);
    ballNode.setLocalPosition(new Vector3(0, 0.1f, 0));

    ...
  }

Then let's try running it on the device. We should see something like this.

This codelab is about moving Augmented Images, so ideally you should use something that's easy to move between fixed positions: an adjustable monitor arm, a second mobile phone, or a hard-surfaced, textured object with a usable image on it, such as a magazine or product packaging.

Note: The maze is a bit small when displayed on a mobile phone screen, so feel free to adjust the value of maze_scale (introduced in the "Display the maze model" section) to make it bigger.

In this codelab, we'll pick an image from device storage as the target image. (Once again, this section is optional, so it's okay to skip to the next section if you don't have anything suitable close at hand.)

If you do have something handy with a texture that's unique enough to be recognized, we can take a photo of it and use that as the target image.

Your photo needs to be of the same or similar quality as a scan, so keep these tips in mind when taking the photo:

  1. Put the texture on a flat surface.
  2. Try to be directly above the object that has the image on it, so that the image has a rectangular or square shape.
  3. Try to avoid glare from lights or the sun.

If it helps, the free PhotoScan app can help you capture a good scan of a magazine.

For example, let's say I want to use this beautiful notebook as a target image.

I might use PhotoScan to take a high-quality scan of the front. Here's the resulting photo.

Let's keep this photo on the device. In our codelab app, before anything else starts, we'll prompt the user to choose an image.

In AugmentedImageFragment.java, make these changes.

  // Add a Uri that stores the path of the target image chosen from
  // device storage.
  private android.net.Uri chosenImageUri = null;
  private static final int REQUEST_CODE_CHOOSE_IMAGE = 1;

  // Replace USE_SINGLE_IMAGE with this value.
  private static final boolean USE_SINGLE_IMAGE = true;

  // At the end of the AugmentedImageFragment.onAttach function, call 
  // chooseNewImage. 
  public void onAttach(Context context) {

    ...

    chooseNewImage();
  }

   ...
 
  // Replace the loadAugmentedImageBitmap function with this new code
  // that attempts to use the image chosen by the user. 
  private Bitmap loadAugmentedImageBitmap(AssetManager assetManager) {
    if (chosenImageUri == null) {
      try (InputStream is = assetManager.open(DEFAULT_IMAGE_NAME)) {
        return BitmapFactory.decodeStream(is);
      } catch (IOException e) {
        Log.e(TAG, "IO exception loading augmented image bitmap.", e);
      }
    } else {
      try (InputStream is = getContext().getContentResolver().openInputStream(chosenImageUri)) {
        return BitmapFactory.decodeStream(is);
      } catch (IOException e) {
        Log.e(TAG, "IO exception loading augmented image bitmap from storage.", e);
      }
    }
    return null;
  }

  // Add a new function that prompts the user to choose an image from
  // device storage.
  void chooseNewImage() {
    android.content.Intent intent = new android.content.Intent(android.content.Intent.ACTION_GET_CONTENT);
    intent.addCategory(android.content.Intent.CATEGORY_OPENABLE);
    intent.setType("image/*");
    startActivityForResult(
        android.content.Intent.createChooser(intent, "Select target augmented image"),
        REQUEST_CODE_CHOOSE_IMAGE);
  }

  // Add a new onActivityResult function to handle the user-selected
  // image, and to reconfigure the ARCore session in the internal
  // ArSceneView.
  @Override
  public void onActivityResult(int requestCode, int resultCode, android.content.Intent data) {
    super.onActivityResult(requestCode, resultCode, data);
    try {
      if (resultCode == android.app.Activity.RESULT_OK) {
        if (requestCode == REQUEST_CODE_CHOOSE_IMAGE) {
          // Get the Uri of the target image.
          chosenImageUri = data.getData();

          // Reconfigure the ARCore session to use the new image.
          Session arcoreSession = getArSceneView().getSession();
          Config config = getSessionConfiguration(arcoreSession);
          config.setUpdateMode(Config.UpdateMode.LATEST_CAMERA_IMAGE);
          arcoreSession.configure(config);
        }
      }
    } catch (Exception e) {
      Log.e(TAG, "onActivityResult - target image selection error ", e);
    }
  }

Now, assuming we already have a good target image in device storage (perhaps one you took with PhotoScan), let's run the app on the device to give it a test.

At startup, the app shows a list of images to select.

Navigate to the target image and tap to select it. Depending on your Android version and device, this UI may look different.

The app starts in AR mode showing the instructions to fit the image you're scanning into the crosshairs.

We can point the camera at the actual object to detect it. If the app properly recognizes the image, the maze should appear on the image.

After the maze appears on the image, move the object around and note how the maze moves with it.

Determine target image quality

To recognize an image, ARCore relies on visual features in the image. Not all images are equally easy to recognize.

The arcoreimg tool in the ARCore Android SDK lets you verify the quality of a target image. We can run this command-line tool to determine how recognizable an image will be to ARCore. The tool outputs a number from 0 to 100, with 100 being the easiest to recognize. Here's an example:

arcore-android-sdk-1.9.0/tools/arcoreimg/macos$ ./arcoreimg eval-img --input_image_path=/Users/username/maze.jpg
100

The last section isn't really about ARCore or Sceneform; it's the extra part that makes this example app fun. It's totally fine to skip it.

We'll use jBullet, an open source physics engine, to handle the physics simulation.

Here's what we're going to do:

  1. Add GreenMaze.obj to the project's assets directory so we can load it at runtime.
  2. Create a PhysicsController class to manage all physics-related functions. Internally, it uses the jBullet physics engine.
  3. Call into PhysicsController whenever an image is recognized, and update the physics on every frame.
  4. Use real-world gravity to move the ball in the maze. Note that we have to scale the ball down a little so it can pass through the gaps in the maze.

Download the PhysicsController.java code and add it to your project in this directory: ./sceneform-android-sdk-1.9.0/samples/augmentedimage/app/src/main/java/com/google/ar/sceneform/samples/augmentedimage/
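
If you're curious what's inside, here's a minimal sketch of the shape of that class, based only on how this codelab calls it (a constructor, plus applyGravityToBall, updatePhysics, and getBallPose). The class name PhysicsControllerSketch, the no-argument constructor, and the fixed 60 Hz time step are illustrative assumptions; the real PhysicsController takes the activity as a constructor argument and also builds a static collision mesh for the maze from GreenMaze.obj (via the de.javagl:obj loader), which is omitted here.

// Sketch only: a stripped-down stand-in for the downloadable PhysicsController.
import com.bulletphysics.collision.broadphase.DbvtBroadphase;
import com.bulletphysics.collision.dispatch.CollisionDispatcher;
import com.bulletphysics.collision.dispatch.DefaultCollisionConfiguration;
import com.bulletphysics.collision.shapes.SphereShape;
import com.bulletphysics.dynamics.DiscreteDynamicsWorld;
import com.bulletphysics.dynamics.RigidBody;
import com.bulletphysics.dynamics.RigidBodyConstructionInfo;
import com.bulletphysics.dynamics.constraintsolver.SequentialImpulseConstraintSolver;
import com.bulletphysics.linearmath.DefaultMotionState;
import com.bulletphysics.linearmath.Transform;
import com.google.ar.core.Pose;
import javax.vecmath.Quat4f;
import javax.vecmath.Vector3f;

public class PhysicsControllerSketch {
  private final DiscreteDynamicsWorld world;
  private final RigidBody ballBody;

  public PhysicsControllerSketch() {
    // Standard jBullet world setup: dispatcher, broadphase, and solver.
    DefaultCollisionConfiguration collisionConfig = new DefaultCollisionConfiguration();
    world = new DiscreteDynamicsWorld(
        new CollisionDispatcher(collisionConfig),
        new DbvtBroadphase(),
        new SequentialImpulseConstraintSolver(),
        collisionConfig);
    // Gravity is applied manually per frame, so disable the world default.
    world.setGravity(new Vector3f(0, 0, 0));

    // A dynamic sphere in original maze-mesh units (radius 6.5; see the
    // ball-scaling discussion later in this codelab).
    SphereShape ballShape = new SphereShape(6.5f);
    float mass = 1.0f;
    Vector3f inertia = new Vector3f(0, 0, 0);
    ballShape.calculateLocalInertia(mass, inertia);
    ballBody = new RigidBody(
        new RigidBodyConstructionInfo(mass, new DefaultMotionState(), ballShape, inertia));
    world.addRigidBody(ballBody);
    // The real class would also add a static rigid body for the maze mesh here.
  }

  public void applyGravityToBall(float[] gravity) {
    // Keep the body active so it continues to respond to forces.
    ballBody.activate();
    ballBody.applyCentralForce(new Vector3f(gravity[0], gravity[1], gravity[2]));
  }

  public void updatePhysics() {
    // Assumed fixed 60 Hz step; the real class may step differently.
    world.stepSimulation(1.0f / 60.0f);
  }

  public Pose getBallPose() {
    Transform transform = ballBody.getCenterOfMassTransform(new Transform());
    Quat4f rotation = transform.getRotation(new Quat4f());
    return new Pose(
        new float[] {transform.origin.x, transform.origin.y, transform.origin.z},
        new float[] {rotation.x, rotation.y, rotation.z, rotation.w});
  }
}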

Then make the following changes to the existing Java code.

In Android Studio, copy GreenMaze.obj from

app > sampledata > green-maze

to:

app > assets

Your project directory should look like this.

In app/build.gradle, add this code.

    // Add these dependencies.
    implementation 'cz.advel.jbullet:jbullet:20101010-1'

    // Obj - a simple Wavefront OBJ file loader
    // https://github.com/javagl/Obj
    implementation 'de.javagl:obj:0.2.1'

In AugmentedImageNode.java, add this code.

// Add these lines at the top with the rest of the imports.
import com.google.ar.core.Pose;
import com.google.ar.sceneform.math.Quaternion;

  // Change ballNode to a member variable.
  private Node ballNode;

  // At the end of the setImage function, add the following code.
  public void setImage(AugmentedImage image) {

    ...
    
    // Add the ball, but this time ballNode is a member variable
    ballNode = new Node();
    ballNode.setParent(this);
    ballNode.setRenderable(ballRenderable);
    ballNode.setLocalPosition(new Vector3(0, 0.1f, 0)); // start position for debugging


    // Add the code below at the end of this function.
    // The maze mesh expects a ball about 13 units in diameter, that is,
    // a radius of 6.5 in the original mesh's units. When the maze is scaled
    // down by maze_scale, the ball's radius must scale down with it, to
    // 6.5 * maze_scale. The sphere was created with a radius of 0.01 m,
    // so scale ballNode by (6.5f * maze_scale) / 0.01f.
    ballNode.setLocalScale(new Vector3(
        6.5f * maze_scale / 0.01f,
        6.5f * maze_scale / 0.01f,
        6.5f * maze_scale / 0.01f));
  }

  public void updateBallPose(Pose pose) {
    if (ballNode == null)
      return;

    ballNode.setLocalPosition(new Vector3(pose.tx() * maze_scale, pose.ty()* maze_scale, pose.tz()* maze_scale));
    ballNode.setLocalRotation(new Quaternion(pose.qx(), pose.qy(), pose.qz(), pose.qw()));
  }

In AugmentedImageActivity.java, add this code.

// Add these lines at the top with the rest of the imports.
import com.google.ar.sceneform.math.Vector3;
import com.google.ar.core.Pose;

  private PhysicsController physicsController; // Declare a PhysicsController member variable.

  // Add a TRACKING case to the onUpdateFrame method.
  private void onUpdateFrame(FrameTime frameTime) {
    
    ...

    for (AugmentedImage augmentedImage : updatedAugmentedImages) {
      switch (augmentedImage.getTrackingState()) {
        
        ...

        case TRACKING:
          // Have to switch to UI Thread to update View.
          fitToScanView.setVisibility(View.GONE);

          // Create a new anchor for newly found images.
          if (!augmentedImageMap.containsKey(augmentedImage)) {
            AugmentedImageNode node = new AugmentedImageNode(this);
            node.setImage(augmentedImage);
            augmentedImageMap.put(augmentedImage, node);
            arFragment.getArSceneView().getScene().addChild(node);

            physicsController = new PhysicsController(this);


          } else {
            // If the image anchor is already created
            AugmentedImageNode node = augmentedImageMap.get(augmentedImage);
            node.updateBallPose(physicsController.getBallPose());

            // Use real-world gravity, (0, -10, 0), as the gravity vector.
            // Convert it into maze (physics-world) coordinates, because the
            // maze mesh has to stay static, then apply it as a force to move
            // the ball.
            Pose worldGravityPose = Pose.makeTranslation(0, -10f, 0);
            Pose mazeGravityPose = augmentedImage.getCenterPose().inverse().compose(worldGravityPose);
            float mazeGravity[] = mazeGravityPose.getTranslation();
            physicsController.applyGravityToBall(mazeGravity);

            physicsController.updatePhysics();
          }
          break;

        case STOPPED:
          AugmentedImageNode node = augmentedImageMap.get(augmentedImage);
          augmentedImageMap.remove(augmentedImage);
          arFragment.getArSceneView().getScene().removeChild(node);
          break;
      }
    }
  }

Then we can get it moving like this.

If you don't have anything but an on-screen monitor image to use as the target, here's a trick to make the app playful for you: use the camera direction as gravity. To do so, we simply need to change the original gravity direction.

In AugmentedImageActivity.java, add this code.

  // Make these changes in onUpdateFrame.
  private void onUpdateFrame(FrameTime frameTime) {
            
            ...

            // Replace this line with the code below.
            // Pose worldGravityPose = Pose.makeTranslation(0, -10f, 0);

            // Fun experiment, use camera direction as gravity
            float cameraZDir[] = frame.getCamera().getPose().getZAxis();
            Vector3 cameraZVector = new Vector3(cameraZDir[0], cameraZDir[1], cameraZDir[2]);
            Vector3 cameraGravity = cameraZVector.negated().scaled(10);
            Pose worldGravityPose = Pose.makeTranslation(
                cameraGravity.x, cameraGravity.y, cameraGravity.z);
            // ...
  }

Now our ARCore-supported device behaves like a leaf blower, and we can use it to blow the ball out of the maze.

Have fun!

Congratulations, you have reached the end of this codelab. Let's look back at what we've achieved.

If you would like to refer to the complete code, you can download it here.
