ARCore Augmented Images

ARCore is a platform for building augmented reality apps on Android. Augmented Images lets you build AR apps that recognize pre-registered images and anchor virtual content on them.

This codelab guides you through modifying an existing ARCore sample app to incorporate Augmented Images that are moving or fixed in place.

What you will build

In this codelab, you're going to build upon a pre-existing ARCore sample app. By the end of the codelab, your app will:

  • Be able to detect an image target and attach a virtual maze to the target
  • Be able to track the moving target, as long as it stays in view



What you'll learn

  • How to use Augmented Images in ARCore in Java.
  • How to gauge the ability of an image to be recognized by ARCore.
  • How to attach virtual content to an image and track its movement.

What you'll need

Make sure you have everything that you need before starting this codelab:

  • A supported ARCore device, connected via a USB cable to your development machine.
  • ARCore 1.9 or later. This APK is normally installed on the device automatically via the Play Store. If your device doesn't have the required version of ARCore, you can install it from the Play Store.
  • A development machine with Android Studio (v3.1 or later).
  • Access to the internet, for downloading libraries during development.

Now that you've got everything ready, let's start!

We'll start by downloading the ARCore Java SDK from GitHub: arcore-android-sdk-1.18.1.zip. Unzip it to your preferred location. The extracted folder is called arcore-android-sdk-1.18.1.

Launch Android Studio, and click Open an existing Android Studio project.


Navigate to this unzipped folder:

arcore-android-sdk-1.18.1/samples/augmented_image_java

Click Open.

Wait for Android Studio to finish syncing the project. If Android Studio doesn't have the required components, the sync may fail with the message "Install missing platform and sync project". Follow the instructions to fix the problem.

Now that you have a working ARCore app project, let's give it a test run.

Connect your ARCore device to the development machine, and use menu Run > Run 'app' to run the debug version on the device. In the dialog prompting you to choose which device to run on, choose the connected device, and click OK.



This sample project uses targetSdkVersion 28. If you have a build error such as Failed to find Build Tools revision 28.0.3, follow the instructions described in Android Studio to download and install the required Android Build Tools version.

If everything is successful, the sample app launches on the device and prompts you for permission to allow Augmented Image to take pictures and videos. Tap ALLOW to grant permission.

Let's give our sample app an image to look at.

Back in Android Studio, in the Project window, navigate to app > assets, and double-click the file default.jpg to open it.


Point your device camera at the image of the Earth on screen, and follow the instructions to fit the image you're scanning into the crosshairs.

An image frame will be overlaid on top of the image.


Next, we'll make small improvements to the sample app.

As we mentioned at the beginning of this codelab, we're going to have a little maze game on the image. First, let's find a maze model on poly.google.com, which contains many 3D models under the CC-BY license for free use.

For this codelab we're going to use "Circle Maze - Green," by Evol, and licensed under CC-BY 3.0.


Follow these steps to download the model and get it into Android Studio:

  1. Navigate to the Poly page for the model.
  2. Click Download, and select OBJ File.

This downloads a file called green-maze.zip.

  3. Unzip green-maze.zip and copy the content to this location: arcore-android-sdk-1.18.1/samples/augmented_image_java/app/src/main/assets/models/green-maze
  4. In Android Studio, navigate to app > assets > models > green-maze.

There should be two files in this folder: GreenMaze.obj and GreenMaze.mtl.


Next, we'll load this OBJ file and display it over the detected image.

Now that we have the 3D model, GreenMaze.obj, let's display it on top of our image.

  1. In AugmentedImageRenderer.java, add a member variable called mazeRenderer to render the maze model. Because the maze should attach to the image, it makes sense to put the mazeRenderer inside the AugmentedImageRenderer class.
  2. In the createOnGlThread function, load GreenMaze.obj. For simplicity, we'll reuse the image frame texture (frame_base.png) as the maze's texture.
  3. In the draw function, adjust the size of the maze to match the detected image, and draw it.

In AugmentedImageRenderer.java, make these changes.

  // Add a member variable to hold the maze model. 
  private final ObjectRenderer mazeRenderer = new ObjectRenderer();

  // Replace the definition of the createOnGlThread function with the
  // following code, which loads GreenMaze.obj.
  public void createOnGlThread(Context context) throws IOException {

    mazeRenderer.createOnGlThread(
        context, "models/green-maze/GreenMaze.obj", "models/frame_base.png");
    mazeRenderer.setMaterialProperties(0.0f, 3.5f, 1.0f, 6.0f);

  }

  // Replace the definition of the draw function with the
  // following code
  public void draw(
      float[] viewMatrix,
      float[] projectionMatrix,
      AugmentedImage augmentedImage,
      Anchor centerAnchor,
      float[] colorCorrectionRgba) {
    float[] tintColor =
        convertHexToColor(TINT_COLORS_HEX[augmentedImage.getIndex() % TINT_COLORS_HEX.length]);

    final float maze_edge_size = 492.65f; // Length of the maze model's longest edge, from the OBJ file
    final float max_image_edge = Math.max(augmentedImage.getExtentX(), augmentedImage.getExtentZ()); // Longest detected image edge

    Pose anchorPose = centerAnchor.getPose();

    float mazsScaleFactor = max_image_edge / maze_edge_size; // Scale the maze to match the image size
    float[] modelMatrix = new float[16];

    // OpenGL matrix operations are applied in the order scale, rotation,
    // then translation, so this manual offset is applied after scaling.
    // The values 251.3f and 129.0f come from the maze OBJ file: the model
    // is not centered around the origin, so we shift it back to center.
    // Models you author yourself normally won't need this adjustment.
    Pose mozeModelLocalOffset = Pose.makeTranslation(
                                -251.3f * mazsScaleFactor,
                                0.0f,
                                129.0f * mazsScaleFactor);
    anchorPose.compose(mozeModelLocalOffset).toMatrix(modelMatrix, 0);
    mazeRenderer.updateModelMatrix(modelMatrix, mazsScaleFactor, mazsScaleFactor/10.0f, mazsScaleFactor);
    mazeRenderer.draw(viewMatrix, projectionMatrix, colorCorrectionRgba, tintColor);
  }

That's all the code needed to display the maze on top of our default.jpg picture of the Earth.

The code above uses some magic numbers. They're needed only because we don't have full control over this third-party 3D model, so the OBJ file was parsed by hand to determine the model's center position and size; here are the resulting values. The maze model's dimensions are 492.65 x 120 x 492.65, with its center at (251.3, 60, -129.0). The ranges of its vertices' X, Y, and Z coordinates are [5.02, 497.67], [0, 120], and [-375.17, 117.25], respectively. To match the detected image, the maze's scale must therefore be image_size / 492.65. And because the maze model is not centered around the origin (0, 0, 0), we need to apply the offset mozeModelLocalOffset manually.
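Those values come from hand-inspecting the OBJ file. As a rough illustration of that step (a hypothetical helper, not part of the sample), here's a minimal scanner that reads the "v x y z" vertex lines of a Wavefront OBJ and reports the model's bounding box and center:

```java
import java.util.Locale;

// Minimal Wavefront OBJ bounds scanner: reads "v x y z" vertex lines and
// tracks per-axis min/max, from which the model's size and center follow.
public class ObjBounds {
  public static float[] min = {Float.MAX_VALUE, Float.MAX_VALUE, Float.MAX_VALUE};
  public static float[] max = {-Float.MAX_VALUE, -Float.MAX_VALUE, -Float.MAX_VALUE};

  public static void scan(String objText) {
    for (String line : objText.split("\n")) {
      if (!line.startsWith("v ")) continue; // vertex positions only; skip vn/vt/f lines
      String[] parts = line.trim().split("\\s+");
      for (int axis = 0; axis < 3; axis++) {
        float v = Float.parseFloat(parts[1 + axis]);
        min[axis] = Math.min(min[axis], v);
        max[axis] = Math.max(max[axis], v);
      }
    }
  }

  public static float center(int axis) { return (min[axis] + max[axis]) / 2f; }

  public static float size(int axis) { return max[axis] - min[axis]; }

  public static void main(String[] args) {
    // Two stand-in vertices carrying the maze's reported coordinate extremes.
    scan("v 5.02 0 -375.17\nv 497.67 120 117.25");
    System.out.printf(Locale.US, "center=(%.1f, %.1f, %.1f) size=(%.2f, %.2f, %.2f)%n",
        center(0), center(1), center(2), size(0), size(1), size(2));
  }
}
```

Running this on the two extreme vertices above reproduces the X edge size of 492.65 and a center near (251.3, 60, -129.0).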

Also, because the maze walls are still a bit too high for our codelab, let's scale their height by an additional factor of 0.1. This lowers the walls so that the gaps are more visible. To do this, we'll introduce a helper function that lets us scale the X, Y, and Z coordinates unevenly.

In augmentedimage/rendering/ObjectRenderer.java, make these changes.

  public void updateModelMatrix(float[] modelMatrix, float scaleFactorX, float scaleFactorY, float scaleFactorZ) {
    float[] scaleMatrix = new float[16];
    Matrix.setIdentityM(scaleMatrix, 0);
    scaleMatrix[0] = scaleFactorX;
    scaleMatrix[5] = scaleFactorY;
    scaleMatrix[10] = scaleFactorZ;
    Matrix.multiplyMM(this.modelMatrix, 0, modelMatrix, 0, scaleMatrix, 0);
  }
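Why indices 0, 5, and 10? OpenGL matrices are stored column-major as flat 16-element arrays, so those are the diagonal entries that scale X, Y, and Z. Here's a small plain-Java sketch of the same idea (a stand-in for android.opengl.Matrix with illustrative names, so it runs outside Android):

```java
// Plain-Java illustration of the column-major 4x4 scale matrix built by
// updateModelMatrix: indices 0, 5, and 10 hold the X, Y, Z scale factors.
public class ScaleDemo {
  public static float[] scaleMatrix(float sx, float sy, float sz) {
    float[] m = new float[16];
    m[0] = sx;  // column 0, row 0 -> scales X
    m[5] = sy;  // column 1, row 1 -> scales Y
    m[10] = sz; // column 2, row 2 -> scales Z
    m[15] = 1f; // homogeneous coordinate
    return m;
  }

  // Multiply the point (x, y, z, 1) by a column-major 4x4 matrix.
  public static float[] transform(float[] m, float x, float y, float z) {
    return new float[] {
      m[0] * x + m[4] * y + m[8] * z + m[12],
      m[1] * x + m[5] * y + m[9] * z + m[13],
      m[2] * x + m[6] * y + m[10] * z + m[14],
    };
  }

  public static void main(String[] args) {
    // Scale Y by an extra 0.1, as the codelab does for the maze walls.
    float s = 0.5f;
    float[] m = scaleMatrix(s, s / 10f, s);
    float[] p = transform(m, 1f, 1f, 1f); // -> approximately (0.5, 0.05, 0.5)
    System.out.printf("(%f, %f, %f)%n", p[0], p[1], p[2]);
  }
}
```

Passing factors (s, s / 10f, s), as draw() does, squashes only the Y axis, lowering the maze walls without changing the footprint.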

Run the app on your ARCore-supported device. The maze should now be the same size as the image.


Now let's add an object that moves around inside the maze. In this codelab, we'll keep it simple and use the Android figurine, andy.obj, included in the ARCore Android SDK, which looks clearly different from the green maze that we render on top of the image.

Add this code in AugmentedImageRenderer.java.

  // Add a private member to render Andy.
  private final ObjectRenderer andyRenderer = new ObjectRenderer();


  public void createOnGlThread(Context context) throws IOException {


    // Add initialization for andyRenderer at the end of the createOnGlThread function.
    andyRenderer.createOnGlThread(
        context, "models/andy.obj", "models/andy.png");
    andyRenderer.setMaterialProperties(0.0f, 3.5f, 1.0f, 6.0f);
  }

  public void draw(
      float[] viewMatrix,
      float[] projectionMatrix,
      AugmentedImage augmentedImage,
      Anchor centerAnchor,
      float[] colorCorrectionRgba) {


    // At the end of the draw() function, add code to display Andy standing on top of the maze.
    Pose andyModelLocalOffset = Pose.makeTranslation(
        0.0f,
        0.1f,
        0.0f);
    anchorPose.compose(andyModelLocalOffset).toMatrix(modelMatrix, 0);
    andyRenderer.updateModelMatrix(modelMatrix, 0.05f); // 0.05f is an empirically chosen scale factor for Andy
    andyRenderer.draw(viewMatrix, projectionMatrix, colorCorrectionRgba, tintColor);

  }

Then run the app on your device. You should see Andy standing on top of the maze.


Determine target image quality

To recognize an image, ARCore relies on visual features in the image. Not all images have the same quality; some are easier to recognize than others.

The arcoreimg tool in the ARCore Android SDK lets you verify the quality of a target image. Run this command-line tool to determine how recognizable an image will be to ARCore. It outputs a number between 0 and 100, with 100 being the easiest to recognize. Here's an example:

arcore-android-sdk-1.18.1/tools/arcoreimg/macos$ ./arcoreimg eval-img --input_image_path=/Users/username/maze.jpg
100

This last section isn't strictly about ARCore, but it's an additional part that makes the example app fun. Feel free to skip it.

We'll use jBullet, an open-source physics engine, to handle the physics simulation.

Here's what we're going to do:

  1. Add GreenMaze.obj to the project's assets directory so we can load it at runtime.
  2. Create a PhysicsController class to manage all physics-related functions. Internally, it uses the jBullet physics engine.
  3. Call PhysicsController when an image is recognized, and call updatePhysics on each frame.
  4. Use real-world gravity to move the ball in the maze. Note that we have to scale down the ball slightly so it can pass through the gaps in the maze.
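Step 4 is the subtle one: since the maze mesh is kept static in the physics world, world gravity must be re-expressed in the image's local frame on every frame. Here's a plain-Java sketch of that conversion (a hypothetical stand-in for ARCore's Pose.inverse().compose(), using quaternion rotation; the names are illustrative):

```java
// Sketch of step 4: express world gravity (0, -10, 0) in the image's local
// frame by rotating it with the inverse (conjugate) of the image's rotation
// quaternion. Quaternions are stored as (x, y, z, w), as in ARCore.
public class GravityDemo {
  // Rotate vector v by the conjugate of the unit quaternion q.
  public static float[] rotateByInverse(float[] q, float[] v) {
    float x = -q[0], y = -q[1], z = -q[2], w = q[3]; // conjugate = inverse for unit q
    // t = 2 * cross(q.xyz, v); v' = v + w * t + cross(q.xyz, t)
    float tx = 2f * (y * v[2] - z * v[1]);
    float ty = 2f * (z * v[0] - x * v[2]);
    float tz = 2f * (x * v[1] - y * v[0]);
    return new float[] {
      v[0] + w * tx + (y * tz - z * ty),
      v[1] + w * ty + (z * tx - x * tz),
      v[2] + w * tz + (x * ty - y * tx),
    };
  }

  public static void main(String[] args) {
    // Image rotated 90 degrees about Z: quaternion (0, 0, sin 45, cos 45).
    float s = (float) Math.sqrt(0.5);
    float[] imageRotation = {0f, 0f, s, s};
    float[] worldGravity = {0f, -10f, 0f};
    float[] mazeGravity = rotateByInverse(imageRotation, worldGravity);
    System.out.printf("(%f, %f, %f)%n", mazeGravity[0], mazeGravity[1], mazeGravity[2]);
  }
}
```

For example, if the image is tilted 90 degrees about Z, straight-down gravity (0, -10, 0) becomes roughly (-10, 0, 0) in maze space, so the ball rolls "sideways" relative to the static mesh.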

Download the PhysicsController.java code and add it to your project in this directory: arcore-android-sdk-1.18.1/samples/augmented_image_java/app/src/main/java/com/google/ar/core/examples/java/augmentedimage/

Then make the following changes to the existing Java code.

In Android Studio, copy GreenMaze.obj from

app > assets > models > green-maze

to:

app > assets

In app/build.gradle, add this code.

    // Add these dependencies.
    implementation 'cz.advel.jbullet:jbullet:20101010-1'

    // Obj - a simple Wavefront OBJ file loader
    // https://github.com/javagl/Obj
    implementation 'de.javagl:obj:0.2.1'

In AugmentedImageRenderer.java, add this code.

  // Add a member variable to hold Andy's latest pose.
  private Pose andyPose = Pose.IDENTITY;

  public void draw(
      float[] viewMatrix,
      float[] projectionMatrix,
      AugmentedImage augmentedImage,
      Anchor centerAnchor,
      float[] colorCorrectionRgba) {

    // Use this code to replace the previous code for rendering the Andy object.
    // Adjust Andy's rendering position: Andy's pose is in the maze mesh's
    // vertex coordinate space, so scale it into image space.
    Pose andyPoseInImageSpace = Pose.makeTranslation(
        andyPose.tx() * mazsScaleFactor,
        andyPose.ty() * mazsScaleFactor,
        andyPose.tz() * mazsScaleFactor);

    anchorPose.compose(andyPoseInImageSpace).toMatrix(modelMatrix, 0);
    andyRenderer.updateModelMatrix(modelMatrix, 0.05f);
    andyRenderer.draw(viewMatrix, projectionMatrix, colorCorrectionRgba, tintColor);
  }

  // Add a new utility function to receive Andy pose updates
  public void updateAndyPose(Pose pose) {
    andyPose = pose;
  }

In AugmentedImageActivity.java, add this code.

import com.google.ar.core.Pose;

  // Declare a PhysicsController member variable.
  private PhysicsController physicsController;

  // Update the case clause for TRACKING as below
  private void drawAugmentedImages(
   
    ...

        case TRACKING:
          // Have to switch to UI Thread to update View.
          this.runOnUiThread(
              new Runnable() {
                @Override
                public void run() {
                  fitToScanView.setVisibility(View.GONE);
                }
              });

          // Create a new anchor for newly found images.
          if (!augmentedImageMap.containsKey(augmentedImage.getIndex())) {
            Anchor centerPoseAnchor = augmentedImage.createAnchor(augmentedImage.getCenterPose());
            augmentedImageMap.put(
                augmentedImage.getIndex(), Pair.create(augmentedImage, centerPoseAnchor));

            physicsController = new PhysicsController(this);
          } else {
            Pose ballPose = physicsController.getBallPose();
            augmentedImageRenderer.updateAndyPose(ballPose);


            // Use real world gravity, (0, -10, 0) as gravity
            // Convert to Physics world coordinate (because Maze mesh has to be static)
            // Use it as a force to move the ball
            Pose worldGravityPose = Pose.makeTranslation(0, -10f, 0);
            Pose mazeGravityPose = augmentedImage.getCenterPose().inverse().compose(worldGravityPose);
            float mazeGravity[] = mazeGravityPose.getTranslation();
            physicsController.applyGravityToBall(mazeGravity);

            physicsController.updatePhysics();
          }
          break;

Then we can get it moving like this.


Have fun!

Congratulations, you have reached the end of this codelab. Let's look back at what we achieved:

  • Built and ran an ARCore AugmentedImage Java sample.
  • Updated the sample to auto-focus on near images, and made the image frame align with image size.
  • Updated the sample to use a user-specified image as a target.
  • Updated the sample to display a maze model on the image, at the proper scale.
  • Utilized the pose of the image to do something fun.

If you would like to refer to the complete code, you can download it here.
