What you'll build

In this codelab we are going to take an existing simple WebGL app and extend it to be a full VR experience.

What you'll learn

- How to detect connected VR displays with the WebVR API
- How to enter and exit VR presentation mode
- How to render a scene in stereo, with one view per eye
- How to drive the camera from the headset's position and orientation data

What you'll need

- Chrome, with the Web Server for Chrome app installed
- Optionally, an Android device and headset that support Daydream View

This codelab is focused on WebVR. Non-relevant concepts and code blocks are glossed over and are provided for you to simply copy and paste.

Download the Code

Click the following link to download all the code for this codelab:

Download source code

Unpack the downloaded zip file. This will unpack a root folder (webvr-codelab-master), which contains the source for the ThreeJS app that we will be starting from.

Install and verify web server

While you're free to use your own web server, this codelab is designed to work well with the Web Server for Chrome app. If you don't have it installed yet, you can install it from the Chrome Web Store.

Install Web Server for Chrome

After installing the Web Server for Chrome app, click on the Apps shortcut on the bookmarks bar:

In the following window, click on the Web Server icon:

You'll see this dialog next, which allows you to configure your local web server:

Click the choose folder button, and select the webvr-codelab-master folder. This will enable you to serve your work in progress via the URL highlighted in the web server dialog (in the Web Server URL(s) section).

Under Options, check the box next to "Automatically show index.html", as shown below:

Then stop and restart the server by sliding the toggle labeled "Web Server: STARTED" to the left and then back to the right.

Now visit your site in your web browser (by clicking on the highlighted Web Server URL) and you should see a page that looks like this:

This app is not yet doing anything particularly interesting - so far, it's just a gently spinning 3D cube.

What exactly does WebVR do?

Virtual Reality, or VR, is a set of technologies that allow someone to feel like they are inside some content. To provide depth perception you need to show a different image to each eye, from slightly different perspectives. You need to rotate the displayed scene in concert with the rotation of the user's head, to avoid giving the user motion sickness. And you need to distort the images presented to each eye, to cancel out the distortion caused by the lenses in the head-mounted display.

The WebVR API provides the tools to achieve each of these.

Stereo vision

The WebVR API will provide matrices that represent the position and orientation of each eye in the real world. These can be used to set the position and orientation of the virtual camera when rendering the scene.

Hardware abstraction

As well as the view matrices mentioned above, there is another matrix for each eye that sets up a perspective projection that will have the correct field of view for the display.

If you are presenting your scene on a head-mounted display, the API also takes care of correcting the lens distortion for you.
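
As a preview of how these matrices surface in code (we'll wire everything up properly later in this codelab), both the view and projection matrices arrive each frame in a single VRFrameData object. In this sketch, display is assumed to be a VRDisplay obtained from navigator.getVRDisplays():

const frameData = new VRFrameData();
display.getFrameData(frameData);

// View matrices: the position and orientation of each eye.
frameData.leftViewMatrix;         // Float32Array of 16 elements
frameData.rightViewMatrix;

// Projection matrices: the correct field of view for each eye.
frameData.leftProjectionMatrix;
frameData.rightProjectionMatrix;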

Capability information

There are already several different brands and versions of VR hardware, and many more will come in the future. The WebVR API provides a consistent interface to the hardware and lets you query the capabilities of the device so that you can tailor your experience.
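
For example, here is a minimal sketch that lists each connected display and a few of its capabilities (the property names come from the WebVR 1.1 VRDisplayCapabilities interface):

navigator.getVRDisplays().then((displays) => {
  for (const display of displays) {
    const caps = display.capabilities;
    console.log(display.displayName);
    console.log(`  can present: ${caps.canPresent}`);
    console.log(`  tracks position: ${caps.hasPosition}`);
    console.log(`  tracks orientation: ${caps.hasOrientation}`);
    console.log(`  has external display: ${caps.hasExternalDisplay}`);
  }
});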

How ready is WebVR?

WebVR is still experimental, and the API will change before it is stable. However, the API itself is relatively simple and it shouldn't be too difficult to make changes as things progress. All of the core functionality is available and working in multiple browsers, so now is a great time to get started so that you can ship your awesome WebVR experiences as soon as the API is ready.

Different browsers support different hardware. The latest version of Chrome supports Daydream View with a compatible phone. In the future we expect Chrome to support whatever VR hardware the underlying operating system supports.

There are two main ways that we can test our VR experience. If you are using Chrome desktop you can install a DevTools extension that simulates having a VR headset plugged in.

Install WebVR API Emulation

With this extension installed you will get a new tab in your Chrome DevTools that lets you move and rotate a virtual head-mounted display, so that you can confirm that the view is being updated properly.

You can select either 'Translate' or 'Rotate' at the bottom of the window and then use the mouse to change the position or rotation of the virtual display shown in the window. This should change the values being reported by the emulated headset, which we will see in action at the end of the codelab.

Viewing the site on mobile

If you have an Android device plugged in, head to chrome://inspect on your desktop (you'll need to type or paste this URL manually). Click the Port forwarding button and add a rule that maps port 8887 on the device to localhost:8887, where your local web server is running. Press Enter to save the rule.

Now, you should be able to access the demo at http://localhost:8887/ on your mobile phone.

While the API is experimental, the features are actually behind flags. There are two relevant flags. To enable WebVR itself you can visit chrome://flags/#enable-webvr and click 'Enable' on the 'WebVR' option. There is also a flag that allows you to use VR controllers via the Gamepad API. That flag is called 'Gamepad Extensions' and is at chrome://flags/#enable-gamepad-extensions, though this particular codelab doesn't use it.

Let's start adding the VR features to our demo. Right now the demo is almost entirely in the demo.js file, which contains a single class called Demo. The way that we are going to add VR functionality is to create a new class, DemoVR, which extends Demo and overrides certain behavior. This lets us concentrate on the VR functionality in a separate file.

First off we are going to create a new file called demo-vr.js with an empty DemoVR class. Place this file in the same folder as demo.js. Almost all of our code will go in this new file.

demo-vr.js

class DemoVR extends Demo {
}

Then we need to plumb this into the rest of the code. First we need to load the new script after demo.js, where the Demo class is declared, but before app.js where our new class will be used.

index.html

<script src="demo.js"></script>
<!-- Add the line below between the script elements for demo.js and app.js -->
<script src="demo-vr.js"></script>
<script src="app.js"></script>

For browsers that don't support the WebVR API our code already gives us good fallback content, and all of the WebVR related code is going to be in the new class. So we can use feature detection to decide which class to use and get progressive enhancement essentially for free.

app.js

if ('getVRDisplays' in navigator) {
  new DemoVR();
} else {
  new Demo();
}

Here we use the existence of the navigator.getVRDisplays() method as our feature detection.

Finding displays

Now that we know the API is available, we need to find out if there is a suitable display attached. Let's add a method to the class and call it from the constructor.

demo-vr.js

class DemoVR extends Demo {
  constructor() {
    super();

    this._getDisplay = this._getDisplay.bind(this);

    // If a display is connected or disconnected then we need to check that we
    // are still using a valid display
    window.addEventListener('vrdisplayconnect', this._getDisplay);
    window.addEventListener('vrdisplaydisconnect', this._getDisplay);

    this._display = null;

    this._getDisplay();
  }

  // Choose the first available VR display
  _getDisplay() {
    navigator.getVRDisplays().then((displays) => {
      displays = displays.filter(display => display.capabilities.canPresent);
      if (displays.length === 0) {
        this._display = null;
      } else {
        this._display = displays[0];
      }
      console.log(`Current display: ${this._display}`);
    });
  }
}

Let's take a moment to review what is going on here. In the _getDisplay method we are making our first API call. navigator.getVRDisplays returns a Promise that resolves to an array of available displays. If the array is not empty we just choose the first one. Otherwise we set this._display to null. Note that we filter the returned list and check that the display objects have the capabilities.canPresent property set to true. Some devices may support only parts of the WebVR API, and we need a fully immersive headset.

In the constructor we initialize this._display to be null and bind the _getDisplay method so that it can be used as a callback without losing its context.

We also set up event handlers for two new WebVR events. vrdisplayconnect and vrdisplaydisconnect will be fired if the browser detects that a VR display has been plugged in or unplugged. We can use this as a signal that we should check again for a VR display, so the events use the _getDisplay method as their handler.

If you try this out and open up DevTools, you should see a message in the console telling you about the display that was found. If you don't get any displays, take another look at the extension and flags steps above to check that everything is set up correctly.

Now that we have access to a VR display we can finally present our scene to it.

To prevent certain kinds of abuse, you may only start presenting to the display while responding to a 'user gesture' - that is, a click or some similar interaction from the user.

We'll create a button for entering VR mode, and use the button's click event as our gesture.

Let's add the button element and its styles to index.html. The file should now look like this:

index.html

<html>
  <head>
    <title>WebVR Codelab</title>
    <meta charset="utf-8">
    <meta name="viewport" content="width=device-width">
    <style>
      html, body {
        padding: 0;
        margin: 0;
        user-select: none;
        overflow: hidden;
      }

      .vr-toggle {
        position: fixed;
        top: 16px;
        left: 16px;
        background: #9900AA;
        color: #FFF;
        padding: 12px 16px;
        border-radius: 3px;
        border: none;
        font-size: 14px;
      }
    </style>
  </head>
  <body>
    <div id="container"></div>
    <button id="vr-button" class="vr-toggle" style="display: none;">Enable VR</button>
    <script src="third_party/three.min.js"></script>
    <script src="demo.js"></script>
    <script src="demo-vr.js"></script>
    <script src="app.js"></script>
  </body>
</html>

Add a click event handler in the DemoVR class that handles toggling presentation mode.

demo-vr.js

constructor() {
  ...

    this._togglePresent = this._togglePresent.bind(this);
    this._button = document.getElementById('vr-button');
    this._button.addEventListener('click', this._togglePresent);
    this._button.style.display = '';

  ...
}

_togglePresent() {
  if (this._display) {
    if (this._display.isPresenting) {
      this._deactivateVR();
    } else {
      this._activateVR();
    }
  }  
}

Now it's time to hook up the _activateVR and _deactivateVR methods. To ask the display to activate VR presentation mode we need to call the requestPresent method. This method takes an array of things that we want to present to the display. Confusingly, however, the current spec states that there can only be one item, and that the item is a 'layer' object whose main property is source. This shape is to allow the spec to be easily extended in the future.

The important thing to remember here is that the source property of our single layer needs to be a canvas element that has a WebGL context active. For us, this is going to be the ThreeJS renderer's domElement property.

demo-vr.js

_activateVR() {
  if (this._display && !this._display.isPresenting) {
    this._display.requestPresent([{
      source: this._renderer.domElement
    }]);
  }
}
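
One thing worth knowing: requestPresent (like exitPresent) returns a Promise, which rejects if presentation could not start - for example, if the call didn't happen inside a user gesture. A minimal sketch of surfacing that failure would be:

_activateVR() {
  if (this._display && !this._display.isPresenting) {
    this._display.requestPresent([{
      source: this._renderer.domElement
    }]).catch((err) => {
      // Rejected when presentation can't start, e.g. outside a user gesture.
      console.error('Unable to present:', err);
    });
  }
}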

To deactivate the VR display we call exitPresent. The button text gets toggled back by the event handler that we add in the next step.

demo-vr.js

_deactivateVR() {
  if (this._display && this._display.isPresenting) {
    this._display.exitPresent();
  }
}

Finally, we want to toggle the text of the button when we enter or exit presentation mode. Clicking our button is not the only way that this could happen - on a Daydream device, for example, putting the phone into the headset can trigger presentation mode. We can listen for changes in presentation status with the vrdisplaypresentchange event.

demo-vr.js

constructor() {
  ...

  this._presentChanged = this._presentChanged.bind(this);
  window.addEventListener('vrdisplaypresentchange', this._presentChanged);

  ...
}

_presentChanged() {
  if (this._display && this._display.isPresenting) {
    this._button.textContent = 'Exit VR';      
  } else {
    this._button.textContent = 'Enable VR';
  }
}

If you try this out on desktop using the Chrome extension then you should see that nothing actually changes except the button text. If you try it on a real device, however, you should see that something does indeed get presented to the display, but that it doesn't look right at all!

The reason for this is that a VR display expects you to send two separate images, one for each eye, and the way that the WebVR API handles this is to split the canvas in half and present each half to the corresponding eye. The VR Chrome extension just shows them side-by-side so you don't notice the difference, but on real hardware the display will apply a lens-correcting distortion to each image.
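
That split is controlled by the optional leftBounds and rightBounds properties of the layer we passed to requestPresent, each expressed as [x, y, width, height] fractions of the canvas. We rely on the defaults, but spelling them out makes the side-by-side layout explicit (the values below are the spec's defaults):

this._display.requestPresent([{
  source: this._renderer.domElement,
  leftBounds: [0.0, 0.0, 0.5, 1.0],  // left half of the canvas
  rightBounds: [0.5, 0.0, 0.5, 1.0]  // right half of the canvas
}]);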

When you are presenting content to a WebVR display you must always keep in mind that the VR display is not necessarily the same physical device that the page is being shown on. An external head-mounted display might have a completely different screen size and refresh rate, and we should use the display's values rather than the page's.

Resize the canvas and set the display properties

We can get the correct size of the display by calling display.getEyeParameters('left') and display.getEyeParameters('right'). The returned objects each have properties called renderWidth and renderHeight that we can use to calculate the correct size for our canvas. In theory each eye could have a different width and height from the other, so we use whichever is bigger to determine the canvas size. Since the two eye images are side by side, the canvas needs to be twice as wide as an individual eye.

demo-vr.js

_onResize() {
  if (!this._display || !this._display.isPresenting) {
    return super._onResize();
  }

  const leftEye = this._display.getEyeParameters('left');
  const rightEye = this._display.getEyeParameters('right');

  this._width = Math.max(leftEye.renderWidth, rightEye.renderWidth) * 2;
  this._height = Math.max(leftEye.renderHeight, rightEye.renderHeight);

  this._renderer.setSize(this._width, this._height, false);
}

Now we can extend our _presentChanged handler to deal with some other minor differences between presenting and not presenting.

If we are starting to present then we need to set two depth parameters on the display so that they match the current values of our ThreeJS camera. The depth parameters specify which parts of our scene are either too close or too far away from the camera to be drawn. WebVR needs to know these so that it can generate valid projection matrices for us - which we'll cover in the next section. Since we will be manually updating the camera matrices, we also need to turn off the automatic update that happens by default.

Another problem that we need to solve is that, by default, ThreeJS will clear the whole canvas each time you call render, and we need to call it twice per frame to get both eyes. So we need to disable the default behavior and handle clearing the canvas ourselves. We do this by toggling this._renderer.autoClear.

Finally, we need to make sure that our resize handler gets called every time the presentation status changes. Your _presentChanged handler should now look like this:

demo-vr.js

_presentChanged() {
  if (this._display && this._display.isPresenting) {
    this._button.textContent = 'Exit VR';
    this._renderer.autoClear = false;
    this._display.depthNear = this._camera.near;
    this._display.depthFar = this._camera.far;
    this._camera.matrixAutoUpdate = false;
  } else {
    this._button.textContent = 'Enable VR';
    this._renderer.autoClear = true;
    this._camera.matrixAutoUpdate = true;
  }

  this._onResize();
}

Change the _render method to correctly draw the scene in stereo

Now we finally get to the part where we draw the scene in stereo. We override the _render method from the base class, but fall back to using it when we aren't presenting.

As we mentioned in the last section, we need to override the default canvas clearing. We clear the canvas manually at the beginning of the render before we draw the left eye, and we still need to clear the depth buffer before drawing the right eye.

The _renderEye method takes an x position to identify which part of the canvas to draw on. Passing a 0 means that the scene will be drawn from the left edge to the middle, while passing half the canvas width means that the scene will be drawn from the middle to the right edge.

Another consequence of our VR display not being the same as our page display is that we can't use window.requestAnimationFrame to handle the timing of our render loop. Most regular displays these days run at 60 frames per second, while VR displays might use 90 or even 120 frames per second. So instead we change the call to use this._display.requestAnimationFrame.

demo-vr.js

_render() {
  // If we aren't presenting to a display then do the non-VR render
  if (!this._display || !this._display.isPresenting) {
    return super._render();
  }

  // Clear the canvas manually.
  this._renderer.clear();

  // Left eye.
  this._renderEye(0);

  // Ensure that left eye calcs aren't going to interfere with right eye ones.
  this._renderer.clearDepth();

  // Right eye.
  this._renderEye(this._width / 2);

  this._display.requestAnimationFrame(this._update);
}

_renderEye(x) {
  this._renderer.setViewport(x, 0, this._width / 2, this._height);
  this._renderer.render(this._scene, this._camera);
}

If you test this out now, things should start to look almost correct. There are just two closely related problems to fix - at the moment the view is exactly the same for both eyes, and it doesn't change when you move your head.

The WebVR API provides us with position and orientation data in a VRFrameData object. For performance reasons we need to create a single VRFrameData object when our program starts, and then pass it to the API on each frame so that the values can be updated. This saves on the relatively expensive operation of creating an object in each frame.

In our constructor we can add a line to create our frame data, and then update the camera using the data in our _render method. The frame data includes two matrices for each eye. One is a view matrix, which represents the position and orientation of the eye relative to the center of the headset. The other is the projection matrix, which represents each eye's field of view. We can use these two matrices to update our ThreeJS camera.

demo-vr.js

constructor() {
  ...

  this._frameData = new VRFrameData();

  ...
}

_updateCamera(viewMatrix, projectionMatrix) {
  // Use the eye's projection matrix directly.
  this._camera.projectionMatrix.fromArray(projectionMatrix);

  // The view matrix maps world space to eye space, so the camera's world
  // transform is its inverse.
  this._camera.matrix.fromArray(viewMatrix);
  this._camera.matrix.getInverse(this._camera.matrix);
  this._camera.updateMatrixWorld(true);
}

In our renderer we then just need to update the frame data at the beginning of the frame, update the camera for each eye before we render, and then call submitFrame. The finished _render method should look like this:

demo-vr.js

_render() {
  // If we aren't presenting to a display then do the non-VR render
  if (!this._display || !this._display.isPresenting) {
    return super._render();
  }

  this._display.getFrameData(this._frameData);

  // Clear the canvas manually.
  this._renderer.clear();

  // Left eye.
  this._updateCamera(this._frameData.leftViewMatrix, this._frameData.leftProjectionMatrix);
  this._renderEye(0);

  // Ensure that left eye calcs aren't going to interfere with right eye ones.
  this._renderer.clearDepth();

  // Right eye.
  this._updateCamera(this._frameData.rightViewMatrix, this._frameData.rightProjectionMatrix);
  this._renderEye(this._width / 2);

  this._display.submitFrame();
  this._display.requestAnimationFrame(this._update);
}

The call to submitFrame tells the VR hardware that we are done with this frame and are ready to begin processing another. This is the signal that lets the display know that we have finished drawing to the canvas and that we are ready for fresh position and orientation data. Getting this data from the hardware can be expensive, so the browser is allowed to give you the same data over and over again until you call submitFrame. This can be a difficult bug to spot, because the browser is not required to behave this way, so behavior may differ between your test device and your users' devices.
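
One small defensive measure, sketched here on the assumption that you keep the _render structure above: getFrameData returns false when it couldn't populate the frame data, so you can skip drawing that frame entirely.

// Replace the plain getFrameData call at the top of _render with:
if (!this._display.getFrameData(this._frameData)) {
  // No fresh data; schedule the next frame and skip rendering this one.
  this._display.requestAnimationFrame(this._update);
  return;
}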

Now if you use the WebVR extension to move the virtual headset, or use an attached mobile device, you should see the view change with the movement. We're all done!