Web Capabilities

We want to close the capability gap between the web and native and make it easy for developers to build great experiences on the open web. We strongly believe that every developer should have access to the capabilities they need to make a great web experience, and we are committed to a more capable web.

There are, however, some capabilities, like file system access, idle detection, and more, that are available to native apps but aren't available on the web. These missing capabilities mean some types of apps can't be delivered on the web, or are less useful than they could be.

We will design and develop these new capabilities in an open and transparent way, using the existing open web platform standards processes and getting early feedback from developers and other browser vendors as we iterate, to ensure that the resulting designs are interoperable.

What you'll build

In this codelab, you'll play around with a number of brand-new—or sometimes even only "available-behind-a-flag"—web APIs. Rather than building a finished product, you'll focus on the APIs themselves and on the use cases they unlock.

What you'll learn

This codelab will teach you the basic mechanics of several bleeding edge APIs. Note that these mechanics aren't set in stone quite yet, and we very much appreciate your feedback on the developer flow.

What you'll need

As the APIs featured in this codelab are really bleeding edge, requirements for each API will vary. Please make sure to carefully read the compatibility information at the beginning of each section.

How to approach the codelab

The codelab is not necessarily meant to be worked through sequentially. Each section represents an independent API, so feel free to cherry-pick whatever interests you the most.

Download the code

Click the following link to download all the code for this codelab:

Download source code

Unpack the downloaded zip file. This extracts a root folder (web-capabilities-master) that contains one folder for each step of this codelab, along with all the resources you will need.

The goal of the Badging API is to bring users' attention to things that happen in the background. For the sake of simplicity, the demo in this codelab uses the API to bring users' attention to something that's happening in the foreground; you can then make the mental transfer to things that happen in the background.

Install Airhorner

For this API to work, you need a PWA that is installed to the home screen, so the first step is to install one, for example, the infamous airhorner.com. Hit the "Install" button in the top right corner or use the three-dot menu to install manually.

This will show a confirmation prompt; click "Install".

You now have a new icon on your operating system's dock. Click it to launch the PWA. It will have its own app window and run in standalone mode.

Setting a badge

Now that you have a PWA installed, you need some numeric data (badges can only contain numbers) to display on a badge. A straightforward thing to count in The Air Horner™ is, sigh, the number of times it has been horned. In order to get this set up, right-click anywhere in the PWA's window and open the Chrome Developer Tools (DevTools) by selecting "Inspect" in the menu.

In the DevTools window that pops up, navigate to the Console tab, paste the code below, and hit return.

let hornCounter = 0;
const horn = document.querySelector('.horn');
horn.addEventListener('click', () => {
  ExperimentalBadge.set(++hornCounter);
});

Sound the airhorn a couple of times and check the PWA's icon: it will update every. single. time. the airhorn sounds. As easy as that.

Clearing a badge

Let's reset the counter. Again in the DevTools Console tab, paste the line below and hit return.

window.ExperimentalBadge.set(0);

Alternatively, you can get rid of the badge by explicitly clearing it, as shown in the following snippet. Your PWA's icon should now look like it did at the beginning: clean and without a badge.

window.ExperimentalBadge.clear()
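
Since the API is still experimental, it's a good idea to guard your calls. Here's a minimal sketch, assuming the same window.ExperimentalBadge entry point used above, that turns the badge calls into no-ops in browsers where the API isn't exposed:

if ('ExperimentalBadge' in window) {
  window.ExperimentalBadge.set(5);   // show a numeric badge
  window.ExperimentalBadge.clear();  // remove it again
} else {
  console.warn('Badging API not available in this browser.');
}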

Feedback

What did you think of this API? Please help us by briefly responding to this survey:

Was this API intuitive to use?

Yes No

Did you get the example to run?

Yes No

Got more to say? Were there missing features? Please provide quick feedback in this survey. Thank you!

The Shape Detection API provides access to accelerated shape detectors (e.g., for human faces) and works on still images and/or live image feeds. Operating systems have performant and highly optimized feature detectors such as the Android FaceDetector. The Shape Detection API opens up these native implementations and exposes them through a set of JavaScript interfaces.

Currently, the supported features are face detection through the FaceDetector interface, barcode detection through the BarcodeDetector interface, and text detection (Optical Character Recognition) through the TextDetector interface.
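
Before relying on any of them, it's worth checking what the current browser actually exposes. A quick feature detection sketch could look like this:

// Check which of the (still experimental) shape detectors are exposed.
const support = {
  face: 'FaceDetector' in window,
  barcode: 'BarcodeDetector' in window,
  text: 'TextDetector' in window,
};
console.table(support);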

Face Detection

A fascinating feature of the Shape Detection API is face detection. To test it, we need a page with faces; this page with my (i.e., @tomayac's) face is a good start. It will look something like the screenshot below. On a supported browser, the bounding box of my face as well as the face landmarks will be recognized.

You can see how little code was required to make this happen by remixing or editing the Glitch project, especially the script.js file.
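
Stripped down to its core, the detection step boils down to very few lines. As a minimal sketch (not the demo's exact code, and assuming a hypothetical <img id="photo"> element on the page), it could look like this:

(async () => {
  const img = document.querySelector('#photo');
  // The constructor options are part of the Shape Detection API proposal.
  const faceDetector = new FaceDetector({maxDetectedFaces: 5, fastMode: false});
  const faces = await faceDetector.detect(img);
  // Each detected face reports a bounding box and, where supported, landmarks.
  faces.forEach((face) => console.log(face.boundingBox, face.landmarks));
})();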

If you want to go fully dynamic and not just work with my face, in your browser, navigate to this Google Search Engine Results Page full of faces in a private tab or in guest mode. Now on that page, open the Chrome Developer Tools by right-clicking anywhere and then clicking on "Inspect". Next, on the Console tab, paste in the snippet below. The code will highlight detected faces with a semi-transparent red box.

document.querySelectorAll('img[alt]:not([alt=""])').forEach(async (img) => {
  try {
    const faces = await new FaceDetector().detect(img);
    faces.forEach(face => {
      const div = document.createElement('div');
      const box = face.boundingBox;
      const computedStyle = getComputedStyle(img);
      const [top, right, bottom, left] = [
        computedStyle.marginTop,
        computedStyle.marginRight,
        computedStyle.marginBottom,
        computedStyle.marginLeft
      ].map(m => parseInt(m, 10));
      const scaleX = img.width / img.naturalWidth;
      const scaleY = img.height / img.naturalHeight;
      div.style.backgroundColor = 'rgba(255, 0, 0, 0.5)';
      div.style.position = 'absolute';
      div.style.top = `${scaleY * box.top + top}px`;
      div.style.left = `${scaleX * box.left + left}px`;
      div.style.width = `${scaleX * box.width}px`;
      div.style.height = `${scaleY * box.height}px`;
      img.before(div);
    });
  } catch(e) {
    console.error(e);
  }
});

You will note that there are some DOMExceptions and not all images are being processed. This is because the above-the-fold images are inlined as data URIs and can thus be accessed, whereas the below-the-fold images come from a different domain that isn't configured to support CORS. For the sake of the demo, we don't need to worry about this.

Face landmark detection

For bonus points, in addition to faces per se, macOS also supports the detection of face landmarks. To test it, paste the following snippet into the Console. Reminder: the alignment of the landmarks isn't perfect yet due to crbug.com/914348, but you can see where this is headed and how powerful this feature can be.

document.querySelectorAll('img[alt]:not([alt=""])').forEach(async (img) => {
  try {
    const faces = await new FaceDetector().detect(img);
    faces.forEach(face => {
      const div = document.createElement('div');
      const box = face.boundingBox;
      const computedStyle = getComputedStyle(img);
      const [top, right, bottom, left] = [
        computedStyle.marginTop,
        computedStyle.marginRight,
        computedStyle.marginBottom,
        computedStyle.marginLeft
      ].map(m => parseInt(m, 10));
      const scaleX = img.width / img.naturalWidth;
      const scaleY = img.height / img.naturalHeight;
      div.style.backgroundColor = 'rgba(255, 0, 0, 0.5)';
      div.style.position = 'absolute';
      div.style.top = `${scaleY * box.top + top}px`;
      div.style.left = `${scaleX * box.left + left}px`;
      div.style.width = `${scaleX * box.width}px`;
      div.style.height = `${scaleY * box.height}px`;
      img.before(div);

      const landmarkSVG = document.createElementNS('http://www.w3.org/2000/svg', 'svg');
      landmarkSVG.style.position = 'absolute';
      landmarkSVG.classList.add('landmarks');
      landmarkSVG.setAttribute('viewBox', `0 0 ${img.width} ${img.height}`);
      landmarkSVG.style.width = `${img.width}px`;
      landmarkSVG.style.height = `${img.height}px`;
      face.landmarks.forEach((landmark) => {
        landmarkSVG.innerHTML += `<polygon class="landmark-${landmark.type}" points="${
          landmark.locations.map((point) => {
            return `${scaleX * point.x},${scaleY * point.y}`;
          }).join(' ')
        }" />`;
      });
      div.before(landmarkSVG);
    });
  } catch(e) {
    console.error(e);
  }
});

Barcode Detection

The second feature of the Shape Detection API is barcode detection. As before, we need a page with barcodes; for example, this one will do the job. When you open it in a supported browser, you will see the various QR codes deciphered. Remix or edit the Glitch project, especially the script.js file, to see how it's done.
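
At its core, the detection step looks very similar to face detection. Here's a minimal sketch, assuming a hypothetical <img id="code"> element with a barcode in it:

(async () => {
  const img = document.querySelector('#code');
  const barcodes = await new BarcodeDetector().detect(img);
  // Each detected barcode reports its raw value, format, and bounding box.
  barcodes.forEach((barcode) => {
    console.log(barcode.rawValue, barcode.format, barcode.boundingBox);
  });
})();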

If you want something more dynamic, we can again use Google Image Search. This time, in your browser, navigate to this Google Search Engine Results Page in a private tab or in guest mode. Now paste the snippet below into the Chrome DevTools Console tab. After a short moment, the recognized barcodes will be annotated with their raw value and barcode format.

document.querySelectorAll('img[alt]:not([alt=""])').forEach(async (img) => {
  try {
    const barcodes = await new BarcodeDetector().detect(img);
    barcodes.forEach(barcode => {
      const div = document.createElement('div');
      const box = barcode.boundingBox;
      const computedStyle = getComputedStyle(img);
      const [top, right, bottom, left] = [
        computedStyle.marginTop,
        computedStyle.marginRight,
        computedStyle.marginBottom,
        computedStyle.marginLeft
      ].map(m => parseInt(m, 10));
      const scaleX = img.width / img.naturalWidth;
      const scaleY = img.height / img.naturalHeight;
      div.style.backgroundColor = 'rgba(255, 255, 255, 0.75)';
      div.style.position = 'absolute';
      div.style.top = `${scaleY * box.top + top}px`;
      div.style.left = `${scaleX * box.left + left}px`;
      div.style.width = `${scaleX * box.width}px`;
      div.style.height = `${scaleY * box.height}px`;
      div.style.color = 'black';
      div.style.fontSize = '14px';
      div.textContent = `${barcode.rawValue} (${barcode.format})`;
      img.before(div);
    });
  } catch(e) {
    console.error(e);
  }
});

Text Detection

The final feature of the Shape Detection API is text detection. By now you know the drill: we need a page with images that contain text, like this one with Google Books scan results. On supported browsers, you will see the text recognized and a bounding box drawn around text passages. Remix or edit the Glitch project, especially the script.js file, to see how it's done.
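
As before, the core call is just a few lines. Here's a minimal sketch, assuming a hypothetical <img id="scan"> element containing text:

(async () => {
  const img = document.querySelector('#scan');
  const texts = await new TextDetector().detect(img);
  // Each detected passage reports the recognized raw value and its bounding box.
  texts.forEach((text) => console.log(text.rawValue, text.boundingBox));
})();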

To test this dynamically, head over to this Search Engine Results Page in a private tab or in guest mode. Now paste the snippet below into the Chrome DevTools Console tab; after a little bit of waiting, some of the text will be recognized.

document.querySelectorAll('img[alt]:not([alt=""])').forEach(async (img) => {
  try {
    const texts = await new TextDetector().detect(img);
    texts.forEach(text => {
      const div = document.createElement('div');
      const box = text.boundingBox;
      const computedStyle = getComputedStyle(img);
      const [top, right, bottom, left] = [
        computedStyle.marginTop,
        computedStyle.marginRight,
        computedStyle.marginBottom,
        computedStyle.marginLeft
      ].map(m => parseInt(m, 10));
      const scaleX = img.width / img.naturalWidth;
      const scaleY = img.height / img.naturalHeight;
      div.style.backgroundColor = 'rgba(255, 255, 255, 0.75)';
      div.style.position = 'absolute';
      div.style.top = `${scaleY * box.top + top}px`;
      div.style.left = `${scaleX * box.left + left}px`;
      div.style.width = `${scaleX * box.width}px`;
      div.style.height = `${scaleY * box.height}px`;
      div.style.color = 'black';
      div.style.fontSize = '14px';
      div.textContent = text.rawValue;
      img.before(div);
    });
  } catch(e) {
    console.error(e);
  }
});

Feedback

What did you think of this API? Please help us by briefly responding to this survey:

Was this API intuitive to use?

Yes No

Did you get the example to run?

Yes No

Got more to say? Were there missing features? Please provide quick feedback in this survey. Thank you!

The Web Share Target API allows installed web apps to register with the underlying operating system as a share target to receive shared content from either the Web Share API or system events, like the operating-system-level share button.

Install a PWA to share to

As a first step, you need a PWA that you can share to. This time, Airhorner (luckily) won't do the job, but the Web Share Target Demo App has your back. Install the app to your device's home screen.

Share something to the PWA

Next, you need something to share, for example, a Google I/O session. On your device's browser, navigate to the Unlocking New Capabilities for the Web session. Then, in the three-dot menu, tap "Share".

Finally, in the share sheet, locate the Web Share Target Demo App you have installed before.

When you tap the app icon, you will land straight in the app with, in this case, the title and text fields populated. You will notice that the url field is empty. This is a consequence of Android not supporting URLs in its share system. The spec recommends parsing the text field for potential URLs, which the demo skips for the sake of brevity until crbug.com/789379 gets resolved.
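
If you wanted to follow that recommendation yourself, a hedged sketch could be as simple as scanning the shared text for something URL-shaped (extractUrl and the URL below are purely illustrative):

// Look for a URL inside the shared text field when the OS doesn't pass one.
function extractUrl(text) {
  const match = text.match(/https?:\/\/\S+/);
  return match ? match[0] : null;
}

console.log(extractUrl('Interesting session: https://example.com/session'));
// → 'https://example.com/session'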

So how does this work? To find out, explore the Web Share Target Demo App's Web App Manifest. The configuration that makes the Web Share Target API work lives in the manifest's "share_target" property: its "action" field points to a URL that gets decorated with the parameters listed in "params".

The sharing side then populates this URL template accordingly (either facilitated by the browser via the three-dot menu, or—better—controlled programmatically by the developer using the Web Share API), so that the receiving side can extract the parameters and do something with them, for example, display them as you can see in the highlighted area in the screenshot.

{
  "name": "Web Share Target Test App",
  [...]
  "share_target": {
    "action": "sharetarget.html",
    "params": {
      "title": "title",
      "text": "text",
      "url": "url"
    }
  },
  [...]
}
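
On the receiving end, the page behind the "action" URL (sharetarget.html in this demo) can read the incoming GET parameters named in "params". A hedged sketch of what that could look like:

// Read the shared data from the query string of the share target page.
const shareParams = new URL(window.location.href).searchParams;
console.log({
  title: shareParams.get('title'),
  text: shareParams.get('text'),
  url: shareParams.get('url'),
});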

Feedback

What did you think of this API? Please help us by briefly responding to this survey:

Was this API intuitive to use?

Yes No

Did you get the example to run?

Yes No

Got more to say? Were there missing features? Please provide quick feedback in this survey. Thank you!

To avoid draining the battery, most devices quickly go to sleep when left idle. While this is fine most of the time, some applications need to keep the screen or the device awake in order to complete their work. The Wake Lock API provides a way to prevent the device from dimming and locking the screen or prevent the device from going to sleep. This capability enables new experiences that, until now, required a native app.

Set up a screensaver

In order to test the Wake Lock API, you must first ensure that your device would actually go to sleep. To do so, in your operating system's preference pane, activate a screensaver of your choice and make sure it kicks in after one minute. Verify that it works by leaving your device alone for exactly that time (yeah, I know, it's painful). The screenshots below show macOS, but you can of course try this on your mobile Android device or any other supported desktop platform.

Set a screen wake lock

Now that you know that your screensaver is working, you'll use a wake lock of type "screen" to prevent the screensaver from doing its job. Head over to the Wake Lock Demo App and click the "Start" button on the right.

Starting from that moment, a wake lock is active and you should see a countup.

If you're again patient enough to leave your device untouched for a minute, you will now see that the screensaver indeed didn't kick in and the countup will happily continue counting beyond 60s.

So how does this work? In order to find out, head over to the Glitch project for the Wake Lock Demo App and check out wakelock.js. The gist of the code is in the snippet below. Open a new tab (or use any random tab that you happen to have open) and copy and paste the code below in a Chrome Developer Tools console. You should see a wake lock that's active for exactly 70s (see the console logs), and your screensaver shouldn't kick in.

(async () => {
  if ('getWakeLock' in navigator) {
    try {
      let wakeLock = await navigator.getWakeLock('screen');
      wakeLock.addEventListener('activechange', (e) => {
        console.log(e.target);
      });
      let wakeLockRequest = wakeLock.createRequest();
      setTimeout(() => {
        wakeLockRequest.cancel();
        wakeLockRequest = null;
        return;
      }, 70 * 1000); 
    } catch (e) {
      console.error(e);
    }
  }
})();

There's another type of wake lock that you can test: if you request a type "system" wake lock, it prevents your device's CPU from entering standby mode so that your app can continue running. Note that the screen may turn off while a "system" wake lock is active.

(async () => {
  if ('getWakeLock' in navigator) {
    try {
      let wakeLock = await navigator.getWakeLock('system');
      wakeLock.addEventListener('activechange', (e) => {
        console.log(e.target);
      });
      let wakeLockRequest = wakeLock.createRequest();
      setTimeout(() => {
        wakeLockRequest.cancel();
        wakeLockRequest = null;
        return;
      }, 70 * 1000); 
    } catch (e) {
      console.error(e);
    }
  }
})();
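
One practical refinement, sketched below with the same pre-standard API shape used above, is to release the screen wake lock while the tab is hidden and re-acquire it when the tab becomes visible again, so the lock isn't held needlessly:

let wakeLockRequest = null;

document.addEventListener('visibilitychange', async () => {
  if (document.hidden) {
    // The tab is no longer visible, so let the screen lock go.
    if (wakeLockRequest) {
      wakeLockRequest.cancel();
      wakeLockRequest = null;
    }
  } else if ('getWakeLock' in navigator) {
    // The tab became visible again, so re-acquire the screen wake lock.
    const wakeLock = await navigator.getWakeLock('screen');
    wakeLockRequest = wakeLock.createRequest();
  }
});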

Feedback

What did you think of this API? Please help us by briefly responding to this survey:

Was this API intuitive to use?

Yes No

Did you get the example to run?

Yes No

Got more to say? Were there missing features? Please provide quick feedback in this survey. Thank you!

Congratulations, you've made it to the end of the codelab. Again, a kind reminder that most of these APIs are still in flux and actively being worked on. Therefore, the team really appreciates your feedback, as only interaction with people like you will help us get these APIs right.

I also encourage you to have a frequent look at our Capabilities landing page; we will keep it up to date, and it has pointers to all the in-depth articles for the APIs we work on. Keep rockin'!

Tom and the entire Capabilities team 🐡