Real time communication with WebRTC

1. Introduction

WebRTC is an open source project that enables realtime communication of audio, video, and data in web and native apps.

WebRTC has several JavaScript APIs; in this codelab you'll work with getUserMedia(), RTCPeerConnection, and RTCDataChannel.

Where can I use WebRTC?

In Firefox, Opera, and Chrome on desktop and Android. WebRTC is also available for native apps on iOS and Android.

What is signaling?

WebRTC uses RTCPeerConnection to communicate streaming data between browsers, but also needs a mechanism to coordinate communication and to send control messages, a process known as signaling. Signaling methods and protocols are not specified by WebRTC. In this codelab you will use Socket.IO for messaging, but there are many alternatives.

What are STUN and TURN?

WebRTC is designed to work peer-to-peer, so users can connect by the most direct route possible. However, WebRTC is built to cope with real-world networking: client applications need to traverse NAT gateways and firewalls, and peer to peer networking needs fallbacks in case direct connection fails. As part of this process, the WebRTC APIs use STUN servers to get the IP address of your computer, and TURN servers to function as relay servers in case peer-to-peer communication fails. (WebRTC in the real world explains in more detail.)
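As a sketch, an app points RTCPeerConnection at STUN and TURN servers through the iceServers field of its configuration object. The STUN URL below is Google's public test server; the TURN entry is a hypothetical placeholder, since a real TURN server needs your own deployment and credentials:

```javascript
// ICE server configuration, passed as new RTCPeerConnection(config).
// The STUN URL is a public Google test server; the TURN entry is a
// made-up placeholder - substitute your own server and credentials.
const config = {
  iceServers: [
    {urls: 'stun:stun.l.google.com:19302'},
    {
      urls: 'turn:turn.example.org:3478',  // hypothetical TURN server
      username: 'webrtc',
      credential: 'secret'
    }
  ]
};
```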

Is WebRTC secure?

Encryption is mandatory for all WebRTC components, and its JavaScript APIs can only be used from secure origins (HTTPS or localhost). Signaling mechanisms aren't defined by WebRTC standards, so it's up to you to make sure you use secure protocols.

2. Overview

Build an app to get video and take snapshots with your webcam and share them peer-to-peer via WebRTC. Along the way you'll learn how to use the core WebRTC APIs and set up a messaging server using Node.js.

What you'll learn

  • Get video from your webcam
  • Stream video with RTCPeerConnection
  • Stream data with RTCDataChannel
  • Set up a signaling service to exchange messages
  • Combine peer connection and signaling
  • Take a photo and share it via a data channel

What you'll need

  • Chrome 47 or above
  • Web Server for Chrome, or your own web server of choice
  • The sample code
  • A text editor
  • Basic knowledge of HTML, CSS and JavaScript

3. Get the sample code

Download the code

If you're familiar with git, you can download the code for this codelab from GitHub by cloning it:

git clone https://github.com/googlecodelabs/webrtc-web

Alternatively, you can download the code as a .zip file from the GitHub repository.

Open the downloaded zip file. This will unpack a project folder (webrtc-web) that contains one folder for each step of this codelab, along with all of the resources you will need.

You'll be doing all your coding work in the directory named work.

The step-nn folders contain a finished version for each step of this codelab. They are there for reference.

Install and verify web server

While you're free to use your own web server, this codelab is designed to work well with the Web Server for Chrome app. If you don't have that app installed yet, you can install it from the Chrome Web Store.

After installing the Web Server for Chrome app, click the Chrome Apps shortcut from the bookmarks bar, a New Tab page, or the App Launcher.

Click on the Web Server icon.

Next, you'll see a dialog that allows you to configure your local web server.

Click the CHOOSE FOLDER button, and select the work folder you just created. This will enable you to view your work in progress in Chrome via the URL highlighted in the Web Server dialog in the Web Server URL(s) section.

Under Options, check the box next to Automatically show index.html.

Then stop and restart the server by sliding the toggle labeled Web Server: STARTED to the left and then back to the right.

Now visit your work site in your web browser by clicking the highlighted Web Server URL. You should see a minimal page, which corresponds to work/index.html.

Obviously, this app is not yet doing anything interesting — so far, it's just a minimal skeleton we're using to make sure your web server is working properly. You'll add functionality and layout features in subsequent steps.

4. Stream video from your webcam

What you'll learn

In this step you'll find out how to:

  • Get a video stream from your webcam.
  • Manipulate stream playback.
  • Use CSS and SVG to manipulate video.

A complete version of this step is in the step-01 folder.

A dash of HTML...

Add a video element and a script element to index.html in your work directory:

<!DOCTYPE html>
<html>

<head>

  <title>Realtime communication with WebRTC</title>

  <link rel="stylesheet" href="css/main.css" />

</head>

<body>

  <h1>Realtime communication with WebRTC</h1>

  <video autoplay playsinline></video>

  <script src="js/main.js"></script>

</body>

</html>

...and a pinch of JavaScript

Add the following to main.js in your js folder:

'use strict';

// In this codelab, you will be streaming only video (video: true).
const mediaStreamConstraints = {
  video: true,
};

// Video element where stream will be placed.
const localVideo = document.querySelector('video');

// Local stream that will be displayed in the video element.
let localStream;

// Handles success by adding the MediaStream to the video element.
function gotLocalMediaStream(mediaStream) {
  localStream = mediaStream;
  localVideo.srcObject = mediaStream;
}

// Handles error by logging a message to the console with the error message.
function handleLocalMediaStreamError(error) {
  console.log('navigator.getUserMedia error: ', error);
}

// Initializes media stream.
navigator.mediaDevices.getUserMedia(mediaStreamConstraints)
  .then(gotLocalMediaStream).catch(handleLocalMediaStreamError);

Try it out

Open index.html in your browser. You should see the view from your webcam displayed on the page.

How it works

Following the getUserMedia() call, the browser requests permission from the user to access their camera (if this is the first time camera access has been requested for the current origin). If successful, a MediaStream is returned, which can be used by a media element via the srcObject attribute:

navigator.mediaDevices.getUserMedia(mediaStreamConstraints)
  .then(gotLocalMediaStream).catch(handleLocalMediaStreamError);

function gotLocalMediaStream(mediaStream) {
  localVideo.srcObject = mediaStream;
}

The constraints argument allows you to specify what media to get. In this example, video only: audio is not captured because the constraints don't request it:

const mediaStreamConstraints = {
  video: true,
};

You can use constraints for additional requirements such as video resolution:

const hdConstraints = {
  video: {
    width: {min: 1280},
    height: {min: 720}
  }
};

The MediaTrackConstraints specification lists all potential constraint types, though not all options are supported by all browsers. If the resolution requested isn't supported by the currently selected camera, the promise returned by getUserMedia() is rejected with an OverconstrainedError, and the user is not prompted to give permission to access their camera.
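One way to handle this, sketched below with a hypothetical helper (not part of the codelab code), is to retry with looser constraints when the strict request is rejected:

```javascript
// Hypothetical fallback strategy: if strict HD constraints are rejected
// with OverconstrainedError, retry getUserMedia() with plain video.
const hdConstraints = {video: {width: {min: 1280}, height: {min: 720}}};
const fallbackConstraints = {video: true};

function nextConstraints(error, current) {
  // Only retry for OverconstrainedError; other failures (for example,
  // NotAllowedError when the user denies permission) won't be fixed
  // by relaxing the resolution.
  if (error.name === 'OverconstrainedError' && current === hdConstraints) {
    return fallbackConstraints;
  }
  return null; // give up
}
```

In the browser, you would call a helper like this from the getUserMedia() .catch() handler and, if it returns constraints, call getUserMedia() again with them.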

If getUserMedia() is successful, the video stream from the webcam is set as the source of the video element:

function gotLocalMediaStream(mediaStream) {
  localVideo.srcObject = mediaStream;
}

Bonus points

  • The localStream object set by gotLocalMediaStream() is in global scope, so you can inspect it from the browser console: open the console, type localStream and press Return. (To view the console in Chrome, press Ctrl-Shift-J, or Command-Option-J if you're on a Mac.)
  • What does localStream.getVideoTracks() return?
  • Try calling localStream.getVideoTracks()[0].stop().
  • Look at the constraints object: what happens when you change it to {audio: true, video: true}?
  • What size is the video element? How can you get the video's natural size from JavaScript, as opposed to display size? Use the Chrome Dev Tools to check.
  • Try adding CSS filters to the video element. For example:
video {
  filter: blur(4px) invert(1) opacity(0.5);
}
  • Try adding SVG filters, referenced from CSS with filter: url(). For example, assuming an SVG <filter id="my-filter"> element is defined in the page:
video {
  filter: url('#my-filter');
}

What you learned

In this step you learned how to:

  • Get video from your webcam.
  • Set media constraints.
  • Mess with the video element.

A complete version of this step is in the step-01 folder.

Tips

  • Don't forget the autoplay attribute on the video element. Without that, you'll only see a single frame!
  • There are lots more options for getUserMedia() constraints. Take a look at the demo at webrtc.github.io/samples/src/content/peerconnection/constraints. As you'll see, there are lots of interesting WebRTC samples on that site.

Best practice

  • Make sure your video element doesn't overflow its container. We've added width and max-width to set a preferred size and a maximum size for the video. The browser will calculate the height automatically:
video {
  max-width: 100%;
  width: 320px;
}

Next up

You've got video, but how do you stream it? Find out in the next step!

5. Stream video with RTCPeerConnection

What you'll learn

In this step you'll find out how to:

  • Abstract away browser differences with the WebRTC shim, adapter.js.
  • Use the RTCPeerConnection API to stream video.
  • Control media capture and streaming.

A complete version of this step is in the step-02 folder.

What is RTCPeerConnection?

RTCPeerConnection is an API for making WebRTC calls to stream video and audio, and exchange data.

This example sets up a connection between two RTCPeerConnection objects (known as peers) on the same page.

Not much practical use, but good for understanding how RTCPeerConnection works.

Add video elements and control buttons

In index.html replace the single video element with two video elements and three buttons:

<video id="localVideo" autoplay playsinline></video>
<video id="remoteVideo" autoplay playsinline></video>


<div>
  <button id="startButton">Start</button>
  <button id="callButton">Call</button>
  <button id="hangupButton">Hang Up</button>
</div>

One video element will display the stream from getUserMedia() and the other will show the same video streamed via RTCPeerConnection. (In a real-world application, one video element would display the local stream and the other the remote stream.)

Add the adapter.js shim

Add a link to the current version of adapter.js above the link to main.js:

<script src="https://webrtc.github.io/adapter/adapter-latest.js"></script>

index.html should now look like this:

<!DOCTYPE html>
<html>

<head>
  <title>Realtime communication with WebRTC</title>
  <link rel="stylesheet" href="css/main.css" />
</head>

<body>
  <h1>Realtime communication with WebRTC</h1>

  <video id="localVideo" autoplay playsinline></video>
  <video id="remoteVideo" autoplay playsinline></video>

  <div>
    <button id="startButton">Start</button>
    <button id="callButton">Call</button>
    <button id="hangupButton">Hang Up</button>
  </div>

  <script src="https://webrtc.github.io/adapter/adapter-latest.js"></script>
  <script src="js/main.js"></script>
</body>
</html>

Install the RTCPeerConnection code

Replace main.js with the version in the step-02 folder.

Make the call

Open index.html, click the Start button to get video from your webcam, and click Call to make the peer connection. You should see the same video (from your webcam) in both video elements. View the browser console to see WebRTC logging.

How it works

This step does a lot...

WebRTC uses the RTCPeerConnection API to set up a connection to stream video between WebRTC clients, known as peers.

In this example, the two RTCPeerConnection objects are on the same page: pc1 and pc2. Not much practical use, but good for demonstrating how the APIs work.

Setting up a call between WebRTC peers involves three tasks:

  • Create an RTCPeerConnection for each end of the call and, at each end, add the local stream from getUserMedia().
  • Get and share network information: potential connection endpoints are known as ICE candidates.
  • Get and share local and remote descriptions: metadata about local media in SDP format.

Imagine that Alice and Bob want to use RTCPeerConnection to set up a video chat.

First up, Alice and Bob exchange network information. The expression 'finding candidates' refers to the process of finding network interfaces and ports using the ICE framework.

  1. Alice creates an RTCPeerConnection object with an onicecandidate (addEventListener('icecandidate')) handler. This corresponds to the following code from main.js:
let localPeerConnection;
localPeerConnection = new RTCPeerConnection(servers);
localPeerConnection.addEventListener('icecandidate', handleConnection);
localPeerConnection.addEventListener(
    'iceconnectionstatechange', handleConnectionChange);
  2. Alice calls getUserMedia() and adds the stream passed to her peer connection:
navigator.mediaDevices.getUserMedia(mediaStreamConstraints).
  then(gotLocalMediaStream).
  catch(handleLocalMediaStreamError);
function gotLocalMediaStream(mediaStream) {
  localVideo.srcObject = mediaStream;
  localStream = mediaStream;
  trace('Received local stream.');
  callButton.disabled = false;  // Enable call button.
}
localPeerConnection.addStream(localStream);
trace('Added local stream to localPeerConnection.');
  3. The onicecandidate handler from step 1 is called when network candidates become available.
  4. Alice sends serialized candidate data to Bob. In a real application, this process (known as signaling) takes place via a messaging service – you'll learn how to do that in a later step. Of course, in this step, the two RTCPeerConnection objects are on the same page and can communicate directly with no need for external messaging.
  5. When Bob gets a candidate message from Alice, he calls addIceCandidate() to add the candidate to the remote peer description:
function handleConnection(event) {
  const peerConnection = event.target;
  const iceCandidate = event.candidate;

  if (iceCandidate) {
    const newIceCandidate = new RTCIceCandidate(iceCandidate);
    const otherPeer = getOtherPeer(peerConnection);

    otherPeer.addIceCandidate(newIceCandidate)
      .then(() => {
        handleConnectionSuccess(peerConnection);
      }).catch((error) => {
        handleConnectionFailure(peerConnection, error);
      });

    trace(`${getPeerName(peerConnection)} ICE candidate:\n` +
          `${event.candidate.candidate}.`);
  }
}
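In a real application, the candidate would be serialized and handed to the signaling channel instead of being passed to the other peer directly. A minimal sketch, where socket stands in for any messaging transport (such as Socket.IO) and the message shape is illustrative:

```javascript
// Sketch: forward a new ICE candidate over a signaling channel.
// 'socket' is any object with an emit(event, message) method.
function sendCandidate(socket, event) {
  if (event.candidate) {
    socket.emit('message', {
      type: 'candidate',
      sdpMid: event.candidate.sdpMid,
      sdpMLineIndex: event.candidate.sdpMLineIndex,
      candidate: event.candidate.candidate
    });
  } // a null candidate signals the end of candidate gathering
}
```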

WebRTC peers also need to find out and exchange local and remote audio and video media information, such as resolution and codec capabilities. Signaling to exchange media configuration information proceeds by exchanging blobs of metadata, known as an offer and an answer, using the Session Description Protocol format, known as SDP:

  1. Alice runs the RTCPeerConnection createOffer() method. The promise returned provides an RTCSessionDescription: Alice's local session description:
trace('localPeerConnection createOffer start.');
localPeerConnection.createOffer(offerOptions)
  .then(createdOffer).catch(setSessionDescriptionError);
  2. If successful, Alice sets the local description using setLocalDescription() and then sends this session description to Bob via their signaling channel.
  3. Bob sets the description Alice sent him as the remote description using setRemoteDescription().
  4. Bob runs the RTCPeerConnection createAnswer() method, passing it the remote description he got from Alice, so a local session can be generated that is compatible with hers. The createAnswer() promise passes on an RTCSessionDescription: Bob sets that as the local description and sends it to Alice.
  5. When Alice gets Bob's session description, she sets that as the remote description with setRemoteDescription().
// Logs offer creation and sets peer connection session descriptions.
function createdOffer(description) {
  trace(`Offer from localPeerConnection:\n${description.sdp}`);

  trace('localPeerConnection setLocalDescription start.');
  localPeerConnection.setLocalDescription(description)
    .then(() => {
      setLocalDescriptionSuccess(localPeerConnection);
    }).catch(setSessionDescriptionError);

  trace('remotePeerConnection setRemoteDescription start.');
  remotePeerConnection.setRemoteDescription(description)
    .then(() => {
      setRemoteDescriptionSuccess(remotePeerConnection);
    }).catch(setSessionDescriptionError);

  trace('remotePeerConnection createAnswer start.');
  remotePeerConnection.createAnswer()
    .then(createdAnswer)
    .catch(setSessionDescriptionError);
}

// Logs answer to offer creation and sets peer connection session descriptions.
function createdAnswer(description) {
  trace(`Answer from remotePeerConnection:\n${description.sdp}.`);

  trace('remotePeerConnection setLocalDescription start.');
  remotePeerConnection.setLocalDescription(description)
    .then(() => {
      setLocalDescriptionSuccess(remotePeerConnection);
    }).catch(setSessionDescriptionError);

  trace('localPeerConnection setRemoteDescription start.');
  localPeerConnection.setRemoteDescription(description)
    .then(() => {
      setRemoteDescriptionSuccess(localPeerConnection);
    }).catch(setSessionDescriptionError);
}
  6. Ping!
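The offer/answer sequence can be simulated outside the browser with plain objects standing in for RTCPeerConnection. The real methods return promises; this synchronous sketch just shows the order of operations and who sends what to whom:

```javascript
// Synchronous simulation of the offer/answer exchange. Each 'peer' is a
// plain object; in the browser these would be RTCPeerConnection calls.
function makePeer(name) {
  return {name, localDescription: null, remoteDescription: null};
}

const alice = makePeer('alice');
const bob = makePeer('bob');

// 1. Alice creates an offer and sets it as her local description.
const offer = {type: 'offer', sdp: 'v=0 ...'};    // stand-in SDP
alice.localDescription = offer;

// 2. The offer travels to Bob via signaling; he sets it as remote.
bob.remoteDescription = offer;

// 3. Bob creates an answer and sets it as his local description.
const answer = {type: 'answer', sdp: 'v=0 ...'};  // stand-in SDP
bob.localDescription = answer;

// 4. The answer travels back to Alice; she sets it as remote.
alice.remoteDescription = answer;

// Both peers now hold a matching pair of session descriptions.
console.log(alice.remoteDescription.type); // 'answer'
console.log(bob.remoteDescription.type);   // 'offer'
```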

Bonus points

  1. Take a look at chrome://webrtc-internals. This provides WebRTC stats and debugging data. (A full list of Chrome URLs is at chrome://about.)
  2. Style the page with CSS:
  • Put the videos side by side.
  • Make the buttons the same width, with bigger text.
  • Make sure the layout works on mobile.
  3. From the Chrome Dev Tools console, look at localStream, localPeerConnection and remotePeerConnection.
  4. From the console, look at localPeerConnection.localDescription. What does SDP format look like?

What you learned

In this step you learned how to:

  • Abstract away browser differences with the WebRTC shim, adapter.js.
  • Use the RTCPeerConnection API to stream video.
  • Control media capture and streaming.
  • Share media and network information between peers to enable a WebRTC call.

A complete version of this step is in the step-02 folder.

Tips

  • There's a lot to learn in this step! To find other resources that explain RTCPeerConnection in more detail, take a look at webrtc.org. This page includes suggestions for JavaScript frameworks, in case you'd like to use WebRTC but don't want to wrangle the APIs directly.
  • Find out more about the adapter.js shim from the adapter.js GitHub repo.
  • Want to see what the world's best video chat app looks like? Take a look at AppRTC, the WebRTC project's canonical app for WebRTC calls: app, code. Call setup time is less than 500 ms.

Best practice

  • To future-proof your code, use the new Promise-based APIs and enable compatibility with browsers that don't support them by using adapter.js.

Next up

This step shows how to use WebRTC to stream video between peers — but this codelab is also about data!

In the next step find out how to stream arbitrary data using RTCDataChannel.

6. Use RTCDataChannel to exchange data

What you'll learn

  • How to exchange data between WebRTC endpoints (peers).

A complete version of this step is in the step-03 folder.

Update your HTML

For this step, you'll use WebRTC data channels to send text between two textarea elements on the same page. That's not very useful, but does demonstrate how WebRTC can be used to share data as well as streaming video.

Remove the video and button elements from index.html and replace them with the following HTML:

<textarea id="dataChannelSend" disabled
    placeholder="Press Start, enter some text, then press Send."></textarea>
<textarea id="dataChannelReceive" disabled></textarea>

<div id="buttons">
  <button id="startButton">Start</button>
  <button id="sendButton">Send</button>
  <button id="closeButton">Stop</button>
</div>

One textarea will be for entering text, the other will display the text as streamed between peers.

index.html should now look like this:

<!DOCTYPE html>
<html>

<head>

  <title>Realtime communication with WebRTC</title>

  <link rel="stylesheet" href="css/main.css" />

</head>

<body>

  <h1>Realtime communication with WebRTC</h1>

  <textarea id="dataChannelSend" disabled
    placeholder="Press Start, enter some text, then press Send."></textarea>
  <textarea id="dataChannelReceive" disabled></textarea>

  <div id="buttons">
    <button id="startButton">Start</button>
    <button id="sendButton">Send</button>
    <button id="closeButton">Stop</button>
  </div>

  <script src="https://webrtc.github.io/adapter/adapter-latest.js"></script>
  <script src="js/main.js"></script>

</body>

</html>

Update your JavaScript

Replace main.js with the contents of step-03/js/main.js.

Try out streaming data between peers: open index.html, press Start to set up the peer connection, enter some text in the textarea on the left, then click Send to transfer the text using WebRTC data channels.

How it works

This code uses RTCPeerConnection and RTCDataChannel to enable exchange of text messages.

Much of the code in this step is the same as for the RTCPeerConnection example.

The sendData() and createConnection() functions have most of the new code:

function createConnection() {
  dataChannelSend.placeholder = '';
  var servers = null;
  pcConstraint = null;
  dataConstraint = null;
  trace('Using SCTP based data channels');
  // For SCTP, reliable and ordered delivery is true by default.
  // Add localConnection to global scope to make it visible
  // from the browser console.
  window.localConnection = localConnection =
      new RTCPeerConnection(servers, pcConstraint);
  trace('Created local peer connection object localConnection');

  sendChannel = localConnection.createDataChannel('sendDataChannel',
      dataConstraint);
  trace('Created send data channel');

  localConnection.onicecandidate = iceCallback1;
  sendChannel.onopen = onSendChannelStateChange;
  sendChannel.onclose = onSendChannelStateChange;

  // Add remoteConnection to global scope to make it visible
  // from the browser console.
  window.remoteConnection = remoteConnection =
      new RTCPeerConnection(servers, pcConstraint);
  trace('Created remote peer connection object remoteConnection');

  remoteConnection.onicecandidate = iceCallback2;
  remoteConnection.ondatachannel = receiveChannelCallback;

  localConnection.createOffer().then(
    gotDescription1,
    onCreateSessionDescriptionError
  );
  startButton.disabled = true;
  closeButton.disabled = false;
}

function sendData() {
  var data = dataChannelSend.value;
  sendChannel.send(data);
  trace('Sent Data: ' + data);
}

The syntax of RTCDataChannel is deliberately similar to WebSocket, with a send() method and a message event.

Notice the use of dataConstraint. Data channels can be configured to enable different types of data sharing — for example, prioritizing reliable delivery over performance. You can find out more information about options at Mozilla Developer Network.

Bonus points

  1. With SCTP, the protocol used by WebRTC data channels, reliable and ordered data delivery is on by default. When might RTCDataChannel need to provide reliable delivery of data, and when might performance be more important — even if that means losing some data?
  2. Use CSS to improve page layout, and add a placeholder attribute to the "dataChannelReceive" textarea.
  3. Test the page on a mobile device.

What you learned

In this step you learned how to:

  • Establish a connection between two WebRTC peers.
  • Exchange text data between the peers.

A complete version of this step is in the step-03 folder.

Next up

You've learned how to exchange data between peers on the same page, but how do you do this between different machines? First, you need to set up a signaling channel to exchange metadata messages. Find out how in the next step!

7. Set up a signaling service to exchange messages

What you'll learn

In this step, you'll find out how to:

  • Use npm to install project dependencies as specified in package.json
  • Run a Node.js server and use node-static to serve static files.
  • Set up a messaging service on Node.js using Socket.IO.
  • Use that to create 'rooms' and exchange messages.

A complete version of this step is in the step-04 folder.

Concepts

In order to set up and maintain a WebRTC call, WebRTC clients (peers) need to exchange metadata:

  • Candidate (network) information.
  • Offer and answer messages providing information about media, such as resolution and codecs.

In other words, an exchange of metadata is required before peer-to-peer streaming of audio, video, or data can take place. This process is called signaling.

In the previous steps, the sender and receiver RTCPeerConnection objects are on the same page, so 'signaling' is simply a matter of passing metadata between objects.

In a real world application, the sender and receiver RTCPeerConnections run in web pages on different devices, and you need a way for them to communicate metadata.

For this, you use a signaling server: a server that can pass messages between WebRTC clients (peers). The actual messages are plain text: stringified JavaScript objects.
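For example, a candidate message might look like this on the wire (the field values are illustrative):

```javascript
// A signaling message is just a plain object, stringified for transport
// and parsed again on the receiving side.
const message = {
  type: 'candidate',
  label: 0,
  candidate: 'candidate:842163049 1 udp 1677729535 203.0.113.7 44323 typ srflx'
};

const wire = JSON.stringify(message); // what the server actually relays
const received = JSON.parse(wire);    // what the other peer reconstructs
console.log(received.type); // 'candidate'
```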

Prerequisite: Install Node.js

In order to run the next steps of this codelab (folders step-04 to step-06) you will need to run a server on localhost using Node.js.

You can download and install Node.js from this link or via your preferred package manager.

Once installed, you will be able to install the dependencies required for the next steps (by running npm install) and run a small localhost server to execute the codelab (by running node index.js). These commands are indicated later, when they are required.

About the app

WebRTC uses a client-side JavaScript API, but for real-world usage also requires a signaling (messaging) server, as well as STUN and TURN servers. You can find out more here.

In this step you'll build a simple Node.js signaling server, using the Socket.IO Node.js module and JavaScript library for messaging. Experience with Node.js and Socket.IO will be useful, but not crucial; the messaging components are very simple.

In this example, the server (the Node.js application) is implemented in index.js, and the client that runs on it (the web app) is implemented in index.html.

The Node.js application in this step has two tasks.

First, it acts as a message relay:

socket.on('message', function (message) {
  log('Got message: ', message);
  socket.broadcast.emit('message', message);
});

Second, it manages WebRTC video chat 'rooms':

if (numClients === 0) {
  socket.join(room);
  socket.emit('created', room, socket.id);
} else if (numClients === 1) {
  socket.join(room);
  socket.emit('joined', room, socket.id);
  io.sockets.in(room).emit('ready');
} else { // max two clients
  socket.emit('full', room);
}

Our simple WebRTC application will permit a maximum of two peers to share a room.
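The room logic above reduces to a small decision function. This pure-function sketch (a hypothetical helper, not part of the codelab code) makes the two-client limit explicit and easy to test:

```javascript
// Decide what the server should tell a socket asking to join a room,
// given how many clients are already in it.
function roomAction(numClients) {
  if (numClients === 0) return 'created'; // first client creates the room
  if (numClients === 1) return 'joined';  // second client joins it
  return 'full';                          // max two clients per room
}

console.log(roomAction(0)); // 'created'
console.log(roomAction(1)); // 'joined'
console.log(roomAction(2)); // 'full'
```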

HTML & JavaScript

Update index.html so it looks like this:

<!DOCTYPE html>
<html>

<head>

  <title>Realtime communication with WebRTC</title>

  <link rel="stylesheet" href="css/main.css" />

</head>

<body>

  <h1>Realtime communication with WebRTC</h1>

  <script src="/socket.io/socket.io.js"></script>
  <script src="js/main.js"></script>
  
</body>

</html>

You won't see anything on the page in this step: all logging is done to the browser console. (To view the console in Chrome, press Ctrl-Shift-J, or Command-Option-J if you're on a Mac.)

Replace js/main.js with the following:

'use strict';

var isInitiator;

window.room = prompt("Enter room name:");

var socket = io.connect();

if (room !== "") {
  console.log('Message from client: Asking to join room ' + room);
  socket.emit('create or join', room);
}

socket.on('created', function(room, clientId) {
  isInitiator = true;
});

socket.on('full', function(room) {
  console.log('Message from client: Room ' + room + ' is full :^(');
});

socket.on('ipaddr', function(ipaddr) {
  console.log('Message from client: Server IP address is ' + ipaddr);
});

socket.on('joined', function(room, clientId) {
  isInitiator = false;
});

socket.on('log', function(array) {
  console.log.apply(console, array);
});

Set up Socket.IO to run on Node.js

In the HTML file, you may have seen that you are using a Socket.IO file:

<script src="/socket.io/socket.io.js"></script>

At the top level of your work directory create a file named package.json with the following contents:

{
  "name": "webrtc-codelab",
  "version": "0.0.1",
  "description": "WebRTC codelab",
  "dependencies": {
    "node-static": "^0.7.10",
    "socket.io": "^1.2.0"
  }
}

This is an app manifest that tells Node Package Manager (npm) what project dependencies to install.

To install dependencies (such as /socket.io/socket.io.js), run the following from the command line terminal, in your work directory:

npm install

You should see an installation log confirming that npm has installed the dependencies defined in package.json.

Create a new file index.js at the top level of your work directory (not in the js directory) and add the following code:

'use strict';

var os = require('os');
var nodeStatic = require('node-static');
var http = require('http');
var socketIO = require('socket.io');

var fileServer = new(nodeStatic.Server)();
var app = http.createServer(function(req, res) {
  fileServer.serve(req, res);
}).listen(8080);

var io = socketIO.listen(app);
io.sockets.on('connection', function(socket) {

  // convenience function to log server messages on the client
  function log() {
    var array = ['Message from server:'];
    array.push.apply(array, arguments);
    socket.emit('log', array);
  }

  socket.on('message', function(message) {
    log('Client said: ', message);
    // for a real app, would be room-only (not broadcast)
    socket.broadcast.emit('message', message);
  });

  socket.on('create or join', function(room) {
    log('Received request to create or join room ' + room);

    var clientsInRoom = io.sockets.adapter.rooms[room];
    var numClients = clientsInRoom ? Object.keys(clientsInRoom.sockets).length : 0;

    log('Room ' + room + ' now has ' + numClients + ' client(s)');

    if (numClients === 0) {
      socket.join(room);
      log('Client ID ' + socket.id + ' created room ' + room);
      socket.emit('created', room, socket.id);

    } else if (numClients === 1) {
      log('Client ID ' + socket.id + ' joined room ' + room);
      io.sockets.in(room).emit('join', room);
      socket.join(room);
      socket.emit('joined', room, socket.id);
      io.sockets.in(room).emit('ready');
    } else { // max two clients
      socket.emit('full', room);
    }
  });

  socket.on('ipaddr', function() {
    var ifaces = os.networkInterfaces();
    for (var dev in ifaces) {
      ifaces[dev].forEach(function(details) {
        if (details.family === 'IPv4' && details.address !== '127.0.0.1') {
          socket.emit('ipaddr', details.address);
        }
      });
    }
  });

});

From the command line terminal, run the following command in the work directory:

node index.js

From your browser, open localhost:8080.

Each time you open this URL, you will be prompted to enter a room name. To join the same room, choose the same room name each time, such as 'foo'.

Open a new tab page, and open localhost:8080 again. Choose the same room name.

Open localhost:8080 in a third tab or window. Choose the same room name again.

Check the console in each of the tabs: you should see the logging from the JavaScript above.

Bonus points

  1. What alternative messaging mechanisms might be possible? What problems might you encounter using 'pure' WebSocket?
  2. What issues might be involved with scaling this application? Can you develop a method for testing thousands or millions of simultaneous room requests?
  3. This app uses a JavaScript prompt to get a room name. Work out a way to get the room name from the URL. For example localhost:8080/foo would give the room name foo.
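For bonus point 3, one possible approach (an illustrative sketch, not the codelab's solution) is to read the first path segment of the page URL:

```javascript
// Illustrative sketch: derive the room name from the page URL, so that
// localhost:8080/foo yields the room 'foo'. In the browser you would
// call this with window.location.href.
function roomFromUrl(href) {
  var path = new URL(href).pathname;          // e.g. '/foo'
  var segment = path.split('/')[1] || '';     // 'foo', or '' for '/'
  return decodeURIComponent(segment) || null; // null: fall back to prompt()
}
```

Note that this also requires the Node.js server to answer paths like /foo with index.html (for example via a catch-all route), which this sketch does not show.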

What you learned

In this step, you learned how to:

  • Use npm to install project dependencies as specified in package.json.
  • Run a Node.js server to serve static files.
  • Set up a messaging service on Node.js using Socket.IO.
  • Use that to create 'rooms' and exchange messages.

A complete version of this step is in the step-04 folder.

Next up

Find out how to use signaling to enable two users to make a peer connection.

8. Combine peer connection and signaling

What you'll learn

In this step you'll find out how to:

  • Run a WebRTC signaling service using Socket.IO running on Node.js
  • Use that service to exchange WebRTC metadata between peers.

A complete version of this step is in the step-05 folder.

Replace HTML and JavaScript

Replace the contents of index.html with the following:

<!DOCTYPE html>
<html>

<head>

  <title>Realtime communication with WebRTC</title>

  <link rel="stylesheet" href="/css/main.css" />

</head>

<body>

  <h1>Realtime communication with WebRTC</h1>

  <div id="videos">
    <video id="localVideo" autoplay muted></video>
    <video id="remoteVideo" autoplay></video>
  </div>

  <script src="/socket.io/socket.io.js"></script>
  <script src="https://webrtc.github.io/adapter/adapter-latest.js"></script>
  <script src="js/main.js"></script>
  
</body>

</html>

Replace js/main.js with the contents of step-05/js/main.js.

Run the Node.js server

If you are not following this codelab from your work directory, you may need to install the dependencies for the step-05 folder or your current working folder. Run the following command from your working directory:

npm install

Once installed, if your Node.js server is not running, start it by calling the following command in the work directory:

node index.js

Make sure you're using the version of index.js from the previous step that implements Socket.IO. For more information on Node.js and Socket.IO, review the section "Set up a signaling service to exchange messages".

From your browser, open localhost:8080.

Open localhost:8080 again, in a new tab or window. One video element will display the local stream from getUserMedia() and the other will show the 'remote' video streamed via RTCPeerConnection.

View logging in the browser console.

Bonus points

  1. This application supports only one-to-one video chat. How might you change the design to enable more than one person to share the same video chat room?
  2. The example has the room name foo hard coded. What would be the best way to enable other room names?
  3. How would users share the room name? Try to build an alternative to sharing room names.
  4. How could you change the app

What you learned

In this step you learned how to:

  • Run a WebRTC signaling service using Socket.IO running on Node.js.
  • Use that service to exchange WebRTC metadata between peers.

A complete version of this step is in the step-05 folder.

Tips

  • WebRTC stats and debug data are available from chrome://webrtc-internals.
  • test.webrtc.org can be used to check your local environment and test your camera and microphone.
  • If you have caching problems, try the following:
      • Do a hard refresh by holding down ctrl and clicking the Reload button.
      • Restart the browser.
      • Run npm cache clean from the command line.

Next up

Find out how to take a photo, get the image data, and share that between remote peers.

9. Take a photo and share it via a data channel

What you'll learn

In this step you'll learn how to:

  • Take a photo and get the data from it using the canvas element.
  • Exchange image data with a remote user.

A complete version of this step is in the step-06 folder.

How it works

Previously you learned how to exchange text messages using RTCDataChannel.

This step makes it possible to share entire files: in this example, photos captured via getUserMedia().

The core parts of this step are as follows:

  1. Establish a data channel. Note that you don't add any media streams to the peer connection in this step.
  2. Capture the user's webcam video stream with getUserMedia():
var video = document.getElementById('video');

function grabWebCamVideo() {
  console.log('Getting user media (video) ...');
  navigator.mediaDevices.getUserMedia({
    video: true
  })
  .then(gotStream)
  .catch(function(e) {
    alert('getUserMedia() error: ' + e.name);
  });
}
  3. When the user clicks the Snap button, get a snapshot (a video frame) from the video stream and display it in a canvas element:
var photo = document.getElementById('photo');
var photoContext = photo.getContext('2d');

function snapPhoto() {
  photoContext.drawImage(video, 0, 0, photo.width, photo.height);
  show(photo, sendBtn);
}
  4. When the user clicks the Send button, convert the image to bytes and send them via a data channel:
function sendPhoto() {
  // Split data channel message in chunks of this byte length.
  var CHUNK_LEN = 64000;
  var img = photoContext.getImageData(0, 0, photoContextW, photoContextH),
    len = img.data.byteLength,
    n = len / CHUNK_LEN | 0;

  console.log('Sending a total of ' + len + ' byte(s)');
  dataChannel.send(len);

  // split the photo and send in chunks of about 64KB
  for (var i = 0; i < n; i++) {
    var start = i * CHUNK_LEN,
      end = (i + 1) * CHUNK_LEN;
    console.log(start + ' - ' + (end - 1));
    dataChannel.send(img.data.subarray(start, end));
  }

  // send the remainder, if any
  if (len % CHUNK_LEN) {
    console.log('last ' + len % CHUNK_LEN + ' byte(s)');
    dataChannel.send(img.data.subarray(n * CHUNK_LEN));
  }
}
  5. The receiving side converts data channel message bytes back to an image and displays the image to the user:
function receiveDataChromeFactory() {
  var buf, count;

  return function onmessage(event) {
    if (typeof event.data === 'string') {
      buf = window.buf = new Uint8ClampedArray(parseInt(event.data));
      count = 0;
      console.log('Expecting a total of ' + buf.byteLength + ' bytes');
      return;
    }

    var data = new Uint8ClampedArray(event.data);
    buf.set(data, count);

    count += data.byteLength;
    console.log('count: ' + count);

    if (count === buf.byteLength) {
      // we're done: all data chunks have been received
      console.log('Done. Rendering photo.');
      renderPhoto(buf);
    }
  };
}

function renderPhoto(data) {
  var canvas = document.createElement('canvas');
  canvas.width = photoContextW;
  canvas.height = photoContextH;
  canvas.classList.add('incomingPhoto');
  // trail is the element holding the incoming images
  trail.insertBefore(canvas, trail.firstChild);

  var context = canvas.getContext('2d');
  var img = context.createImageData(photoContextW, photoContextH);
  img.data.set(data);
  context.putImageData(img, 0, 0);
}
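The chunking scheme above can be sketched end-to-end as a pair of pure functions (an illustrative round trip written for this explanation, not the codelab's code; CHUNK_LEN matches the value used in sendPhoto()):

```javascript
// Illustrative sketch of the wire format used above: the sender
// announces the total byte length, then sends chunks of at most 64000
// bytes; the receiver allocates a buffer of the announced length and
// copies each chunk in at a running offset.
var CHUNK_LEN = 64000;

// Split a typed array into CHUNK_LEN-sized views (the last one may be
// shorter), mirroring the loop plus remainder logic in sendPhoto().
function chunkify(bytes) {
  var chunks = [];
  for (var start = 0; start < bytes.byteLength; start += CHUNK_LEN) {
    chunks.push(bytes.subarray(start, start + CHUNK_LEN));
  }
  return chunks;
}

// Rebuild the original bytes from the announced length and the chunks,
// mirroring receiveDataChromeFactory().
function reassemble(totalLength, chunks) {
  var buf = new Uint8ClampedArray(totalLength);
  var offset = 0;
  chunks.forEach(function(chunk) {
    buf.set(chunk, offset);
    offset += chunk.byteLength;
  });
  return buf;
}
```

A 150000-byte image, for example, travels as two 64000-byte chunks plus a 22000-byte remainder, and the receiver knows it is done when the running count reaches the announced total.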

Get the code

Replace the contents of your work folder with the contents of step-06. Your index.html file in work should now look like this:

<!DOCTYPE html>
<html>

<head>

  <title>Realtime communication with WebRTC</title>

  <link rel="stylesheet" href="/css/main.css" />

</head>

<body>

  <h1>Realtime communication with WebRTC</h1>

  <h2>
    <span>Room URL: </span><span id="url">...</span>
  </h2>

  <div id="videoCanvas">
    <video id="camera" autoplay></video>
    <canvas id="photo"></canvas>
  </div>

  <div id="buttons">
    <button id="snap">Snap</button><span> then </span><button id="send">Send</button>
    <span> or </span>
    <button id="snapAndSend">Snap &amp; Send</button>
  </div>

  <div id="incoming">
    <h2>Incoming photos</h2>
    <div id="trail"></div>
  </div>

  <script src="/socket.io/socket.io.js"></script>
  <script src="https://webrtc.github.io/adapter/adapter-latest.js"></script>
  <script src="js/main.js"></script>

</body>

</html>

If you are not following this codelab from your work directory, you may need to install the dependencies for the step-06 folder or your current working folder. Simply run the following command from your working directory:

npm install

Once installed, if your Node.js server is not running, start it by calling the following command from your work directory:

node index.js

Make sure you're using the version of index.js that implements Socket.IO, and remember to restart your Node.js server if you make changes. For more information on Node.js and Socket.IO, review the section "Set up a signaling service to exchange messages".

If necessary, click on the Allow button to allow the app to use your webcam.

The app will create a random room ID and add that ID to the URL. Open the URL from the address bar in a new browser tab or window.

Click the Snap & Send button and then look at the Incoming photos area at the bottom of the page in the other tab. The app transfers photos between tabs.


Bonus points

  1. How can you change the code to make it possible to share any file type?

What you learned

  • How to take a photo and get the data from it using the canvas element.
  • How to exchange that data with a remote user.

A complete version of this step is in the step-06 folder.

10. Congratulations

You built an app to do realtime video streaming and data exchange!

What you learned

In this codelab you learned how to:

  • Get video from your webcam.
  • Stream video with RTCPeerConnection.
  • Stream data with RTCDataChannel.
  • Set up a signaling service to exchange messages.
  • Combine peer connection and signaling.
  • Take a photo and share it via a data channel.

Next steps

Learn more

  • A range of resources for getting started with WebRTC are available from webrtc.org.