Create a video scene-by-scene image description service using Cloud Run, Video Intelligence API, and Vertex AI

1. Introduction


In this codelab, you'll create a Cloud Run service written in Node.js that provides a visual description of every scene in a video. First, your service will use the Video Intelligence API to detect the timestamps at which the scene changes. Next, your service will use a third-party binary called ffmpeg to capture a screenshot for each scene-change timestamp. Lastly, your service will use Vertex AI visual captioning to provide a visual description of each screenshot.

This codelab also demonstrates how to use ffmpeg within your Cloud Run service to capture images from a video at a given timestamp. Since ffmpeg needs to be installed independently, this codelab shows you how to create a Dockerfile that installs ffmpeg as part of your Cloud Run service.

Here is an illustration of how the Cloud Run service works:

Cloud Run Video Description Service diagram

What you'll learn

  • How to create a container image using a Dockerfile to install a third-party binary
  • How to follow the principle of least privilege by creating a service account for the Cloud Run service to call other Google Cloud services
  • How to use the Video Intelligence client library from a Cloud Run service
  • How to make a call to Google APIs to get the visual description of each scene from Vertex AI

2. Setup and Requirements


Activate Cloud Shell

  1. From the Cloud Console, click Activate Cloud Shell.


If this is your first time starting Cloud Shell, you're presented with an intermediate screen describing what it is. If so, click Continue.


It should only take a few moments to provision and connect to Cloud Shell.


This virtual machine is loaded with all the development tools needed. It offers a persistent 5 GB home directory and runs in Google Cloud, greatly enhancing network performance and authentication. Much, if not all, of your work in this codelab can be done with a browser.

Once connected to Cloud Shell, you should see that you are authenticated and that the project is set to your project ID.

  1. Run the following command in Cloud Shell to confirm that you are authenticated:
gcloud auth list

Command output

 Credentialed Accounts
ACTIVE  ACCOUNT
*       <my_account>@<my_domain.com>

To set the active account, run:
    $ gcloud config set account `ACCOUNT`
  2. Run the following command in Cloud Shell to confirm that the gcloud command knows about your project:
gcloud config list project

Command output

project = <PROJECT_ID>

If the project is not set, you can set it with this command:

gcloud config set project <PROJECT_ID>

Command output

Updated property [core/project].

3. Enable APIs and Set Environment Variables

Before you can start using this codelab, there are several APIs you will need to enable: Cloud Run, Cloud Build, Video Intelligence, and Vertex AI. You can enable them by running the following command:

gcloud services enable \
  run.googleapis.com \
  cloudbuild.googleapis.com \
  videointelligence.googleapis.com \
  aiplatform.googleapis.com

Then you can set environment variables that will be used throughout this codelab. This codelab assumes the us-central1 region and a service named video-describer; adjust these values if you prefer different ones.


export PROJECT_ID=$(gcloud config get-value project)
export PROJECT_NUMBER=$(gcloud projects describe $PROJECT_ID --format='value(projectNumber)')
export REGION=us-central1
export SERVICE_NAME=video-describer
export BUCKET_ID=$PROJECT_ID-video-describer

4. Create a Cloud Storage bucket

Create a Cloud Storage bucket where you can upload videos for processing by the Cloud Run service with the following command:

gsutil mb -l us-central1 gs://$BUCKET_ID/

[Optional] If you don't have a video of your own, you can download this sample video to test with.

gsutil cp gs://cloud-samples-data/video/visionapi.mp4 testvideo.mp4

Now upload your video file to your storage bucket, saving its name in an environment variable (for example, testvideo.mp4 if you downloaded the sample above):

export FILENAME=testvideo.mp4
gsutil cp $FILENAME gs://$BUCKET_ID

5. Create the Node.js app

First, create a directory for the source code and cd into that directory.

mkdir video-describer && cd $_

Then, create a package.json file with the following content:

{
  "name": "video-describer",
  "version": "1.0.0",
  "private": true,
  "description": "describes the image in every scene for a given video",
  "main": "index.js",
  "author": "Google LLC",
  "license": "Apache-2.0",
  "scripts": {
    "start": "node index.js"
  },
  "dependencies": {
    "@google-cloud/storage": "^7.7.0",
    "@google-cloud/video-intelligence": "^5.0.1",
    "axios": "^1.6.2",
    "express": "^4.18.2",
    "fluent-ffmpeg": "^2.1.2",
    "google-auth-library": "^9.4.1"
  }
}

This app consists of several source files for improved readability. First, create an index.js source file with the content below. This file contains the entry point and the main logic for the service.

const { captureImages } = require('./imageCapture.js');
const { detectSceneChanges } = require('./sceneDetector.js');
const transcribeScene = require('./imageDescriber.js');
const { Storage } = require('@google-cloud/storage');
const fs = require('fs').promises;
const path = require('path');
const express = require('express');
const app = express();

const bucketName = process.env.BUCKET_ID;

const port = parseInt(process.env.PORT) || 8080;
app.listen(port, () => {
  console.log(`video describer service ready: listening on port ${port}`);
});

// entry point for the service
app.get('/', async (req, res) => {

  try {

    // download the requested video from Cloud Storage
    let videoFilename = req.query.filename;
    console.log("processing file: " + videoFilename);

    // download the file locally to the Cloud Run instance
    let localFilename = await downloadVideoFile(videoFilename);

    // detect all the scenes in the video & save timestamps to an array
    let timestamps = await detectSceneChanges(localFilename);
    console.log("Detected scene changes at the following timestamps: ", timestamps);

    // create an image of each scene change
    // and save to a local directory called "output"
    await captureImages(localFilename, timestamps);

    // get an access token for the Service Account to call the Google APIs
    let accessToken = await transcribeScene.getAccessToken();
    console.log("got an access token");

    let imageBaseName = path.parse(localFilename).name;

    // the data structure for storing the scene description and timestamp
    // e.g. an array of json objects {timestamp: 1, description: "..."}, etc.
    let scenes = [];

    // for each timestamp, send the image to Vertex AI
    console.log("getting Vertex AI descriptions for all the timestamps");
    scenes = await Promise.all(
      timestamps.map(async (timestamp) => {

        let filepath = path.join("./output", imageBaseName + "-" + timestamp + ".png");

        // get the base64 encoded image
        const encodedFile = await fs.readFile(filepath, 'base64');

        // send each screenshot to Vertex AI for description
        let description = await transcribeScene.transcribeScene(accessToken, encodedFile);

        return { timestamp: timestamp, description: description };
      })
    );

    console.log("finished collecting all the scenes");

    return res.json(scenes);

  } catch (error) {

    // return an error
    console.log("received error: ", error);
    return res.status(500).json("an internal error occurred");
  }
});

async function downloadVideoFile(videoFilename) {
  // Creates a client
  const storage = new Storage();

  // keep same name locally
  let localFilename = videoFilename;

  const options = {
    destination: localFilename
  };

  // Download the file
  await storage.bucket(bucketName).file(videoFilename).download(options);

  console.log(
    `gs://${bucketName}/${videoFilename} downloaded locally to ${localFilename}.`
  );

  return localFilename;
}

Next, create a sceneDetector.js file with the following content. This file uses the Video Intelligence API to detect when scenes change in the video.

const fs = require('fs');
const util = require('util');
const readFile = util.promisify(fs.readFile);
const ffmpeg = require('fluent-ffmpeg');

const Video = require('@google-cloud/video-intelligence');
const client = new Video.VideoIntelligenceServiceClient();

module.exports = {
    detectSceneChanges: async function (downloadedFile) {

        // Reads a local video file and converts it to base64
        const file = await readFile(downloadedFile);
        const inputContent = file.toString('base64');

        // setup request for shot change detection
        const request = {
            inputContent: inputContent,
            features: ['SHOT_CHANGE_DETECTION'],
        };

        // Detects camera shot changes
        const [operation] = await client.annotateVideo(request);
        console.log('Shot (scene) detection in progress...');
        const [operationResult] = await operation.promise();

        // Gets shot changes
        const shotChanges = operationResult.annotationResults[0].shotAnnotations;

        console.log("Shot (scene) changes detected: " + shotChanges.length);

        // data structure to be returned
        let sceneChanges = [];

        // for the initial scene
        sceneChanges.push(1);

        // if only one scene, keep at 1 second
        if (shotChanges.length === 1) {
            return sceneChanges;
        }

        // get length of video
        const videoLength = await getVideoLength(downloadedFile);

        shotChanges.forEach((shot) => {
            if (shot.endTimeOffset === undefined) {
                shot.endTimeOffset = {};
            }
            if (shot.endTimeOffset.seconds === undefined) {
                shot.endTimeOffset.seconds = 0;
            }
            if (shot.endTimeOffset.nanos === undefined) {
                shot.endTimeOffset.nanos = 0;
            }

            // convert to a number
            let currentTimestampSecond = Number(shot.endTimeOffset.seconds);

            let sceneChangeTime = 0;
            // double-check no scenes were detected within the last second
            if (currentTimestampSecond + 1 > videoLength) {
                sceneChangeTime = currentTimestampSecond;
            } else {
                // otherwise, for simplicity, just round up to the next second
                sceneChangeTime = currentTimestampSecond + 1;
            }

            sceneChanges.push(sceneChangeTime);
        });

        return sceneChanges;
    }
};

async function getVideoLength(localFile) {
    let getLength = util.promisify(ffmpeg.ffprobe);
    let length = await getLength(localFile);

    console.log("video length: ", length.format.duration);
    return length.format.duration;
}
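To see the timestamp rounding rule above in isolation, here is a small standalone sketch (the function name and sample values are hypothetical, not part of the service) that applies the same logic to a list of shot-end offsets:

```javascript
// Illustrates the rounding rule used in detectSceneChanges: each shot-end
// timestamp is rounded up to the next whole second, unless that would pass
// the end of the video, in which case it is kept as-is.
function toSceneTimestamps(shotEndSeconds, videoLength) {
  // the initial scene is always captured at the 1 second mark
  const sceneChanges = [1];

  // if only one scene, keep at 1 second
  if (shotEndSeconds.length === 1) {
    return sceneChanges;
  }

  for (const seconds of shotEndSeconds) {
    // round up to the next second, unless that passes the end of the video
    const sceneChangeTime = seconds + 1 > videoLength ? seconds : seconds + 1;
    sceneChanges.push(sceneChangeTime);
  }
  return sceneChanges;
}

// e.g. a 12-second video with shots ending at 6, 10, and 11 seconds
console.log(toSceneTimestamps([6, 10, 11], 12));
// → [ 1, 7, 11, 12 ]
```

Rounding up to the next second keeps the screenshot safely inside the new scene rather than on the cut itself.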

Now create a file called imageCapture.js with the following content. This file uses the node package fluent-ffmpeg to run ffmpeg commands from within a node app.

const ffmpeg = require('fluent-ffmpeg');
const path = require('path');
const util = require('util');

module.exports = {
    captureImages: async function (localFile, scenes) {

        let imageBaseName = path.parse(localFile).name;

        try {
            for (scene of scenes) {
                console.log("creating screenshot for scene: " + scene);
                await createScreenshot(localFile, imageBaseName, scene);
            }

        } catch (error) {
            console.log("error gathering screenshots: ", error);
        }

        console.log("finished gathering the screenshots");
    }
};

async function createScreenshot(localFile, imageBaseName, scene) {
    return new Promise((resolve, reject) => {
        ffmpeg(localFile)
            .screenshots({
                timestamps: [scene],
                filename: `${imageBaseName}-${scene}.png`,
                folder: 'output',
                size: '320x240'
            }).on("error", () => {
                console.log("Failed to create scene for timestamp: " + scene);
                return reject('Failed to create scene for timestamp: ' + scene);
            })
            .on("end", () => {
                return resolve();
            });
    });
}

Lastly, create a file called imageDescriber.js with the following content. This file uses Vertex AI to get a visual description of each scene image.

const axios = require("axios");
const { GoogleAuth } = require('google-auth-library');

const auth = new GoogleAuth({
    scopes: 'https://www.googleapis.com/auth/cloud-platform'
});

module.exports = {
    getAccessToken: async function () {
        return await auth.getAccessToken();
    },

    transcribeScene: async function (token, encodedFile) {

        let projectId = await auth.getProjectId();

        let config = {
            headers: {
                'Authorization': 'Bearer ' + token,
                'Content-Type': 'application/json; charset=utf-8'
            }
        };

        const json = {
            "instances": [
                {
                    "image": {
                        "bytesBase64Encoded": encodedFile
                    }
                }
            ],
            "parameters": {
                "sampleCount": 1,
                "language": "en"
            }
        };

        let response = await axios.post('https://us-central1-aiplatform.googleapis.com/v1/projects/' + projectId + '/locations/us-central1/publishers/google/models/imagetext:predict', json, config);

        return response.data.predictions[0];
    }
};


Create a Dockerfile and a .dockerignore file

Since this service uses ffmpeg, you'll need to create a Dockerfile that installs ffmpeg.

Create a file called Dockerfile that contains the following content:

# Copyright 2020 Google, LLC.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# Use the official lightweight Node.js image.
FROM node:20.10.0-slim

# Create and change to the app directory.
WORKDIR /usr/src/app

# Install ffmpeg, which the service uses to capture screenshots.
RUN apt-get update && apt-get install -y ffmpeg

# Copy application dependency manifests to the container image.
# A wildcard is used to ensure both package.json AND package-lock.json are copied.
# Copying this separately prevents re-running npm install on every code change.
COPY package*.json ./

# Install dependencies.
# If you add a package-lock.json, speed your build by switching to 'npm ci'.
# RUN npm ci --only=production
RUN npm install --production

# Copy local code to the container image.
COPY . .

# Run the web service on container startup.
CMD [ "npm", "start" ]

And create a file called .dockerignore so that certain local files are not copied into the container image, for example:

Dockerfile
.dockerignore
node_modules
npm-debug.log

6. Create a Service Account

You will create a service account that the Cloud Run service uses to access Cloud Storage, Vertex AI, and the Video Intelligence API, granting it only the roles it needs.


export SERVICE_ACCOUNT=video-describer
export SERVICE_ACCOUNT_ADDRESS=$SERVICE_ACCOUNT@$PROJECT_ID.iam.gserviceaccount.com

gcloud iam service-accounts create $SERVICE_ACCOUNT \
  --display-name="Cloud Run Video Scene Image Describer service account"

# to view & download storage bucket objects
gcloud projects add-iam-policy-binding $PROJECT_ID \
  --member serviceAccount:$SERVICE_ACCOUNT_ADDRESS \
  --role=roles/storage.objectViewer

# to call the Vertex AI imagetext model
gcloud projects add-iam-policy-binding $PROJECT_ID \
  --member serviceAccount:$SERVICE_ACCOUNT_ADDRESS \
  --role=roles/aiplatform.user

7. Deploy the Cloud Run service

Now you can use a source-based deployment to automatically containerize your Cloud Run service.

Note: Cloud Run services have a default request timeout. This codelab sets a 5 minute timeout because the suggested test video is 2 minutes long. You may need to increase the timeout if you are using a video with a longer duration.

gcloud run deploy $SERVICE_NAME \
  --source=. \
  --region=$REGION \
  --set-env-vars BUCKET_ID=$BUCKET_ID \
  --no-allow-unauthenticated \
  --service-account $SERVICE_ACCOUNT_ADDRESS \
  --timeout=5m

Once deployed, save the service URL in an environment variable.

SERVICE_URL=$(gcloud run services describe $SERVICE_NAME --platform managed --region $REGION --format 'value(status.url)')

8. Call the Cloud Run service

Now you can call your service by providing the name of the video you uploaded to Cloud Storage.

curl -X GET -H "Authorization: Bearer $(gcloud auth print-identity-token)" "${SERVICE_URL}?filename=${FILENAME}"

Your results should look similar to the example output below:

[{"timestamp":1,"description":"an aerial view of a city with a bridge in the background"},{"timestamp":7,"description":"a man in a blue shirt sits in front of shelves of donuts"},{"timestamp":11,"description":"a black and white photo of people working in a bakery"},{"timestamp":12,"description":"a black and white photo of a man and woman working in a bakery"}]
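Because the service returns plain JSON, the response is easy to post-process. As one hedged example (the formatScenes helper below is hypothetical and not part of the service), this standalone Node.js snippet turns a scenes array into human-readable "[mm:ss] description" lines:

```javascript
// Hypothetical post-processing helper: format the scenes array returned
// by the Cloud Run service as "[mm:ss] description" lines.
function formatScenes(scenes) {
  return scenes.map(({ timestamp, description }) => {
    const mm = String(Math.floor(timestamp / 60)).padStart(2, '0');
    const ss = String(Math.floor(timestamp % 60)).padStart(2, '0');
    return `[${mm}:${ss}] ${description}`;
  }).join('\n');
}

// sample response from the service
const scenes = [
  { timestamp: 1, description: "an aerial view of a city with a bridge in the background" },
  { timestamp: 7, description: "a man in a blue shirt sits in front of shelves of donuts" }
];

console.log(formatScenes(scenes));
// [00:01] an aerial view of a city with a bridge in the background
// [00:07] a man in a blue shirt sits in front of shelves of donuts
```

You could, for example, save the curl output to a file and feed the parsed JSON to a helper like this one.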

9. Congratulations!

Congratulations on completing the codelab!

We recommend reviewing the documentation on the Video Intelligence API, Cloud Run, and Vertex AI visual captioning.

What we've covered

  • How to create a container image using a Dockerfile to install a third-party binary
  • How to follow the principle of least privilege by creating a service account for the Cloud Run service to call other Google Cloud services
  • How to use the Video Intelligence client library from a Cloud Run service
  • How to make a call to Google APIs to get the visual description of each scene from Vertex AI

10. Clean up

To avoid inadvertent charges (for example, if this Cloud Run service is inadvertently invoked more times than your monthly Cloud Run invocation allotment in the free tier), you can either delete the Cloud Run service or delete the project you created in Step 2.

To delete the Cloud Run service, go to the Cloud Run page in the Cloud Console at https://console.cloud.google.com/run and delete the video-describer service (or $SERVICE_NAME, in case you used a different name).

If you choose to delete the entire project, go to https://console.cloud.google.com/cloud-resource-manager, select the project you created in Step 2, and choose Delete. If you delete the project, you'll need to change projects in your Cloud SDK. You can view the list of all available projects by running gcloud projects list.