1. Introduction
Overview
In this codelab, you'll create a Cloud Run service written in Node.js that provides a visual description of every scene in a video. First, your service will use the Video Intelligence API to detect the timestamp of each scene change. Next, your service will use a 3rd party binary called ffmpeg to capture a screenshot at each scene-change timestamp. Lastly, your service will use Vertex AI visual captioning to provide a visual description of each screenshot.
This codelab also demonstrates how to use ffmpeg within your Cloud Run service to capture images from a video at a given timestamp. Since ffmpeg needs to be installed independently, this codelab shows you how to create a Dockerfile to install ffmpeg as a part of your Cloud Run service.
At a high level, the Cloud Run service works as follows: it downloads the requested video from Cloud Storage, detects scene-change timestamps with the Video Intelligence API, captures a screenshot for each timestamp with ffmpeg, and sends each screenshot to Vertex AI for a visual description.
What you'll learn
- How to create a container image using a Dockerfile to install a 3rd party binary
- How to follow the principle of least privilege by creating a service account for the Cloud Run service to call other Google Cloud services
- How to use the Video Intelligence client library from a Cloud Run service
- How to make a call to Google APIs to get the visual description of each scene from Vertex AI
2. Setup and Requirements
Prerequisites
- You are logged into the Cloud Console.
- You have previously deployed a Cloud Run service. For example, you can follow the deploy a web service from source code quickstart to get started.
Activate Cloud Shell
- From the Cloud Console, click Activate Cloud Shell.
If this is your first time starting Cloud Shell, you're presented with an intermediate screen describing what it is. If so, click Continue.
It should only take a few moments to provision and connect to Cloud Shell.
This virtual machine is loaded with all the development tools needed. It offers a persistent 5 GB home directory and runs in Google Cloud, greatly enhancing network performance and authentication. Much, if not all, of your work in this codelab can be done with a browser.
Once connected to Cloud Shell, you should see that you are authenticated and that the project is set to your project ID.
- Run the following command in Cloud Shell to confirm that you are authenticated:
gcloud auth list
Command output
Credentialed Accounts
ACTIVE  ACCOUNT
*       <my_account>@<my_domain.com>

To set the active account, run:
    $ gcloud config set account `ACCOUNT`
- Run the following command in Cloud Shell to confirm that the gcloud command knows about your project:
gcloud config list project
Command output
[core]
project = <PROJECT_ID>
If your project is not set, you can set it with this command:
gcloud config set project <PROJECT_ID>
Command output
Updated property [core/project].
3. Enable APIs and Set Environment Variables
Before you can start using this codelab, you need to enable several APIs. Run the following command to enable the required APIs:
gcloud services enable run.googleapis.com \
  storage.googleapis.com \
  cloudbuild.googleapis.com \
  videointelligence.googleapis.com \
  aiplatform.googleapis.com
Then you can set environment variables that will be used throughout this codelab.
REGION=<YOUR-REGION>
PROJECT_ID=<YOUR-PROJECT-ID>
PROJECT_NUMBER=$(gcloud projects describe $PROJECT_ID --format='value(projectNumber)')
SERVICE_NAME=video-describer
export BUCKET_ID=$PROJECT_ID-video-describer
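Optionally, as a quick sanity check (not required by the codelab), you can echo the variables to confirm they are set as expected before moving on:

echo "project: $PROJECT_ID ($PROJECT_NUMBER), region: $REGION, service: $SERVICE_NAME, bucket: $BUCKET_ID"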
4. Create a Cloud Storage bucket
Using the following command, create a Cloud Storage bucket where you can upload videos for processing by the Cloud Run service:
gsutil mb -l us-central1 gs://$BUCKET_ID/
[Optional] You can use this sample video by downloading it locally:
gsutil cp gs://cloud-samples-data/video/visionapi.mp4 testvideo.mp4
Now upload your video file to your storage bucket.
FILENAME=<YOUR-VIDEO-FILENAME>
gsutil cp $FILENAME gs://$BUCKET_ID
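Optionally, you can confirm the upload by listing the contents of the bucket:

gsutil ls gs://$BUCKET_ID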
5. Create the Node.js app
First, create a directory for the source code and cd into that directory.
mkdir video-describer && cd $_
Then, create a package.json file with the following content:
{ "name": "video-describer", "version": "1.0.0", "private": true, "description": "describes the image in every scene for a given video", "main": "index.js", "author": "Google LLC", "license": "Apache-2.0", "scripts": { "start": "node index.js" }, "dependencies": { "@google-cloud/storage": "^7.7.0", "@google-cloud/video-intelligence": "^5.0.1", "axios": "^1.6.2", "express": "^4.18.2", "fluent-ffmpeg": "^2.1.2", "google-auth-library": "^9.4.1" } }
This app consists of several source files for improved readability. First, create an index.js source file with the content below. This file contains the entry point for the service as well as the main logic for the app.
const { captureImages } = require('./imageCapture.js');
const { detectSceneChanges } = require('./sceneDetector.js');
const transcribeScene = require('./imageDescriber.js');

const { Storage } = require('@google-cloud/storage');
const fs = require('fs').promises;
const path = require('path');

const express = require('express');
const app = express();

const bucketName = process.env.BUCKET_ID;

const port = parseInt(process.env.PORT) || 8080;
app.listen(port, () => {
  console.log(`video describer service ready: listening on port ${port}`);
});

// entry point for the service
app.get('/', async (req, res) => {
  try {
    // download the requested video from Cloud Storage
    let videoFilename = req.query.filename;
    console.log("processing file: " + videoFilename);

    // download the file locally to the Cloud Run instance
    let localFilename = await downloadVideoFile(videoFilename);

    // detect all the scenes in the video & save timestamps to an array
    let timestamps = await detectSceneChanges(localFilename);
    console.log("Detected scene changes at the following timestamps: ", timestamps);

    // create an image of each scene change
    // and save to a local directory called "output"
    await captureImages(localFilename, timestamps);

    // get an access token for the Service Account to call the Google APIs
    let accessToken = await transcribeScene.getAccessToken();
    console.log("got an access token");

    let imageBaseName = path.parse(localFilename).name;

    // the data structure for storing the scene description and timestamp
    // e.g. an array of json objects {timestamp: 1, description: "..."}, etc.
    let scenes = [];

    // for each timestamp, send the image to Vertex AI
    console.log("getting Vertex AI descriptions for all the timestamps");

    scenes = await Promise.all(
      timestamps.map(async (timestamp) => {
        let filepath = path.join("./output", imageBaseName + "-" + timestamp + ".png");

        // get the base64 encoded image
        const encodedFile = await fs.readFile(filepath, 'base64');

        // send each screenshot to Vertex AI for description
        let description = await transcribeScene.transcribeScene(accessToken, encodedFile);

        return { timestamp: timestamp, description: description };
      }));

    console.log("finished collecting all the scenes");
    //console.log(scenes);

    return res.json(scenes);

  } catch (error) {
    // return an error
    console.log("received error: ", error);
    return res.status(500).json("an internal error occurred");
  }
});

async function downloadVideoFile(videoFilename) {
  // Creates a client
  const storage = new Storage();

  // keep same name locally
  let localFilename = videoFilename;

  const options = {
    destination: localFilename
  };

  // Download the file
  await storage.bucket(bucketName).file(videoFilename).download(options);
  console.log(
    `gs://${bucketName}/${videoFilename} downloaded locally to ${localFilename}.`
  );

  return localFilename;
}
Next, create a sceneDetector.js file with the following content. This file uses the Video Intelligence API to detect when scenes change in the video.
const fs = require('fs');
const util = require('util');
const readFile = util.promisify(fs.readFile);
const ffmpeg = require('fluent-ffmpeg');

const Video = require('@google-cloud/video-intelligence');
const client = new Video.VideoIntelligenceServiceClient();

module.exports = {
  detectSceneChanges: async function (downloadedFile) {
    // Reads a local video file and converts it to base64
    const file = await readFile(downloadedFile);
    const inputContent = file.toString('base64');

    // setup request for shot change detection
    const videoContext = {
      speechTranscriptionConfig: {
        languageCode: 'en-US',
        enableAutomaticPunctuation: true,
      },
    };

    const request = {
      inputContent: inputContent,
      features: ['SHOT_CHANGE_DETECTION'],
    };

    // Detects camera shot changes
    const [operation] = await client.annotateVideo(request);
    console.log('Shot (scene) detection in progress...');
    const [operationResult] = await operation.promise();

    // Gets shot changes
    const shotChanges = operationResult.annotationResults[0].shotAnnotations;
    console.log("Shot (scene) changes detected: " + shotChanges.length);

    // data structure to be returned
    let sceneChanges = [];

    // for the initial scene
    sceneChanges.push(1);

    // if only one scene, keep at 1 second
    if (shotChanges.length === 1) {
      return sceneChanges;
    }

    // get length of video
    const videoLength = await getVideoLength(downloadedFile);

    shotChanges.forEach((shot, shotIndex) => {
      if (shot.endTimeOffset === undefined) {
        shot.endTimeOffset = {};
      }
      if (shot.endTimeOffset.seconds === undefined) {
        shot.endTimeOffset.seconds = 0;
      }
      if (shot.endTimeOffset.nanos === undefined) {
        shot.endTimeOffset.nanos = 0;
      }

      // convert to a number
      let currentTimestampSecond = Number(shot.endTimeOffset.seconds);
      let sceneChangeTime = 0;

      // double-check no scenes were detected within the last second
      if (currentTimestampSecond + 1 > videoLength) {
        sceneChangeTime = currentTimestampSecond;
      } else {
        // otherwise, for simplicity, just round up to the next second
        sceneChangeTime = currentTimestampSecond + 1;
      }

      sceneChanges.push(sceneChangeTime);
    });

    return sceneChanges;
  }
}

async function getVideoLength(localFile) {
  let getLength = util.promisify(ffmpeg.ffprobe);
  let length = await getLength(localFile);
  console.log("video length: ", length.format.duration);
  return length.format.duration;
}
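For reference, detectSceneChanges returns an array of whole-second timestamps, with the first scene pinned to 1 second. For the suggested sample video, the returned array looks similar to [ 1, 7, 11, 12 ].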
Now create a file called imageCapture.js with the following content. This file uses the node package fluent-ffmpeg to run ffmpeg commands from within a node app.
const ffmpeg = require('fluent-ffmpeg');
const path = require('path');
const util = require('util');

module.exports = {
  captureImages: async function (localFile, scenes) {
    let imageBaseName = path.parse(localFile).name;

    try {
      for (const scene of scenes) {
        console.log("creating screenshot for scene: " + scene);
        await createScreenshot(localFile, imageBaseName, scene);
      }
    } catch (error) {
      console.log("error gathering screenshots: ", error);
    }

    console.log("finished gathering the screenshots");
  }
}

async function createScreenshot(localFile, imageBaseName, scene) {
  return new Promise((resolve, reject) => {
    ffmpeg(localFile)
      .screenshots({
        timestamps: [scene],
        filename: `${imageBaseName}-${scene}.png`,
        folder: 'output',
        size: '320x240'
      }).on("error", () => {
        console.log("Failed to create scene for timestamp: " + scene);
        return reject('Failed to create scene for timestamp: ' + scene);
      })
      .on("end", () => {
        return resolve();
      });
  });
}
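Each screenshot is saved in the output directory as a 320x240 PNG named after the video's base name and the scene timestamp. For example, the scene at 7 seconds in testvideo.mp4 becomes output/testvideo-7.png, which is the same path index.js reads back before calling Vertex AI.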
Lastly, create a file called imageDescriber.js with the following content. This file uses Vertex AI to get a visual description of each scene image.
const axios = require("axios"); const { GoogleAuth } = require('google-auth-library'); const auth = new GoogleAuth({ scopes: 'https://www.googleapis.com/auth/cloud-platform' }); module.exports = { getAccessToken: async function () { return await auth.getAccessToken(); }, transcribeScene: async function(token, encodedFile) { let projectId = await auth.getProjectId(); let config = { headers: { 'Authorization': 'Bearer ' + token, 'Content-Type': 'application/json; charset=utf-8' } } const json = { "instances": [ { "image": { "bytesBase64Encoded": encodedFile } } ], "parameters": { "sampleCount": 1, "language": "en" } } let response = await axios.post('https://us-central1-aiplatform.googleapis.com/v1/projects/' + projectId + '/locations/us-central1/publishers/google/models/imagetext:predict', json, config); return response.data.predictions[0]; } }
Create a Dockerfile and a .dockerignore file
Since this service uses ffmpeg, you'll need to create a Dockerfile that installs ffmpeg.
Create a file called Dockerfile that contains the following content:
# Copyright 2020 Google, LLC.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# Use the official lightweight Node.js image.
# https://hub.docker.com/_/node
FROM node:20.10.0-slim

# Create and change to the app directory.
WORKDIR /usr/src/app

RUN apt-get update && apt-get install -y ffmpeg

# Copy application dependency manifests to the container image.
# A wildcard is used to ensure both package.json AND package-lock.json are copied.
# Copying this separately prevents re-running npm install on every code change.
COPY package*.json ./

# Install dependencies.
# If you add a package-lock.json, speed your build by switching to 'npm ci'.
# RUN npm ci --only=production
RUN npm install --production

# Copy local code to the container image.
COPY . .

# Run the web service on container startup.
CMD [ "npm", "start" ]
Then create a file called .dockerignore to exclude certain files from containerization.
Dockerfile
.dockerignore
node_modules
npm-debug.log
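Before deploying, you can optionally try the service locally from the source directory. This is a rough sketch that assumes ffmpeg is installed in your local environment and that your own credentials (Application Default Credentials) can access the bucket, the Video Intelligence API, and Vertex AI:

# install dependencies and start the service on port 8080
npm install
BUCKET_ID=$BUCKET_ID npm start

# in a second terminal, call the service with the video you uploaded
curl "http://localhost:8080/?filename=${FILENAME}"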
6. Create a Service Account
Create a service account that the Cloud Run service will use to access Cloud Storage, Vertex AI, and the Video Intelligence API, then grant it only the roles it needs:
SERVICE_ACCOUNT="cloud-run-video-description" SERVICE_ACCOUNT_ADDRESS=$SERVICE_ACCOUNT@$PROJECT_ID.iam.gserviceaccount.com gcloud iam service-accounts create $SERVICE_ACCOUNT \ --display-name="Cloud Run Video Scene Image Describer service account" # to view & download storage bucket objects gcloud projects add-iam-policy-binding $PROJECT_ID \ --member serviceAccount:$SERVICE_ACCOUNT_ADDRESS \ --role=roles/storage.objectViewer # to call the Vertex AI imagetext model gcloud projects add-iam-policy-binding $PROJECT_ID \ --member serviceAccount:$SERVICE_ACCOUNT_ADDRESS \ --role=roles/aiplatform.user
7. Deploy the Cloud Run service
Now you can use a source-based deployment to automatically containerize your Cloud Run service.
Note: this codelab deploys the service with a 5 minute request timeout because the suggested test video is about 2 minutes long. You may need to increase the timeout if you are using a longer video.
gcloud run deploy $SERVICE_NAME \
  --region=$REGION \
  --set-env-vars BUCKET_ID=$BUCKET_ID \
  --no-allow-unauthenticated \
  --service-account $SERVICE_ACCOUNT_ADDRESS \
  --timeout=5m \
  --source=.
Once the deployment completes, save the service URL in an environment variable.
SERVICE_URL=$(gcloud run services describe $SERVICE_NAME --platform managed --region $REGION --format 'value(status.url)')
8. Call the Cloud Run service
Now you can call your service by providing the name of the video you uploaded to Cloud Storage.
curl -X GET -H "Authorization: Bearer $(gcloud auth print-identity-token)" ${SERVICE_URL}?filename=${FILENAME}
Your results should look similar to the example output below:
[{"timestamp":1,"description":"an aerial view of a city with a bridge in the background"},{"timestamp":7,"description":"a man in a blue shirt sits in front of shelves of donuts"},{"timestamp":11,"description":"a black and white photo of people working in a bakery"},{"timestamp":12,"description":"a black and white photo of a man and woman working in a bakery"}]
9. Congratulations!
Congratulations on completing the codelab!
We recommend reviewing the documentation for the Video Intelligence API, Cloud Run, and Vertex AI visual captioning.
What we've covered
- How to create a container image using a Dockerfile to install a 3rd party binary
- How to follow the principle of least privilege by creating a service account for the Cloud Run service to call other Google Cloud services
- How to use the Video Intelligence client library from a Cloud Run service
- How to make a call to Google APIs to get the visual description of each scene from Vertex AI
10. Clean up
To avoid incurring charges (for example, if this Cloud Run service is inadvertently invoked more times than your monthly Cloud Run invocation allotment in the free tier), you can either delete the Cloud Run service or delete the project you created in Step 2.
To delete the Cloud Run service, go to the Cloud Run Cloud Console at https://console.cloud.google.com/run/ and delete the video-describer service (or $SERVICE_NAME, if you used a different name).
If you choose to delete the entire project, go to https://console.cloud.google.com/cloud-resource-manager, select the project you created in Step 2, and choose Delete. If you delete the project, you'll need to change projects in your Cloud SDK. You can view the list of all available projects by running gcloud projects list.
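Alternatively, you can clean up from Cloud Shell. For example, the following commands delete the Cloud Run service, remove the storage bucket along with the uploaded videos, or delete the entire project:

# delete the Cloud Run service
gcloud run services delete $SERVICE_NAME --region $REGION

# delete the bucket and its contents
gsutil rm -r gs://$BUCKET_ID

# or delete the whole project
gcloud projects delete $PROJECT_ID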