1. Introduction
Overview
In this codelab, you'll create a Cloud Run Job written in Node.js that provides a visual description of every scene in a video. First, your job will use the Video Intelligence API to detect the timestamps of scene changes. Next, your job will use a 3rd party binary called ffmpeg to capture a screenshot at each scene-change timestamp. Lastly, your job will use Vertex AI visual captioning to provide a visual description of each screenshot.
This codelab also demonstrates how to use ffmpeg within your Cloud Run Job to capture images from a video at a given timestamp. Since ffmpeg needs to be installed independently, this codelab shows you how to create a Dockerfile to install ffmpeg as a part of your Cloud Run Job.
Here is an illustration of how the Cloud Run Job works:
What you'll learn
- How to create a container image using a Dockerfile to install a 3rd party binary
- How to follow the principle of least privilege by creating a service account for the Cloud Run Job to call other Google Cloud services
- How to use the Video Intelligence client library from a Cloud Run Job
- How to make a call to Google APIs to get the visual description of each scene from Vertex AI
2. Setup and Requirements
Prerequisites
- You are logged into the Cloud Console.
- You have previously deployed a Cloud Run service. For example, you can follow the deploy a web service from source code quickstart to get started.
Activate Cloud Shell
- From the Cloud Console, click Activate Cloud Shell.
If this is your first time starting Cloud Shell, you're presented with an intermediate screen describing what it is. If you see this screen, click Continue.
It should only take a few moments to provision and connect to Cloud Shell.
This virtual machine is loaded with all the development tools needed. It offers a persistent 5 GB home directory and runs in Google Cloud, greatly enhancing network performance and authentication. Much, if not all, of your work in this codelab can be done with a browser.
Once connected to Cloud Shell, you should see that you are authenticated and that the project is set to your project ID.
- Run the following command in Cloud Shell to confirm that you are authenticated:
gcloud auth list
Command output
Credentialed Accounts

ACTIVE  ACCOUNT
*       <my_account>@<my_domain.com>

To set the active account, run:
    $ gcloud config set account `ACCOUNT`
- Run the following command in Cloud Shell to confirm that the gcloud command knows about your project:
gcloud config list project
Command output
[core]
project = <PROJECT_ID>
If your project is not set, you can set it with this command:
gcloud config set project <PROJECT_ID>
Command output
Updated property [core/project].
3. Enable APIs and Set Environment Variables
Before you can start working through this codelab, you need to enable several APIs. You can enable the required APIs by running the following command:
gcloud services enable run.googleapis.com \
  storage.googleapis.com \
  cloudbuild.googleapis.com \
  videointelligence.googleapis.com \
  aiplatform.googleapis.com
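[Optional] If you'd like to confirm the APIs were enabled before moving on, one quick way (a sketch that simply filters the output of gcloud services list) is:

gcloud services list --enabled | grep -E 'run|storage|cloudbuild|videointelligence|aiplatform'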
Then you can set environment variables that will be used throughout this codelab.
REGION=<YOUR-REGION>
PROJECT_ID=<YOUR-PROJECT-ID>
PROJECT_NUMBER=$(gcloud projects describe $PROJECT_ID --format='value(projectNumber)')
JOB_NAME=video-describer-job
BUCKET_ID=$PROJECT_ID-video-describer
SERVICE_ACCOUNT="cloud-run-job-video"
SERVICE_ACCOUNT_ADDRESS=$SERVICE_ACCOUNT@$PROJECT_ID.iam.gserviceaccount.com
4. Create a Service Account
You will create a service account that the Cloud Run Job uses to access Cloud Storage, Vertex AI, and the Video Intelligence API.
First, create the service account.
gcloud iam service-accounts create $SERVICE_ACCOUNT \
  --display-name="Cloud Run Video Scene Image Describer service account"
Then grant the service account access to the Cloud Storage bucket and Vertex AI APIs.
# to view & download storage bucket objects
gcloud projects add-iam-policy-binding $PROJECT_ID \
  --member serviceAccount:$SERVICE_ACCOUNT_ADDRESS \
  --role=roles/storage.objectViewer

# to call the Vertex AI imagetext model
gcloud projects add-iam-policy-binding $PROJECT_ID \
  --member serviceAccount:$SERVICE_ACCOUNT_ADDRESS \
  --role=roles/aiplatform.user
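[Optional] To double-check that the roles were granted, a sketch like the following lists the roles bound to the service account (this is just one way to verify, using gcloud's --flatten and --filter options):

gcloud projects get-iam-policy $PROJECT_ID \
  --flatten="bindings[].members" \
  --filter="bindings.members:serviceAccount:$SERVICE_ACCOUNT_ADDRESS" \
  --format="value(bindings.role)"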
5. Create a Cloud Storage bucket
Using the following command, create a Cloud Storage bucket where you can upload videos for the Cloud Run Job to process:
gsutil mb -l us-central1 gs://$BUCKET_ID/
[Optional] If you don't have a video of your own, you can download this sample video to use instead.
gsutil cp gs://cloud-samples-data/video/visionapi.mp4 testvideo.mp4
Now upload your video file to your storage bucket.
FILENAME=<YOUR-VIDEO-FILENAME>
gsutil cp $FILENAME gs://$BUCKET_ID
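[Optional] To confirm the upload succeeded, you can list the contents of the bucket:

gsutil ls gs://$BUCKET_ID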
6. Create the Cloud Run Job
First, create a directory for the source code and cd into that directory.
mkdir video-describer-job && cd $_
Then, create a package.json file with the following content:
{ "name": "video-describer-job", "version": "1.0.0", "private": true, "description": "describes the image in every scene for a given video", "main": "app.js", "author": "Google LLC", "license": "Apache-2.0", "scripts": { "start": "node app.js" }, "dependencies": { "@google-cloud/storage": "^7.7.0", "@google-cloud/video-intelligence": "^5.0.1", "axios": "^1.6.2", "fluent-ffmpeg": "^2.1.2", "google-auth-library": "^9.4.1" } }
This app consists of several source files for improved readability. First, create an app.js source file with the content below. This file contains the entry point for the job and the main logic for the app. Be sure to replace the bucketName and videoFilename placeholder values at the top of the file with your own bucket ID and video filename.
const bucketName = "<YOUR_BUCKET_ID>";
const videoFilename = "<YOUR-VIDEO-FILENAME>";

const { captureImages } = require("./helpers/imageCapture.js");
const { detectSceneChanges } = require("./helpers/sceneDetector.js");
const { getImageCaption } = require("./helpers/imageCaptioning.js");
const storageHelper = require("./helpers/storage.js");
const authHelper = require("./helpers/auth.js");

const fs = require("fs").promises;
const path = require("path");

const main = async () => {
  try {
    // download the file locally to the Cloud Run Job instance
    let localFilename = await storageHelper.downloadVideoFile(
      bucketName,
      videoFilename
    );

    // PART 1 - Use the Video Intelligence API to
    // detect all the scenes in the video & save timestamps to an array
    // EXAMPLE OUTPUT
    // Detected scene changes at the following timestamps:
    // [1, 7, 11, 12]
    let timestamps = await detectSceneChanges(localFilename);
    console.log(
      "Detected scene changes at the following timestamps: ",
      timestamps
    );

    // PART 2 - Use ffmpeg (installed via the Dockerfile) to
    // create an image of each scene change
    // and save it to a local directory called "output"
    // EXAMPLE OUTPUT
    // creating screenshot for scene: 1 at output/video-filename-1.png
    // creating screenshot for scene: 7 at output/video-filename-7.png
    // creating screenshot for scene: 11 at output/video-filename-11.png
    // creating screenshot for scene: 12 at output/video-filename-12.png
    // returns the base filename for the generated images
    let imageBaseName = await captureImages(localFilename, timestamps);

    // PART 3a - get an access token to call the Vertex AI APIs via REST,
    // needed for the image captioning
    // since we're calling the Vertex AI APIs directly
    let accessToken = await authHelper.getAccessToken();
    console.log("got an access token");

    // PART 3b - use Image Captioning to describe each scene per screenshot
    // EXAMPLE OUTPUT
    /*
    [
      {
        timestamp: 1,
        description: "an aerial view of a city with a bridge in the background"
      },
      {
        timestamp: 7,
        description: "a man in a blue shirt sits in front of shelves of donuts"
      },
      {
        timestamp: 11,
        description: "a black and white photo of people working in a bakery"
      },
      {
        timestamp: 12,
        description: "a black and white photo of a man and woman working in a bakery"
      }
    ];
    */

    // instantiate the data structure for storing the scene description and timestamp
    // e.g. an array of json objects,
    // [{ timestamp: 5, description: "..." }, ...]
    let scenes = [];

    // for each timestamp, send the image to Vertex AI
    console.log("getting Vertex AI description for each timestamp");
    scenes = await Promise.all(
      timestamps.map(async (timestamp) => {
        let filepath = path.join(
          "./output",
          imageBaseName + "-" + timestamp + ".png"
        );

        // get the base64 encoded image bc sending via REST
        const encodedFile = await fs.readFile(filepath, "base64");

        // send each screenshot to Vertex AI for description
        let description = await getImageCaption(accessToken, encodedFile);

        return { timestamp: timestamp, description: description };
      })
    );

    console.log("finished collecting all the scenes");
    console.log(scenes);
  } catch (error) {
    // return an error
    console.error("received error: ", error);
  }
};

// Start script
main().catch((err) => {
  console.error(err);
});
Next, create the Dockerfile.
# Copyright 2020 Google, LLC.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# Use the official lightweight Node.js image.
# https://hub.docker.com/_/node
FROM node:20.10.0-slim

# Create and change to the app directory.
WORKDIR /usr/src/app

# Install ffmpeg so the job can capture screenshots from the video.
RUN apt-get update && apt-get install -y ffmpeg

# Copy application dependency manifests to the container image.
# A wildcard is used to ensure both package.json AND package-lock.json are copied.
# Copying this separately prevents re-running npm install on every code change.
COPY package*.json ./

# Install dependencies.
# If you add a package-lock.json, speed your build by switching to 'npm ci'.
# RUN npm ci --only=production
RUN npm install --production

# Copy local code to the container image.
COPY . .

# Run the job on container startup.
CMD [ "npm", "start" ]
And create a file called .dockerignore to exclude certain files from the container image.
Dockerfile
.dockerignore
node_modules
npm-debug.log
Now create a folder called helpers. This folder will contain 5 helper files.
mkdir helpers
cd helpers
Next, create a sceneDetector.js file with the following content. This file uses the Video Intelligence API to detect when scenes change in the video.
const fs = require("fs");
const util = require("util");
const readFile = util.promisify(fs.readFile);
const ffmpeg = require("fluent-ffmpeg");

const Video = require("@google-cloud/video-intelligence");
const client = new Video.VideoIntelligenceServiceClient();

module.exports = {
  detectSceneChanges: async function (downloadedFile) {
    // Reads a local video file and converts it to base64
    const file = await readFile(downloadedFile);
    const inputContent = file.toString("base64");

    // setup request for shot change detection
    const videoContext = {
      speechTranscriptionConfig: {
        languageCode: "en-US",
        enableAutomaticPunctuation: true
      }
    };

    const request = {
      inputContent: inputContent,
      features: ["SHOT_CHANGE_DETECTION"]
    };

    // Detects camera shot changes
    const [operation] = await client.annotateVideo(request);
    console.log("Shot (scene) detection in progress...");
    const [operationResult] = await operation.promise();

    // Gets shot changes
    const shotChanges = operationResult.annotationResults[0].shotAnnotations;
    console.log("Shot (scene) changes detected: " + shotChanges.length);

    // data structure to be returned
    let sceneChanges = [];

    // for the initial scene
    sceneChanges.push(1);

    // if only one scene, keep at 1 second
    if (shotChanges.length === 1) {
      return sceneChanges;
    }

    // get length of video
    const videoLength = await getVideoLength(downloadedFile);

    shotChanges.forEach((shot, shotIndex) => {
      if (shot.endTimeOffset === undefined) {
        shot.endTimeOffset = {};
      }
      if (shot.endTimeOffset.seconds === undefined) {
        shot.endTimeOffset.seconds = 0;
      }
      if (shot.endTimeOffset.nanos === undefined) {
        shot.endTimeOffset.nanos = 0;
      }

      // convert to a number
      let currentTimestampSecond = Number(shot.endTimeOffset.seconds);
      let sceneChangeTime = 0;

      // double-check no scenes were detected within the last second
      if (currentTimestampSecond + 1 > videoLength) {
        sceneChangeTime = currentTimestampSecond;
      } else {
        // otherwise, for simplicity, just round up to the next second
        sceneChangeTime = currentTimestampSecond + 1;
      }

      sceneChanges.push(sceneChangeTime);
    });

    return sceneChanges;
  }
};

async function getVideoLength(localFile) {
  let getLength = util.promisify(ffmpeg.ffprobe);
  let length = await getLength(localFile);
  console.log("video length: ", length.format.duration);
  return length.format.duration;
}
Now create a file called imageCapture.js with the following content. This file uses the node package fluent-ffmpeg to run ffmpeg commands from within a node app.
const ffmpeg = require("fluent-ffmpeg");
const path = require("path");
const util = require("util");

module.exports = {
  captureImages: async function (localFile, scenes) {
    let imageBaseName = path.parse(localFile).name;

    try {
      for (const scene of scenes) {
        console.log("creating screenshot for scene: ", +scene);
        await createScreenshot(localFile, imageBaseName, scene);
      }
    } catch (error) {
      console.log("error gathering screenshots: ", error);
    }

    console.log("finished gathering the screenshots");
    return imageBaseName; // return the base filename for each image
  }
};

async function createScreenshot(localFile, imageBaseName, scene) {
  return new Promise((resolve, reject) => {
    ffmpeg(localFile)
      .screenshots({
        timestamps: [scene],
        filename: `${imageBaseName}-${scene}.png`,
        folder: "output",
        size: "320x240"
      })
      .on("error", () => {
        console.log("Failed to create scene for timestamp: " + scene);
        return reject("Failed to create scene for timestamp: " + scene);
      })
      .on("end", () => {
        return resolve();
      });
  });
}
Next, create a file called imageCaptioning.js with the following content. This file uses Vertex AI to get a visual description of each scene image.
const axios = require("axios");

const { GoogleAuth } = require("google-auth-library");
const auth = new GoogleAuth({
  scopes: "https://www.googleapis.com/auth/cloud-platform"
});

module.exports = {
  getImageCaption: async function (token, encodedFile) {
    // this example shows you how to call the Vertex REST APIs directly
    // https://cloud.google.com/vertex-ai/generative-ai/docs/image/image-captioning#get-captions-short
    // https://cloud.google.com/vertex-ai/generative-ai/docs/model-reference/image-captioning

    let projectId = await auth.getProjectId();

    let config = {
      headers: {
        "Authorization": "Bearer " + token,
        "Content-Type": "application/json; charset=utf-8"
      }
    };

    const json = {
      "instances": [
        {
          "image": {
            "bytesBase64Encoded": encodedFile
          }
        }
      ],
      "parameters": {
        "sampleCount": 1,
        "language": "en"
      }
    };

    let response = await axios.post(
      "https://us-central1-aiplatform.googleapis.com/v1/projects/" +
        projectId +
        "/locations/us-central1/publishers/google/models/imagetext:predict",
      json,
      config
    );

    return response.data.predictions[0];
  }
};
Create a file called auth.js. This file will use the Google authentication client library to get an access token needed to call the Vertex AI endpoints directly.
const { GoogleAuth } = require("google-auth-library");
const auth = new GoogleAuth({
  scopes: "https://www.googleapis.com/auth/cloud-platform"
});

module.exports = {
  getAccessToken: async function () {
    return await auth.getAccessToken();
  }
};
Lastly, create a file called storage.js. This file will use the Cloud Storage client library to download a video from Cloud Storage.
const { Storage } = require("@google-cloud/storage");

module.exports = {
  downloadVideoFile: async function (bucketName, videoFilename) {
    // Creates a client
    const storage = new Storage();

    // keep same name locally
    let localFilename = videoFilename;

    const options = {
      destination: localFilename
    };

    // Download the file
    await storage.bucket(bucketName).file(videoFilename).download(options);

    console.log(
      `gs://${bucketName}/${videoFilename} downloaded locally to ${localFilename}.`
    );

    return localFilename;
  }
};
7. Deploy and Execute the Cloud Run Job
First, make sure you are in the codelab's root directory, video-describer-job.
cd .. && pwd
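[Optional] Before deploying, you can sanity-check that the Dockerfile installs ffmpeg by building the image locally and printing the ffmpeg version. This assumes Docker is available in your environment (as it is in Cloud Shell); it's only a local smoke test, not part of the deployment.

docker build -t video-describer-job .
docker run --rm --entrypoint ffmpeg video-describer-job -version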
Then, you can use this command to deploy the Cloud Run Job, attaching the service account you created earlier so the job can call Cloud Storage, the Video Intelligence API, and Vertex AI:
gcloud run jobs deploy $JOB_NAME --source . --region $REGION \
  --service-account $SERVICE_ACCOUNT_ADDRESS
Now you can execute the Cloud Run Job by running the following command:
gcloud run jobs execute $JOB_NAME
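By default, this command returns once the execution has started. If you'd rather have it block until the job finishes, recent gcloud versions support a --wait flag (check your gcloud version if the flag is not recognized):

gcloud run jobs execute $JOB_NAME --wait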
Once the job has finished executing, you can run the following command to get the log URI. (Or you can go directly to Cloud Run Jobs in the Cloud Console to view the logs.)
gcloud run jobs executions describe <JOB_EXECUTION_ID>
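If you don't have the execution ID handy, you can list recent executions for the job and copy the ID from the output:

gcloud run jobs executions list --job $JOB_NAME --region $REGION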
You should see the following output in the logs:
[{ timestamp: 1, description: 'what is google cloud vision api ? is written on a white background .'},
 { timestamp: 3, description: 'a woman wearing a google cloud vision api shirt sits at a table'},
 { timestamp: 18, description: 'a person holding a cell phone with the words what is cloud vision api on the bottom' },
 ...]
8. Congratulations!
Congratulations on completing the codelab!
We recommend reviewing the documentation on Video Intelligence API, Cloud Run, and Vertex AI visual captioning.
What we've covered
- How to create a container image using a Dockerfile to install a 3rd party binary
- How to follow the principle of least privilege by creating a service account for the Cloud Run Job to call other Google Cloud services
- How to use the Video Intelligence client library from a Cloud Run Job
- How to make a call to Google APIs to get the visual description of each scene from Vertex AI
9. Clean up
To avoid inadvertent charges (for example, if this Cloud Run job is invoked more times than your monthly Cloud Run invocation allocation in the free tier), you can either delete the Cloud Run job or delete the project you created in Step 2.
To delete the Cloud Run job, go to the Cloud Run Cloud Console at https://console.cloud.google.com/run/ and delete the video-describer-job job (or $JOB_NAME, if you used a different name).
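Alternatively, if your Cloud Shell session still has the environment variables from earlier, you can delete the job from the command line instead:

gcloud run jobs delete $JOB_NAME --region $REGION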
If you choose to delete the entire project, you can go to https://console.cloud.google.com/cloud-resource-manager, select the project you created in Step 2, and choose Delete. If you delete the project, you'll need to change projects in your Cloud SDK. You can view the list of all available projects by running gcloud projects list.