How to use Cloud Run jobs and the Video Intelligence API to process videos

1. Introduction

Overview

In this codelab, you'll create a Cloud Run job written in Node.js that provides a visual description of every scene in a video. First, the job uses the Video Intelligence API to detect the timestamps where scene changes occur. Next, the job uses a third-party binary called ffmpeg to capture a screenshot at each scene-change timestamp. Finally, Vertex AI visual captioning is used to provide a visual description of each screenshot.

This codelab also demonstrates how to use ffmpeg within a Cloud Run job to capture images from a video at a given timestamp. Because ffmpeg has to be installed separately, this codelab shows how to create a Dockerfile that installs ffmpeg as part of the Cloud Run job's container image.

The following diagram illustrates how the Cloud Run job works:

(Illustration: flow of the Cloud Run video describer job)

What you'll learn

  • How to create a container image with a Dockerfile to install a third-party binary
  • How to follow the principle of least privilege by creating a service account for the Cloud Run job to call other Google Cloud services
  • How to use the Video Intelligence client library from a Cloud Run job
  • How to call Google APIs to get the visual description of each scene from Vertex AI

2. Setup and requirements

Prerequisites

Activate Cloud Shell

  1. From the Cloud Console, click the Activate Cloud Shell icon.

If this is your first time starting Cloud Shell, you're presented with an intermediate screen describing what it is. If that happens, click Continue.

It should only take a few moments to provision and connect to Cloud Shell.

This virtual machine is loaded with all the development tools you need. It offers a persistent 5 GB home directory and runs in Google Cloud, greatly enhancing network performance and authentication. Much, if not all, of your work in this codelab can be done from a browser.

Once connected to Cloud Shell, you should see that you are authenticated and that the project is set to your project ID.

  2. Run the following command in Cloud Shell to confirm that you are authenticated:
gcloud auth list

Command output

 Credentialed Accounts
ACTIVE  ACCOUNT
*       <my_account>@<my_domain.com>

To set the active account, run:
    $ gcloud config set account `ACCOUNT`

  3. Run the following command in Cloud Shell to confirm that the gcloud command knows about your project:
gcloud config list project

Command output

[core]
project = <PROJECT_ID>

If it is not set, you can set it with this command:

gcloud config set project <PROJECT_ID>

Command output

Updated property [core/project].

3. Enable APIs and set environment variables

Before you can start using this codelab, there are several APIs you need to enable. This codelab requires the following APIs, which you can enable by running this command:

gcloud services enable run.googleapis.com \
    storage.googleapis.com \
    cloudbuild.googleapis.com \
    videointelligence.googleapis.com \
    aiplatform.googleapis.com
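
[Optional] If you want to confirm that the APIs are enabled before continuing, a quick check such as the one below works. This verification step is not part of the original flow; it simply filters the list of enabled services for the ones used in this codelab.

gcloud services list --enabled | grep -E 'run|storage|cloudbuild|videointelligence|aiplatform'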

Next, set the environment variables that will be used throughout this codelab:

REGION=<YOUR-REGION>
PROJECT_ID=<YOUR-PROJECT-ID>
PROJECT_NUMBER=$(gcloud projects describe $PROJECT_ID --format='value(projectNumber)')
JOB_NAME=video-describer-job
BUCKET_ID=$PROJECT_ID-video-describer
SERVICE_ACCOUNT="cloud-run-job-video"
SERVICE_ACCOUNT_ADDRESS=$SERVICE_ACCOUNT@$PROJECT_ID.iam.gserviceaccount.com

4. Create a service account

You will create a service account for the Cloud Run job to use to access Cloud Storage, Vertex AI, and the Video Intelligence API.

First, create the service account.

gcloud iam service-accounts create $SERVICE_ACCOUNT \
  --display-name="Cloud Run Video Scene Image Describer service account"

Next, grant the service account access to Cloud Storage bucket objects and to the Vertex AI API.

# to view & download storage bucket objects
gcloud projects add-iam-policy-binding $PROJECT_ID \
  --member serviceAccount:$SERVICE_ACCOUNT_ADDRESS \
  --role=roles/storage.objectViewer

# to call the Vertex AI imagetext model
gcloud projects add-iam-policy-binding $PROJECT_ID \
  --member serviceAccount:$SERVICE_ACCOUNT_ADDRESS \
  --role=roles/aiplatform.user
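
[Optional] If you'd like to double-check which roles were granted, you can inspect the project's IAM policy for this service account. This is only a verification step and is not required by the codelab.

gcloud projects get-iam-policy $PROJECT_ID \
  --flatten="bindings[].members" \
  --filter="bindings.members:serviceAccount:$SERVICE_ACCOUNT_ADDRESS" \
  --format="value(bindings.role)"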

5. Create a Cloud Storage bucket

Use the following command to create a Cloud Storage bucket where you can upload videos for the Cloud Run job to process:

gsutil mb -l us-central1 gs://$BUCKET_ID/

[Optional] You can download this sample video to use for this codelab.

gsutil cp gs://cloud-samples-data/video/visionapi.mp4 testvideo.mp4

Now, upload the video file to your storage bucket.

FILENAME=<YOUR-VIDEO-FILENAME>
gsutil cp $FILENAME gs://$BUCKET_ID
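
[Optional] You can verify that the upload succeeded by listing the bucket's contents:

gsutil ls gs://$BUCKET_ID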

6. Create the Cloud Run job

First, create a directory for the source code and cd into it.

mkdir video-describer-job && cd $_

Next, create a package.json file with the following content:

{
  "name": "video-describer-job",
  "version": "1.0.0",
  "private": true,
  "description": "describes the image in every scene for a given video",
  "main": "app.js",
  "author": "Google LLC",
  "license": "Apache-2.0",
  "scripts": {
    "start": "node app.js"
  },
  "dependencies": {
    "@google-cloud/storage": "^7.7.0",
    "@google-cloud/video-intelligence": "^5.0.1",
    "axios": "^1.6.2",
    "fluent-ffmpeg": "^2.1.2",
    "google-auth-library": "^9.4.1"
  }
}
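
[Optional] If you later want to use 'npm ci' in the Dockerfile (see the comment in the Dockerfile below), you can generate a package-lock.json locally first. This assumes you run the command in the video-describer-job directory using the Node.js tooling that ships with Cloud Shell.

npm install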

To improve readability, the application contains several source files. First, create an app.js source file with the content below. This file contains the entry point for the job as well as the main logic of the application. In the first two lines, replace <YOUR_BUCKET_ID> and <YOUR-VIDEO-FILENAME> with the bucket ID and video filename you used above.

const bucketName = "<YOUR_BUCKET_ID>";
const videoFilename = "<YOUR-VIDEO-FILENAME>";

const { captureImages } = require("./helpers/imageCapture.js");
const { detectSceneChanges } = require("./helpers/sceneDetector.js");
const { getImageCaption } = require("./helpers/imageCaptioning.js");
const storageHelper = require("./helpers/storage.js");
const authHelper = require("./helpers/auth.js");

const fs = require("fs").promises;
const path = require("path");

const main = async () => {

    try {

        // download the file locally to the Cloud Run Job instance
        let localFilename = await storageHelper.downloadVideoFile(
            bucketName,
            videoFilename
        );

        // PART 1 - Use Video Intelligence API
        // detect all the scenes in the video & save timestamps to an array

        // EXAMPLE OUTPUT
        // Detected scene changes at the following timestamps:
        // [1, 7, 11, 12]
        let timestamps = await detectSceneChanges(localFilename);
        console.log(
            "Detected scene changes at the following timestamps: ",
            timestamps
        );

        // PART 2 - Use ffmpeg via dockerfile install
        // create an image of each scene change
        // and save to a local directory called "output"
        // returns the base filename for the generated images

        // EXAMPLE OUTPUT
        // creating screenshot for scene:  1 at output/video-filename-1.png
        // creating screenshot for scene:  7 at output/video-filename-7.png
        // creating screenshot for scene:  11 at output/video-filename-11.png
        // creating screenshot for scene:  12 at output/video-filename-12.png
        // returns the base filename for the generated images
        let imageBaseName = await captureImages(localFilename, timestamps);

        // PART 3a - get Access Token to call Vertex AI APIs via REST
        // needed for the image captioning
        // since we're calling the Vertex AI APIs directly
        let accessToken = await authHelper.getAccessToken();
        console.log("got an access token");

        // PART 3b - use Image Captioning to describe each scene per screenshot
        // EXAMPLE OUTPUT
        /*
        [
            {
                timestamp: 1,
                description:
                    "an aerial view of a city with a bridge in the background"
            },
            {
                timestamp: 7,
                description:
                    "a man in a blue shirt sits in front of shelves of donuts"
            },
            {
                timestamp: 11,
                description:
                    "a black and white photo of people working in a bakery"
            },
            {
                timestamp: 12,
                description:
                    "a black and white photo of a man and woman working in a bakery"
            }
        ]; */

        // instantiate the data structure for storing the scene description and timestamp
        // e.g. an array of json objects,
        // [{ timestamp: 5, description: "..." }, ...]
        let scenes = [];

        // for each timestamp, send the image to Vertex AI
        console.log("getting Vertex AI description for each timestamps");
        scenes = await Promise.all(
            timestamps.map(async (timestamp) => {
                let filepath = path.join(
                    "./output",
                    imageBaseName + "-" + timestamp + ".png"
                );

                // get the base64 encoded image because we're sending it via REST
                const encodedFile = await fs.readFile(filepath, "base64");

                // send each screenshot to Vertex AI for description
                let description = await getImageCaption(
                    accessToken,
                    encodedFile
                );

                return { timestamp: timestamp, description: description };
            })
        );

        console.log("finished collecting all the scenes");
        console.log(scenes);
    } catch (error) {
        // log any error
        console.error("received error: ", error);
    }
};

// Start script
main().catch((err) => {
    console.error(err);
});

Next, create the Dockerfile:

# Copyright 2020 Google, LLC.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# Use the official lightweight Node.js image.
# https://hub.docker.com/_/node
FROM node:20.10.0-slim

# Create and change to the app directory.
WORKDIR /usr/src/app

RUN apt-get update && apt-get install -y ffmpeg

# Copy application dependency manifests to the container image.
# A wildcard is used to ensure both package.json AND package-lock.json are copied.
# Copying this separately prevents re-running npm install on every code change.
COPY package*.json ./

# Install dependencies.
# If you add a package-lock.json, speed your build by switching to 'npm ci'.
# RUN npm ci --only=production
RUN npm install --production

# Copy local code to the container image.
COPY . .

# Run the job on container startup.
CMD [ "npm", "start" ]

Next, create a file named .dockerignore to exclude certain files from being containerized.

Dockerfile
.dockerignore
node_modules
npm-debug.log
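
[Optional] Once all of the source files in this section have been created, and if you have Docker available, you can sanity-check that the image builds by running a local build from the video-describer-job root directory. This is only a local verification; deploying from source later in this codelab uses Cloud Build and does not require it.

docker build -t video-describer-job .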

Next, create a folder named helpers. This folder will contain 5 helper files.

mkdir helpers
cd helpers

Then, create a sceneDetector.js file with the following content. This file uses the Video Intelligence API to detect scene changes in the video.

const fs = require("fs");
const util = require("util");
const readFile = util.promisify(fs.readFile);
const ffmpeg = require("fluent-ffmpeg");

const Video = require("@google-cloud/video-intelligence");
const client = new Video.VideoIntelligenceServiceClient();

module.exports = {
    detectSceneChanges: async function (downloadedFile) {
        // Reads a local video file and converts it to base64
        const file = await readFile(downloadedFile);
        const inputContent = file.toString("base64");

        // setup request for shot change detection
        const videoContext = {
            speechTranscriptionConfig: {
                languageCode: "en-US",
                enableAutomaticPunctuation: true
            }
        };

        const request = {
            inputContent: inputContent,
            features: ["SHOT_CHANGE_DETECTION"]
        };

        // Detects camera shot changes
        const [operation] = await client.annotateVideo(request);
        console.log("Shot (scene) detection in progress...");
        const [operationResult] = await operation.promise();

        // Gets shot changes
        const shotChanges =
            operationResult.annotationResults[0].shotAnnotations;

        console.log(
            "Shot (scene) changes detected: " + shotChanges.length
        );

        // data structure to be returned
        let sceneChanges = [];

        // for the initial scene
        sceneChanges.push(1);

        // if only one scene, keep at 1 second
        if (shotChanges.length === 1) {
            return sceneChanges;
        }

        // get length of video
        const videoLength = await getVideoLength(downloadedFile);

        shotChanges.forEach((shot, shotIndex) => {
            if (shot.endTimeOffset === undefined) {
                shot.endTimeOffset = {};
            }
            if (shot.endTimeOffset.seconds === undefined) {
                shot.endTimeOffset.seconds = 0;
            }
            if (shot.endTimeOffset.nanos === undefined) {
                shot.endTimeOffset.nanos = 0;
            }

            // convert to a number
            let currentTimestampSecond = Number(
                shot.endTimeOffset.seconds
            );

            let sceneChangeTime = 0;
            // double-check no scenes were detected within the last second
            if (currentTimestampSecond + 1 > videoLength) {
                sceneChangeTime = currentTimestampSecond;
            } else {
                // otherwise, for simplicity, just round up to the next second
                sceneChangeTime = currentTimestampSecond + 1;
            }

            sceneChanges.push(sceneChangeTime);
        });

        return sceneChanges;
    }
};

async function getVideoLength(localFile) {
    let getLength = util.promisify(ffmpeg.ffprobe);
    let length = await getLength(localFile);

    console.log("video length: ", length.format.duration);
    return length.format.duration;
}

Now create a file named imageCapture.js with the following content. This file uses the node package fluent-ffmpeg to run ffmpeg commands from within the node application.

const ffmpeg = require("fluent-ffmpeg");
const path = require("path");
const util = require("util");

module.exports = {
    captureImages: async function (localFile, scenes) {
        let imageBaseName = path.parse(localFile).name;

        try {
            for (const scene of scenes) {
                console.log("creating screenshot for scene: ", +scene);
                await createScreenshot(localFile, imageBaseName, scene);
            }
        } catch (error) {
            console.log("error gathering screenshots: ", error);
        }

        console.log("finished gathering the screenshots");
        return imageBaseName; // return the base filename for each image
    }
};

async function createScreenshot(localFile, imageBaseName, scene) {
    return new Promise((resolve, reject) => {
        ffmpeg(localFile)
            .screenshots({
                timestamps: [scene],
                filename: `${imageBaseName}-${scene}.png`,
                folder: "output",
                size: "320x240"
            })
            .on("error", () => {
                console.log(
                    "Failed to create scene for timestamp: " + scene
                );
                return reject(
                    "Failed to create scene for timestamp: " + scene
                );
            })
            .on("end", () => {
                return resolve();
            });
    });
}

Next, create a file named imageCaptioning.js with the following content. This file uses Vertex AI to get a visual description of each scene image.

const axios = require("axios");
const { GoogleAuth } = require("google-auth-library");

const auth = new GoogleAuth({
    scopes: "https://www.googleapis.com/auth/cloud-platform"
});

module.exports = {
    getImageCaption: async function (token, encodedFile) {
        // this example shows you how to call the Vertex REST APIs directly
        // https://cloud.google.com/vertex-ai/generative-ai/docs/image/image-captioning#get-captions-short
        // https://cloud.google.com/vertex-ai/generative-ai/docs/model-reference/image-captioning

        let projectId = await auth.getProjectId();

        let config = {
            headers: {
                "Authorization": "Bearer " + token,
                "Content-Type": "application/json; charset=utf-8"
            }
        };

        const json = {
            "instances": [
                {
                    "image": {
                        "bytesBase64Encoded": encodedFile
                    }
                }
            ],
            "parameters": {
                "sampleCount": 1,
                "language": "en"
            }
        };

        let response = await axios.post(
            "https://us-central1-aiplatform.googleapis.com/v1/projects/" +
                projectId +
                "/locations/us-central1/publishers/google/models/imagetext:predict",
            json,
            config
        );

        return response.data.predictions[0];
    }
};

Create a file named auth.js. This file uses the Google Auth client library to get the access token needed to call the Vertex AI endpoints directly.

const { GoogleAuth } = require("google-auth-library");

const auth = new GoogleAuth({
    scopes: "https://www.googleapis.com/auth/cloud-platform"
});

module.exports = {
    getAccessToken: async function () {
        return await auth.getAccessToken();
    }
};

Finally, create a file named storage.js. This file uses the Cloud Storage client library to download the video from Cloud Storage.

const { Storage } = require("@google-cloud/storage");

module.exports = {
    downloadVideoFile: async function (bucketName, videoFilename) {
        // Creates a client
        const storage = new Storage();

        // keep same name locally
        let localFilename = videoFilename;

        const options = {
            destination: localFilename
        };

        // Download the file
        await storage
            .bucket(bucketName)
            .file(videoFilename)
            .download(options);

        console.log(
            `gs://${bucketName}/${videoFilename} downloaded locally to ${localFilename}.`
        );

        return localFilename;
    }
};

7. Deploy and run the Cloud Run job

First, make sure you are in the root directory of the codelab, video-describer-job.

cd .. && pwd

Then, you can use this command to deploy the Cloud Run job.

gcloud run jobs deploy $JOB_NAME  --source . --region $REGION
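
Note that the command above runs the job with the project's default compute service account. To run the job as the dedicated service account created in step 4, matching the least-privilege setup, one option is to pass the --service-account flag; this variant is a suggestion rather than part of the original command:

gcloud run jobs deploy $JOB_NAME \
  --source . \
  --region $REGION \
  --service-account $SERVICE_ACCOUNT_ADDRESS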

Next, you can run the Cloud Run job by executing the following command:

gcloud run jobs execute $JOB_NAME

Once the job has finished, you can run the following command to get a link to the logs URI. You can also use the Cloud Console and navigate directly to Cloud Run jobs to view the logs.

gcloud run jobs executions describe <JOB_EXECUTION_ID>
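
Alternatively, you can read the job's logs directly from Cloud Shell with Cloud Logging; the filter below assumes the default Cloud Run job log resource labels:

gcloud logging read "resource.type=\"cloud_run_job\" AND resource.labels.job_name=\"$JOB_NAME\"" --limit=50 --format="value(textPayload)"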

You should see output similar to the following in the logs:

[{ timestamp: 1, description: 'what is google cloud vision api ? is written on a white background .'},
{ timestamp: 3, description: 'a woman wearing a google cloud vision api shirt sits at a table'},
{ timestamp: 18, description: 'a person holding a cell phone with the words what is cloud vision api on the bottom' }, ...]

8. Congratulations!

Congratulations on completing the codelab!

We recommend reviewing the documentation for the Video Intelligence API, Cloud Run, and Vertex AI visual captioning.

What we've covered

  • How to create a container image with a Dockerfile to install a third-party binary
  • How to follow the principle of least privilege by creating a service account for the Cloud Run job to call other Google Cloud services
  • How to use the Video Intelligence client library from a Cloud Run job
  • How to call Google APIs to get the visual description of each scene from Vertex AI

9. Clean up

To avoid incurring unintended charges (for example, if this Cloud Run job is inadvertently invoked more times than your monthly Cloud Run invocation allocation in the free tier), you can either delete the Cloud Run job or delete the project you created in step 2.

To delete the Cloud Run job, go to the Cloud Run console at https://console.cloud.google.com/run/ and delete the video-describer-job job (or $JOB_NAME, if you used a different name).

If you choose to delete the entire project, you can go to https://console.cloud.google.com/cloud-resource-manager, select the project you created in step 2, and choose Delete. If you delete the project, you will need to change projects in your Cloud SDK. You can view the list of all available projects by running gcloud projects list.