Build a per-scene video description service with Cloud Run, the Video Intelligence API, and Vertex AI

1. Introduction

Overview

In this codelab, you'll create a Cloud Run service written in Node.js that provides a visual description of every scene in a video. First, your service uses the Video Intelligence API to detect the timestamps at which the scene changes. Next, your service uses a third-party binary called ffmpeg to capture a screenshot at each scene change timestamp. Finally, Vertex AI visual captioning is used to provide a visual description of each screenshot.
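
At a high level, each request flows through those three stages in order, as in the following sketch (the function names here are illustrative placeholders, not the actual implementation, which is built step by step later in this codelab):

// illustrative outline only; these helper names are hypothetical
async function describeVideo(videoFilename) {
  const localFile = await downloadFromCloudStorage(videoFilename); // Cloud Storage
  const timestamps = await detectSceneChanges(localFile);          // Video Intelligence API
  await captureScreenshots(localFile, timestamps);                 // ffmpeg
  return await describeScreenshots(timestamps);                    // Vertex AI visual captioning
}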

This codelab also demonstrates how to use ffmpeg within your Cloud Run service to capture images from a video at a given timestamp. Because ffmpeg must be installed independently, this codelab shows how to create a Dockerfile that installs ffmpeg as part of your Cloud Run service.

The following diagram illustrates how the Cloud Run service works:

Diagram of the Cloud Run video description service

What you'll learn

  • How to create a container image using a Dockerfile that installs a third-party binary
  • How to follow the principle of least privilege by creating a service account for the Cloud Run service to call other Google Cloud services
  • How to use the Video Intelligence client library from a Cloud Run service
  • How to call Google APIs to get the visual description of each scene from Vertex AI

2. Setup and requirements

Prerequisites

Activate Cloud Shell

  1. From the Cloud console, click Activate Cloud Shell.

If this is your first time starting Cloud Shell, you're presented with an intermediate screen describing what it is. If you were presented with an intermediate screen, click Continue.

It should only take a few moments to provision and connect to Cloud Shell.

This virtual machine is loaded with all the development tools you need. It offers a persistent 5 GB home directory and runs in Google Cloud, greatly enhancing network performance and authentication. Much, if not all, of your work in this codelab can be done from a browser.

Once connected to Cloud Shell, you should see that you are authenticated and that the project is set to your project ID.

  2. Run the following command in Cloud Shell to confirm that you are authenticated:
gcloud auth list

Command output

 Credentialed Accounts
ACTIVE  ACCOUNT
*       <my_account>@<my_domain.com>

To set the active account, run:
    $ gcloud config set account `ACCOUNT`

  3. Run the following command in Cloud Shell to confirm that the gcloud command knows about your project:
gcloud config list project

Command output

[core]
project = <PROJECT_ID>

If it is not set, you can set it with this command:

gcloud config set project <PROJECT_ID>

Command output

Updated property [core/project].

3. Enable APIs and set environment variables

Before you can start using this codelab, you need to enable several APIs. This codelab requires the following APIs, which you can enable by running this command:

gcloud services enable run.googleapis.com \
    storage.googleapis.com \
    cloudbuild.googleapis.com \
    videointelligence.googleapis.com \
    aiplatform.googleapis.com

Next, set environment variables that will be used throughout this codelab:

REGION=<YOUR-REGION>
PROJECT_ID=<YOUR-PROJECT-ID>

PROJECT_NUMBER=$(gcloud projects describe $PROJECT_ID --format='value(projectNumber)')
SERVICE_NAME=video-describer
export BUCKET_ID=$PROJECT_ID-video-describer

4. Create a Cloud Storage bucket

Create a Cloud Storage bucket where you can upload videos for processing by the Cloud Run service, using the following command:

gsutil mb -l us-central1 gs://$BUCKET_ID/

[Optional] You can use this sample video by downloading it to your local machine:

gsutil cp gs://cloud-samples-data/video/visionapi.mp4 testvideo.mp4

Now upload your video file to your storage bucket. If you downloaded the sample video above, set FILENAME to testvideo.mp4:

FILENAME=<YOUR-VIDEO-FILENAME>
gsutil cp $FILENAME gs://$BUCKET_ID

5. Create the Node.js application

First, create a directory for the source code and cd into that directory:

mkdir video-describer && cd $_

Next, create a package.json file with the following content:

{
  "name": "video-describer",
  "version": "1.0.0",
  "private": true,
  "description": "describes the image in every scene for a given video",
  "main": "index.js",
  "author": "Google LLC",
  "license": "Apache-2.0",
  "scripts": {
    "start": "node index.js"
  },
  "dependencies": {
    "@google-cloud/storage": "^7.7.0",
    "@google-cloud/video-intelligence": "^5.0.1",
    "axios": "^1.6.2",
    "express": "^4.18.2",
    "fluent-ffmpeg": "^2.1.2",
    "google-auth-library": "^9.4.1"
  }
}
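
Note that fluent-ffmpeg is only a wrapper around the ffmpeg binary; it does not install ffmpeg itself, which is why the Dockerfile later in this codelab installs the binary with apt-get. If ffmpeg were ever installed outside the PATH, fluent-ffmpeg could be pointed at it explicitly (the path below is just an example):

// only needed if ffmpeg is not on the PATH; the path shown is an example
const ffmpeg = require('fluent-ffmpeg');
ffmpeg.setFfmpegPath('/usr/bin/ffmpeg');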

To improve readability, this application is organized into several source files. First, create an index.js source file with the content below. This file contains the entry point for the service and the main logic of the application.

const { captureImages } = require('./imageCapture.js');
const { detectSceneChanges } = require('./sceneDetector.js');
const transcribeScene = require('./imageDescriber.js');
const { Storage } = require('@google-cloud/storage');
const fs = require('fs').promises;
const path = require('path');
const express = require('express');
const app = express();

const bucketName = process.env.BUCKET_ID;

const port = parseInt(process.env.PORT) || 8080;
app.listen(port, () => {
  console.log(`video describer service ready: listening on port ${port}`);
});

// entry point for the service
app.get('/', async (req, res) => {

  try {

    // the name of the video to process, passed as a query string parameter
    let videoFilename = req.query.filename;
    console.log("processing file: " + videoFilename);

    // download the file from Cloud Storage to the local Cloud Run instance
    let localFilename = await downloadVideoFile(videoFilename);

    // detect all the scenes in the video & save timestamps to an array
    let timestamps = await detectSceneChanges(localFilename);
    console.log("Detected scene changes at the following timestamps: ", timestamps);

    // create an image of each scene change
    // and save to a local directory called "output"
    await captureImages(localFilename, timestamps);

    // get an access token for the Service Account to call the Google APIs 
    let accessToken = await transcribeScene.getAccessToken();
    console.log("got an access token");

    let imageBaseName = path.parse(localFilename).name;

    // the data structure for storing the scene description and timestamp
    // e.g. an array of json objects {timestamp: 1, description: "..."}, etc.    
    let scenes = [];

    // for each timestamp, send the image to Vertex AI
    console.log("getting Vertex AI descriptions for all the timestamps");
    scenes = await Promise.all(
      timestamps.map(async (timestamp) => {

        let filepath = path.join("./output", imageBaseName + "-" + timestamp + ".png");

        // get the base64 encoded image
        const encodedFile = await fs.readFile(filepath, 'base64');

        // send each screenshot to Vertex AI for description
        let description = await transcribeScene.transcribeScene(accessToken, encodedFile);

        return { timestamp: timestamp, description: description };
      }));

    console.log("finished collecting all the scenes");
    //console.log(scenes);

    return res.json(scenes);

  } catch (error) {

    //return an error
    console.log("received error: ", error);
    return res.status(500).json("an internal error occurred");
  }

});

async function downloadVideoFile(videoFilename) {
  // Creates a client
  const storage = new Storage();

  // keep same name locally
  let localFilename = videoFilename;

  const options = {
    destination: localFilename
  };

  // Download the file
  await storage.bucket(bucketName).file(videoFilename).download(options);

  console.log(
    `gs://${bucketName}/${videoFilename} downloaded locally to ${localFilename}.`
  );

  return localFilename;
}
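
If you want a quick sanity check before deploying, you can run the service locally with BUCKET_ID set (for example, BUCKET_ID=$BUCKET_ID npm start), provided ffmpeg is installed on your machine and your application default credentials can reach Cloud Storage, Video Intelligence, and Vertex AI. A minimal smoke test using Node 20's built-in fetch (run as an ES module or in the Node REPL) might look like this:

// hypothetical local smoke test: assumes the service is listening on
// localhost:8080 and that testvideo.mp4 was uploaded to your bucket
const res = await fetch('http://localhost:8080/?filename=testvideo.mp4');
const scenes = await res.json();
console.log(scenes); // e.g. [ { timestamp: 1, description: '...' }, ... ]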

Next, create a sceneDetector.js file with the following content. This file uses the Video Intelligence API to detect the scene changes in the video.

const fs = require('fs');
const util = require('util');
const readFile = util.promisify(fs.readFile);
const ffmpeg = require('fluent-ffmpeg');

const Video = require('@google-cloud/video-intelligence');
const client = new Video.VideoIntelligenceServiceClient();

module.exports = {
    detectSceneChanges: async function (downloadedFile) {

        // Reads a local video file and converts it to base64       
        const file = await readFile(downloadedFile);
        const inputContent = file.toString('base64');

        // set up the request for shot change detection
        const request = {
            inputContent: inputContent,
            features: ['SHOT_CHANGE_DETECTION'],
        };

        // Detects camera shot changes
        const [operation] = await client.annotateVideo(request);
        console.log('Shot (scene) detection in progress...');
        const [operationResult] = await operation.promise();

        // Gets shot changes
        const shotChanges = operationResult.annotationResults[0].shotAnnotations;

        console.log("Shot (scene) changes detected: " + shotChanges.length);

        // data structure to be returned 
        let sceneChanges = [];

        // for the initial scene
        sceneChanges.push(1);

        // if only one scene, keep at 1 second
        if (shotChanges.length === 1) {
            return sceneChanges;
        }

        // get length of video
        const videoLength = await getVideoLength(downloadedFile);

        shotChanges.forEach((shot, shotIndex) => {
            if (shot.endTimeOffset === undefined) {
                shot.endTimeOffset = {};
            }
            if (shot.endTimeOffset.seconds === undefined) {
                shot.endTimeOffset.seconds = 0;
            }
            if (shot.endTimeOffset.nanos === undefined) {
                shot.endTimeOffset.nanos = 0;
            }

            // convert to a number
            let currentTimestampSecond = Number(shot.endTimeOffset.seconds);                  

            let sceneChangeTime = 0;
            // double-check no scenes were detected within the last second
            if (currentTimestampSecond + 1 > videoLength) {
                sceneChangeTime = currentTimestampSecond;                
            } else {
                // otherwise, for simplicity, just round up to the next second 
                sceneChangeTime = currentTimestampSecond + 1;
            }

            sceneChanges.push(sceneChangeTime);
        });

        return sceneChanges;
    }
}

async function getVideoLength(localFile) {
    let getLength = util.promisify(ffmpeg.ffprobe);
    let length = await getLength(localFile);

    console.log("video length: ", length.format.duration);
    return length.format.duration;
}
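
The return value is a plain array of whole-second timestamps, always starting at second 1 for the opening scene. As a minimal sketch of calling it directly (the values shown come from the suggested sample video, so yours may differ):

const { detectSceneChanges } = require('./sceneDetector.js');

// prints something like [ 1, 7, 11, 12 ] for the suggested sample video
detectSceneChanges('testvideo.mp4').then((timestamps) => {
    console.log(timestamps);
});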

Now, create an imageCapture.js file with the following content. This file uses the node package fluent-ffmpeg to run ffmpeg commands from within the Node application.

const ffmpeg = require('fluent-ffmpeg');
const path = require('path');

module.exports = {
    captureImages: async function (localFile, scenes) {

        let imageBaseName = path.parse(localFile).name;

        try {
            for (const scene of scenes) {
                console.log("creating screenshot for scene: " + scene);
                await createScreenshot(localFile, imageBaseName, scene);
            }
        } catch (error) {
            console.log("error gathering screenshots: ", error);
        }

        console.log("finished gathering the screenshots");
    }
}

async function createScreenshot(localFile, imageBaseName, scene) {
    return new Promise((resolve, reject) => {
        ffmpeg(localFile)
            .screenshots({
                timestamps: [scene],
                filename: `${imageBaseName}-${scene}.png`,
                folder: 'output',
                size: '320x240'
            }).on("error", () => {
                console.log("Failed to create screenshot for timestamp: " + scene);
                return reject('Failed to create screenshot for timestamp: ' + scene);
            })
            .on("end", () => {
                return resolve();
            });
    })
}
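
Each call to captureImages leaves one 320x240 PNG per timestamp in a local directory named output, using the <video-basename>-<timestamp>.png naming scheme that index.js relies on when it reads the screenshots back. For example (inside an async function):

const { captureImages } = require('./imageCapture.js');

// writes output/testvideo-1.png and output/testvideo-7.png
await captureImages('testvideo.mp4', [1, 7]);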

Lastly, create a file called imageDescriber.js with the following content. This file uses Vertex AI to get a visual description of each scene image.

const axios = require("axios");
const { GoogleAuth } = require('google-auth-library');

const auth = new GoogleAuth({
    scopes: 'https://www.googleapis.com/auth/cloud-platform'
});

module.exports = {
    getAccessToken: async function () {

        return await auth.getAccessToken();
    }, 

    transcribeScene: async function(token, encodedFile) {

        let projectId = await auth.getProjectId();
    
        let config = {
            headers: {
                'Authorization': 'Bearer ' + token,
                'Content-Type': 'application/json; charset=utf-8'
            }
        }

        const json = {
            "instances": [
                {
                    "image": {
                        "bytesBase64Encoded": encodedFile
                    }
                }
            ],
            "parameters": {
                "sampleCount": 1,
                "language": "en"
            }
        }

        // call the Vertex AI imagetext model to caption the screenshot
        let response = await axios.post('https://us-central1-aiplatform.googleapis.com/v1/projects/' + projectId + '/locations/us-central1/publishers/google/models/imagetext:predict', json, config);

        return response.data.predictions[0];
    }
}
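
With sampleCount set to 1, the imagetext model returns a single caption, so transcribeScene resolves to a plain string taken from the response's predictions array. The response body has roughly this shape (the caption text is illustrative, borrowed from the sample output later in this codelab):

// approximate shape of the imagetext:predict response body
const exampleResponse = {
    predictions: [
        "an aerial view of a city with a bridge in the background"
    ]
};
// transcribeScene returns exampleResponse.predictions[0]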

Create the Dockerfile and the .dockerignore file

Because this service uses ffmpeg, you need to create a Dockerfile that installs ffmpeg.

Create a file called Dockerfile with the following content:

# Copyright 2020 Google, LLC.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# Use the official lightweight Node.js image.
# https://hub.docker.com/_/node
FROM node:20.10.0-slim

# Create and change to the app directory.
WORKDIR /usr/src/app

RUN apt-get update && apt-get install -y ffmpeg

# Copy application dependency manifests to the container image.
# A wildcard is used to ensure both package.json AND package-lock.json are copied.
# Copying this separately prevents re-running npm install on every code change.
COPY package*.json ./

# Install dependencies.
# If you add a package-lock.json, you can speed up your build by switching to 'npm ci'.
# RUN npm ci --only=production
RUN npm install --production

# Copy local code to the container image.
COPY . .

# Run the web service on container startup.
CMD [ "npm", "start" ]

And create a .dockerignore file so that certain files are not included in the container image:

Dockerfile
.dockerignore
node_modules
npm-debug.log

6. Create a service account

You will create a service account for the Cloud Run service to use to access Cloud Storage, Vertex AI, and the Video Intelligence API.

SERVICE_ACCOUNT="cloud-run-video-description"
SERVICE_ACCOUNT_ADDRESS=$SERVICE_ACCOUNT@$PROJECT_ID.iam.gserviceaccount.com

gcloud iam service-accounts create $SERVICE_ACCOUNT \
  --display-name="Cloud Run Video Scene Image Describer service account"
 
# to view & download storage bucket objects
gcloud projects add-iam-policy-binding $PROJECT_ID \
  --member serviceAccount:$SERVICE_ACCOUNT_ADDRESS \
  --role=roles/storage.objectViewer

# to call the Vertex AI imagetext model
gcloud projects add-iam-policy-binding $PROJECT_ID \
  --member serviceAccount:$SERVICE_ACCOUNT_ADDRESS \
  --role=roles/aiplatform.user

7. Deploy the Cloud Run service

You can now use a source-based deployment to automatically containerize your Cloud Run service.

Note: This codelab deploys the service with a 5 minute request timeout, since the suggested test video is 2 minutes long. If you use a longer video, you may need to increase the timeout.

gcloud run deploy $SERVICE_NAME \
  --region=$REGION \
  --set-env-vars BUCKET_ID=$BUCKET_ID \
  --no-allow-unauthenticated \
  --service-account $SERVICE_ACCOUNT_ADDRESS \
  --timeout=5m \
  --source=.

Once deployed, save the service URL in an environment variable:

SERVICE_URL=$(gcloud run services describe $SERVICE_NAME --platform managed --region $REGION --format 'value(status.url)')

8. Call the Cloud Run service

Now you can call your service by providing the name of the video you uploaded to Cloud Storage:

curl -X GET -H "Authorization: Bearer $(gcloud auth print-identity-token)" "${SERVICE_URL}?filename=${FILENAME}"

Your results should look similar to the example output shown below:

[{"timestamp":1,"description":"an aerial view of a city with a bridge in the background"},{"timestamp":7,"description":"a man in a blue shirt sits in front of shelves of donuts"},{"timestamp":11,"description":"a black and white photo of people working in a bakery"},{"timestamp":12,"description":"a black and white photo of a man and woman working in a bakery"}]

9. Congratulations!

Congratulations on completing the codelab!

We recommend reviewing the documentation on the Video Intelligence API, Cloud Run, and Vertex AI visual captioning.

What we've covered

  • How to create a container image using a Dockerfile that installs a third-party binary
  • How to follow the principle of least privilege by creating a service account for the Cloud Run service to call other Google Cloud services
  • How to use the Video Intelligence client library from a Cloud Run service
  • How to call Google APIs to get the visual description of each scene from Vertex AI

10. Clean up

To avoid inadvertent charges (for example, if this Cloud Run service is accidentally invoked more times than your monthly Cloud Run invocation allotment in the free tier), you can either delete the Cloud Run service or delete the project you created in step 2.

To delete the Cloud Run service, go to the Cloud Run console at https://console.cloud.google.com/run/ and delete the video-describer service (or $SERVICE_NAME, if you used a different name).

If you choose to delete the entire project, you can go to https://console.cloud.google.com/cloud-resource-manager, select the project you created in step 2, and choose Delete. If you delete the project, you'll need to change projects in your Cloud SDK. You can view the list of all available projects by running gcloud projects list.