Vertex AI Vision Traffic Monitoring App

1. Objectives

Overview

This codelab focuses on creating an end-to-end Vertex AI Vision application to monitor real-time traffic video footage. We will use the built-in features of the pretrained Occupancy Analytics specialized model to:

  • Count the number of vehicles and people crossing a line on the road.
  • Count the number of vehicles/people in any fixed region of the road.
  • Detect congestion in any part of the road.

What you'll learn

  • How to set up a VM to ingest videos for streaming
  • How to create an application in Vertex AI Vision
  • Different features available in Occupancy Analytics and how to use them
  • How to deploy the app
  • How to search for videos stored in Vertex AI Vision's Media Warehouse.
  • How to connect the output to BigQuery, write SQL queries to extract insights from the model's JSON output, and visualize the results in Looker Studio in real time.

2. Before You Begin

  1. In the Google Cloud console, on the project selector page, select or create a Google Cloud project. Note: If you don't plan to keep the resources that you create in this procedure, create a project instead of selecting an existing project. After you finish these steps, you can delete the project, removing all resources associated with the project. Go to project selector
  2. Make sure that billing is enabled for your Cloud project. Learn how to check if billing is enabled on a project.
  3. Enable the Compute Engine and Vision AI APIs. Enable the APIs

Create a service account:

  1. In the Google Cloud console, go to the Create service account page. Go to Create service account
  2. Select your project.
  3. In the Service account name field, enter a name. The Google Cloud console fills in the Service account ID field based on this name. In the Service account description field, enter a description. For example, Service account for quickstart.
  4. Click Create and continue.
  5. To provide access to your project, grant the following roles to your service account: Vision AI > Vision AI Editor, Compute Engine > Compute Instance Admin (beta), Storage > Storage Object Viewer†. In the Select a role list, select a role. For additional roles, click Add another role and add each additional role. Note: The Role field affects which resources your service account can access in your project. You can revoke these roles or grant additional roles later. In production environments, do not grant the Owner, Editor, or Viewer roles. Instead, grant a predefined role or custom role that meets your needs.
  6. Click Continue.
  7. Click Done to finish creating the service account. Do not close your browser window. You will use it in the next step.

Create a service account key:

  1. In the Google Cloud console, click the email address for the service account that you created.
  2. Click Keys.
  3. Click Add key, and then click Create new key.
  4. Click Create. A JSON key file is downloaded to your computer.
  5. Click Close.
  6. Install and initialize the Google Cloud CLI.

† Role only needed if you copy a sample video file from a Cloud Storage bucket.
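The key-creation steps can likewise be done from the CLI. This sketch assumes the service account email from the previous section:

```shell
# Replace with the email address of the service account you created above.
SA_EMAIL=quickstart-sa@my-project-id.iam.gserviceaccount.com

# Create and download a JSON key, equivalent to steps 2-4 above.
gcloud iam service-accounts keys create key.json \
    --iam-account="$SA_EMAIL"
```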

3. Set up a VM to stream video

Before creating an Occupancy Analytics app, you must register a stream that the app can later use.

In this tutorial you create a Compute Engine VM instance that hosts a video, and you send that streaming video data from the VM.

Create a Linux VM

The first step in sending video from a Compute Engine VM instance is creating the VM instance.

  1. In the console, go to the VM instances page. Go to VM instances
  2. Select your project and click Continue.
  3. Click Create instance.
  4. Specify a Name for your VM. For more information, see Resource naming convention.
  5. Optional: Change the Zone for this VM. Compute Engine randomizes the list of zones within each region to encourage use across multiple zones.
  6. Accept the remaining default options. For more information about these options, see Create and start a VM.
  7. To create and start the VM, click Create.
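Equivalently, the VM can be created from the CLI. The instance name and zone below are assumptions; any zone works for this tutorial:

```shell
# Hypothetical instance name and zone -- adjust as needed.
gcloud compute instances create traffic-ingest-vm \
    --zone=us-central1-a
```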

Set up the VM environment

After the VM has started, you can use the console to SSH into the VM from your browser. Then you can download the vaictl command-line tool to ingest video into your stream.

Establish an SSH connection to your VM

  1. In the console, go to the VM instances page. Go to VM instances
  2. In the Connect section of the line for the instance you created, click SSH. This opens an SSH connection in a new browser window.

Download the vaictl command-line tool

  1. In the SSH-in-browser window, download the Vertex AI Vision (vaictl) command-line tool using the following command:
wget https://github.com/google/visionai/releases/download/v0.0.4/visionai_0.0-4_amd64.deb
  2. Install the command-line tool by running the following command:
sudo apt install ./visionai_0.0-4_amd64.deb
  3. Test the installation by running the following command:
vaictl --help

4. Ingest a video file for streaming

After you set up your VM environment, you can copy a sample video file and then use vaictl to stream the video data to your occupancy analytics app.

Begin by activating the Vision AI API in the Cloud Console

Register a new stream

  1. Click the Streams tab in the left panel of Vertex AI Vision.
  2. Click Register.
  3. In Stream name, enter traffic-stream.
  4. In Region, enter us-central1.
  5. Click Register.

The stream takes a couple of minutes to register.
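As an alternative to the console, vaictl can also register the stream. This is a sketch based on the vaictl invocation style used later in this codelab; treat the cluster name and endpoint as assumptions and check vaictl --help before relying on it:

```shell
# Register the stream from the CLI (replace PROJECT_ID with yours).
vaictl -p PROJECT_ID \
    -l us-central1 \
    -c application-cluster-0 \
    --service-endpoint visionai.googleapis.com \
    create streams traffic-stream
```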

Copy a sample video to your VM

  1. In the SSH-in-browser window for your VM, copy a sample video with the following gsutil cp command. Replace the following variable:
  • SOURCE: The location of the video file to use. You can use your own video file (for example, gs://BUCKET_NAME/FILENAME.mp4), or use the sample video gs://cloud-samples-data/vertex-ai-vision/street_vehicles_people.mp4 (a video with people and vehicles).
export SOURCE=gs://cloud-samples-data/vertex-ai-vision/street_vehicles_people.mp4
gsutil cp $SOURCE .

Stream video from VM and ingest data into your stream

  1. To send this local video file to the app input stream, use the following command. You must make the following variable substitutions:
  • PROJECT_ID: Your Google Cloud project ID.
  • LOCATION_ID: Your location ID. For example, us-central1. For more information, see Cloud locations.
  • LOCAL_FILE: The filename of a local video file. For example, street_vehicles_people.mp4.
  • --loop flag: Optional. Loops the file data to simulate streaming.
export PROJECT_ID=<Your Google Cloud project ID>
export LOCATION_ID=us-central1
export LOCAL_FILE=street_vehicles_people.mp4
  2. This command streams a video file to a stream. If you use the --loop flag, the video is looped into the stream until you stop the command. Run the command as a background job so that it keeps streaming even after the VM disconnects.
  • (Add nohup at the beginning and & at the end to make it a background job.)
nohup vaictl -p $PROJECT_ID \
    -l $LOCATION_ID \
    -c application-cluster-0 \
    --service-endpoint visionai.googleapis.com \
send video-file to streams 'traffic-stream' --file-path $LOCAL_FILE --loop &

It might take ~100 seconds between starting the vaictl ingest operation and the video appearing in the dashboard.
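The nohup + & pattern above is what keeps vaictl streaming after your SSH session closes. The mechanics can be sketched with a stand-in command (sleep, standing in for the long-running vaictl process) so you can try the pattern without cloud access:

```shell
# Demonstrate the nohup background-job pattern with a stand-in command.
nohup sleep 30 >/dev/null 2>&1 &
BG_PID=$!

# kill -0 checks liveness without sending a signal; it succeeds while
# the background job is still running.
if kill -0 "$BG_PID"; then
  echo "background job running"   # prints "background job running"
fi

# Clean up the demo job.
kill "$BG_PID"
wait "$BG_PID" 2>/dev/null || true
```

To check on the real vaictl job later, reconnect to the VM and run a liveness check against its process in the same way.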

After the stream ingestion is available, you can see the video feed in the Streams tab of the Vertex AI Vision dashboard by selecting the traffic-stream stream.

Go to the Streams tab

Live view of video being ingested into the stream in the Google Cloud console. Video credit: Elizabeth Mavor on Pixabay (pixelation added).

5. Create an application

The first step is to create an app that processes your data. An app can be thought of as an automated pipeline that connects the following:

  • Data ingestion: A video feed is ingested into a stream.
  • Data analysis: An AI (computer vision) model can be added after ingestion.
  • Data storage: The two versions of the video feed (the original stream and the stream processed by the AI model) can be stored in a media warehouse.

In the Google Cloud console an app is represented as a graph.

Create an empty app

Before you can populate the app graph, you must first create an empty app.

Create an app in the Google Cloud console.

  1. Go to Google Cloud console.
  2. Open the Applications tab of the Vertex AI Vision dashboard.

Go to the Applications tab

  1. Click the Create button.
  2. Enter traffic-app as the app name and choose your region.
  3. Click Create.

Add app component nodes

After you have created the empty application, you can then add the three nodes to the app graph:

  1. Ingestion node: The stream resource that ingests data sent from a Compute Engine VM instance you create.
  2. Processing node: The occupancy analytics model that acts on ingested data.
  3. Storage node: The media warehouse that stores processed videos and serves as a metadata store. The metadata store includes analytics information about the ingested video data, as well as information inferred by the AI models.

Add component nodes to your app in the console.

  1. Open the Applications tab of the Vertex AI Vision dashboard. Go to the Applications tab
  2. In the traffic-app line, select View graph. This takes you to the graph visualization of the processing pipeline.

Add a data ingestion node

  1. To add an input stream node, select the Streams option in the Connectors section of the side menu.
  2. In the Source section of the Stream menu that opens, select Add streams.
  3. In the Add streams menu, choose Register new streams and add traffic-stream as the stream name.
  4. To add the stream to the app graph, click Add streams.

Add a data processing node

  1. To add the occupancy count model node, select the occupancy analytics option in the Specialized models section of the side menu.
  2. Leave the default selections People and Vehicles.
  3. In Line crossing, add lines. Use the multi-point line tool to draw lines where you want to detect vehicles or people entering or leaving.
  4. Draw active zones to count people/vehicles within them.
  5. If an active zone is drawn, add dwell-time settings to detect congestion.
  • (Currently, active zones and line crossing are not supported simultaneously; use only one feature at a time.)


Add a data storage node

  1. To add the output destination (storage) node, select the Vertex AI Vision's Media Warehouse option in the Connectors section of the side menu.
  2. In the Vertex AI Vision's Media Warehouse menu, click Connect warehouse.
  3. In the Connect warehouse menu, select Create new warehouse. Name the warehouse traffic-warehouse, and leave the TTL duration at 14 days.
  4. Click the Create button to add the warehouse.

6. Connect Output to BigQuery Table

When you add a BigQuery connector to your Vertex AI Vision app, all connected model outputs are ingested into the target table.

You can either create your own BigQuery table and specify that table when you add a BigQuery connector to the app, or let the Vertex AI Vision app platform automatically create the table.

Automatic table creation

If you let the Vertex AI Vision app platform create the table automatically, you can specify this option when you add the BigQuery connector node.

The following dataset and table conditions apply if you want to use automatic table creation:

  • Dataset: The automatically created dataset name is visionai_dataset.
  • Table: The automatically created table name is visionai_dataset.APPLICATION_ID.
  • Error handling: If a table with the same name already exists in the dataset, no automatic creation happens.
  1. Open the Applications tab of the Vertex AI Vision dashboard. Go to the Applications tab
  2. Select View app next to the name of your application from the list.
  3. On the application builder page select BigQuery from the Connectors section.
  4. Leave the BigQuery path field empty.
  5. Under Store metadata from, select only Occupancy Analytics and uncheck Streams.

The final app graph should look like this:

1787242465fd6da7.png

7. Deploy your app for use

After you have built your end-to-end app with all the necessary components, the last step to using the app is to deploy it.

  1. Open the Applications tab of the Vertex AI Vision dashboard. Go to the Applications tab
  2. Select View graph next to the traffic-app app in the list.
  3. From the application graph builder page, click the Deploy button.
  4. In the following confirmation dialog, select Deploy. The deploy operation might take several minutes to complete. After deployment finishes, green check marks appear next to the nodes.

8. Search video content in the storage warehouse

After you ingest video data into your processing app, you can view analyzed video data, and search the data based on occupancy analytics information.

  1. Open the Warehouses tab of the Vertex AI Vision dashboard. Go to the Warehouses tab
  2. Find the traffic-warehouse warehouse in the list, and click View assets.
  3. In the People count or Vehicle count section, set the Min value to 1, and the Max value to 5.
  4. To filter processed video data stored in Vertex AI Vision's Media Warehouse, click Search.


A view of stored video data that matches search criteria in the Google Cloud console. Video credit: Elizabeth Mavor on Pixabay (search criteria applied).

9. Analyze Output in BigQuery Table

  1. Go to BigQuery.
  2. Select the dataset visionai_dataset.
  3. Select the table named after your APPLICATION_ID (in this case, traffic-app).
  4. Click the three dots next to the table name and click Query.
  5. Write the following query.

Query 1: Vehicle count crossing each line, per minute


-- Get the list of active marked lines for each timeframe
WITH line_array AS (
  SELECT
    t.ingestion_time AS ingestion_time,
    JSON_QUERY_ARRAY(t.annotation.stats["crossingLineCounts"]) AS lines
  FROM
    `PROJ_ID.visionai_dataset.APP_ID` AS t
),
-- Flatten the active lines to get the details of individual entities
flattened AS (
  SELECT
    line_array.ingestion_time,
    JSON_VALUE(line.annotation.id) AS line_id,
    JSON_QUERY_ARRAY(line["positiveDirectionCounts"]) AS entities
  FROM line_array, UNNEST(line_array.lines) AS line
)
-- Generate the aggregate vehicle count per line over time
SELECT
  STRING(TIMESTAMP_TRUNC(ingestion_time, MINUTE)) AS time,
  line_id,
  SUM(INT64(entity["count"])) AS vehicle_count
FROM
  flattened, UNNEST(flattened.entities) AS entity
WHERE JSON_VALUE(entity['entity']['labelString']) = 'Vehicle'
GROUP BY time, line_id

Query 2: Vehicle count per minute in each zone

-- Get the list of active zones for each timeframe
WITH zone_array AS (
  SELECT
    t.ingestion_time AS ingestion_time,
    JSON_QUERY_ARRAY(t.annotation.stats["activeZoneCounts"]) AS zones
  FROM
    `PROJ_ID.visionai_dataset.APP_ID` AS t
),
-- Flatten the active zones to get the details of individual entities
flattened AS (
  SELECT
    zone_array.ingestion_time,
    JSON_VALUE(zone.annotation.id) AS zone_id,
    JSON_QUERY_ARRAY(zone["counts"]) AS entities
  FROM zone_array, UNNEST(zone_array.zones) AS zone
)
-- Generate the aggregate vehicle count per zone over time
SELECT
  STRING(TIMESTAMP_TRUNC(ingestion_time, MINUTE)) AS time,
  zone_id,
  SUM(INT64(entity["count"])) AS vehicle_count
FROM flattened, UNNEST(flattened.entities) AS entity
WHERE JSON_VALUE(entity['entity']['labelString']) = 'Vehicle'
GROUP BY time, zone_id

In the above queries, you can change "Vehicle" to "Person" to count people instead.

This codelab shows the sample data and visualization for Query 1 only; you can follow a similar process for Query 2.
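You can also run these queries with the bq command-line tool instead of the console. The file name query1.sql is a hypothetical example; save either query to a local file first, with PROJ_ID and APP_ID filled in:

```shell
# Run a saved query using standard SQL and print the results.
bq query --nouse_legacy_sql < query1.sql
```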


Click Explore data in the menu on the right and select Explore with Looker Studio.


In the Dimension pane, add time and change its type to Date & Time. In Breakdown dimension, add line_id.

The resulting graph shows the count of vehicles/people crossing each line per minute.

The dark blue and light blue bars indicate the two different line IDs.

10. Congratulations

Congratulations, you finished the lab!

Clean up

To avoid incurring charges to your Google Cloud account for the resources used in this tutorial, either delete the project that contains the resources, or keep the project and delete the individual resources.

Delete the project

Delete individual resources

Resources

https://cloud.google.com/vision-ai/docs/overview

https://cloud.google.com/vision-ai/docs/occupancy-count-tutorial
